From patchwork Fri Dec 9 02:10:41 2022
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13069138
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: linux-mm@kvack.org
Subject: [PATCH] mm: hwpoison: support recovery from ksm_might_need_to_copy()
Date: Fri, 9 Dec 2022 10:10:41 +0800
Message-ID: <20221209021041.192835-2-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To:
 <20221209021041.192835-1-wangkefeng.wang@huawei.com>
References: <20221209021041.192835-1-wangkefeng.wang@huawei.com>
MIME-Version: 1.0

When the kernel copies a page in ksm_might_need_to_copy() but runs into
an uncorrectable error, it will crash, since the poisoned page is
consumed by the kernel. This is similar to copy-on-write poison
recovery: when an error is detected during the page copy, return
VM_FAULT_HWPOISON, which helps us avoid a system crash.

Note that memory failure on a KSM page is skipped, but
memory_failure_queue() is still called, to stay consistent with the
general memory-failure process.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/ksm.c      | 8 ++++++--
 mm/memory.c   | 3 +++
 mm/swapfile.c | 2 +-
 3 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index f1e06b1d47f3..356e93b85287 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2629,8 +2629,12 @@ struct page *ksm_might_need_to_copy(struct page *page,
 		new_page = NULL;
 	}
 	if (new_page) {
-		copy_user_highpage(new_page, page, address, vma);
-
+		if (copy_mc_user_highpage(new_page, page, address, vma)) {
+			put_page(new_page);
+			new_page = ERR_PTR(-EHWPOISON);
+			memory_failure_queue(page_to_pfn(page), 0);
+			return new_page;
+		}
 		SetPageDirty(new_page);
 		__SetPageUptodate(new_page);
 		__SetPageLocked(new_page);
diff --git a/mm/memory.c b/mm/memory.c
index 2615fa615be4..bb7b35e42297 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3840,6 +3840,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			if (unlikely(!page)) {
 				ret = VM_FAULT_OOM;
 				goto out_page;
+			} else if (unlikely(PTR_ERR(page) == -EHWPOISON)) {
+				ret = VM_FAULT_HWPOISON;
+				goto out_page;
 			}
 			folio = page_folio(page);
 
diff --git a/mm/swapfile.c b/mm/swapfile.c
index f670ffb7df7e..763ff6a8a576 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1767,7 +1767,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 
 	swapcache = page;
 	page = ksm_might_need_to_copy(page, vma, addr);
-	if (unlikely(!page))
+	if (IS_ERR_OR_NULL(page))
 		return -ENOMEM;
 
 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
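
For readers less familiar with the kernel's ERR_PTR convention, the
standalone C sketch below models the error-propagation pattern this
patch relies on: the machine-check-safe copy reports poison via a
nonzero return, the copier converts that into ERR_PTR(-EHWPOISON), and
each caller translates the error pointer into its own failure code. All
names here (copy_mc_page, might_need_to_copy, fault_path, unuse_path)
and the simplified ERR_PTR/PTR_ERR/IS_ERR helpers are hypothetical
userspace stand-ins for copy_mc_user_highpage(),
ksm_might_need_to_copy(), do_swap_page() and unuse_pte(); only the
control flow mirrors the patch, not the kernel implementation.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#ifndef EHWPOISON
#define EHWPOISON 133			/* Linux value on most arches */
#endif

#define VM_FAULT_OOM      0x1u
#define VM_FAULT_HWPOISON 0x2u

/* Userspace models of ERR_PTR()/PTR_ERR()/IS_ERR() from <linux/err.h>:
 * small negative errno values are encoded in the pointer itself. */
static inline void *ERR_PTR(long err) { return (void *)err; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-4095;
}
static inline int IS_ERR_OR_NULL(const void *ptr)
{
	return !ptr || IS_ERR(ptr);
}

/* Stand-in for copy_mc_user_highpage(): nonzero means the
 * machine-check-safe copy hit poisoned memory and was aborted. */
static int copy_mc_page(char *dst, const char *src, int poisoned)
{
	(void)dst; (void)src;		/* real code copies PAGE_SIZE bytes */
	return poisoned ? -1 : 0;
}

/* Mirrors the patched ksm_might_need_to_copy(): NULL would mean OOM,
 * ERR_PTR(-EHWPOISON) means the copy source was poisoned. */
static void *might_need_to_copy(char *page, int poisoned)
{
	static char new_page[64];

	if (copy_mc_page(new_page, page, poisoned))
		return ERR_PTR(-EHWPOISON);	/* caller must not map it */
	return new_page;
}

/* Mirrors the do_swap_page() hunk: translate the error pointer into a
 * fault code instead of letting the kernel consume the poisoned page. */
static unsigned int fault_path(char *page, int poisoned)
{
	void *copy = might_need_to_copy(page, poisoned);

	if (!copy)
		return VM_FAULT_OOM;
	else if (PTR_ERR(copy) == -EHWPOISON)
		return VM_FAULT_HWPOISON;
	return 0;			/* success */
}

/* Mirrors the unuse_pte() hunk: swapoff treats both NULL and an error
 * pointer as "cannot proceed". */
static int unuse_path(char *page, int poisoned)
{
	void *copy = might_need_to_copy(page, poisoned);

	if (IS_ERR_OR_NULL(copy))
		return -ENOMEM;
	return 0;
}

int main(void)
{
	char page[64] = "data";

	printf("fault path, clean    -> %#x\n", fault_path(page, 0));
	printf("fault path, poisoned -> %#x\n", fault_path(page, 1));
	printf("swapoff,    poisoned -> %d\n", unuse_path(page, 1));
	return 0;
}

One design point the sketch makes visible: because the poison case
returns a distinguishable ERR_PTR rather than NULL, do_swap_page() can
report VM_FAULT_HWPOISON to the faulting task while existing callers
that only check for NULL must be updated (hence the IS_ERR_OR_NULL()
change in unuse_pte()).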