From patchwork Wed Jun 26 08:53:23 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13712427
From: Kefeng Wang
To: ,
Cc: Tony Luck, Miaohe Lin, , Matthew Wilcox, David Hildenbrand,
    Muchun Song, Benjamin LaHaise, , Jiaqi Yan,
    Hugh Dickins, Vishal Moola, Alistair Popple, Jane Chu,
    Oscar Salvador, Lance Yang, Kefeng Wang
Subject: [PATCH v5 1/6] mm: move memory_failure_queue() into copy_mc_[user]_highpage()
Date: Wed, 26 Jun 2024 16:53:23 +0800
Message-ID: <20240626085328.608006-2-wangkefeng.wang@huawei.com>
In-Reply-To: <20240626085328.608006-1-wangkefeng.wang@huawei.com>
References: <20240626085328.608006-1-wangkefeng.wang@huawei.com>

There is a memory_failure_queue() call after copy_mc_[user]_highpage() in
its callers, e.g. the CoW and KSM page-copy paths, where it marks the
source page as hardware-poisoned and unmaps it from other tasks. The
upcoming poison recovery for folio migration will need to do the same
thing, so move memory_failure_queue() into copy_mc_[user]_highpage()
instead of duplicating it in every caller. This also improves the
handling of poisoned pages in khugepaged.
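For illustration only (not part of this patch), a minimal sketch of the
caller pattern this change converges on; the helper name and context are
hypothetical, only copy_mc_user_highpage() is taken from the patch:

/*
 * Hypothetical caller sketch: with memory_failure_queue() moved into
 * copy_mc_user_highpage(), a caller only propagates the error and no
 * longer queues the poisoned source pfn itself.
 */
static int copy_src_page_mc(struct page *dst, struct page *src,
			    unsigned long addr, struct vm_area_struct *vma)
{
	if (copy_mc_user_highpage(dst, src, addr, vma))
		return -EHWPOISON;
	return 0;
}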
Reviewed-by: Jane Chu
Reviewed-by: Miaohe Lin
Signed-off-by: Kefeng Wang
---
 include/linux/highmem.h |  6 ++++++
 mm/ksm.c                |  1 -
 mm/memory.c             | 12 +++---------
 3 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index fa6891e06316..930a591b9b61 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -352,6 +352,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
+	if (ret)
+		memory_failure_queue(page_to_pfn(from), 0);
+
 	return ret;
 }
 
@@ -368,6 +371,9 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
+	if (ret)
+		memory_failure_queue(page_to_pfn(from), 0);
+
 	return ret;
 }
 #else
diff --git a/mm/ksm.c b/mm/ksm.c
index b9a46365b830..df6bae3a5a2c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2998,7 +2998,6 @@ struct folio *ksm_might_need_to_copy(struct folio *folio,
 		if (copy_mc_user_highpage(folio_page(new_folio, 0), page,
 								addr, vma)) {
 			folio_put(new_folio);
-			memory_failure_queue(folio_pfn(folio), 0);
 			return ERR_PTR(-EHWPOISON);
 		}
 		folio_set_dirty(new_folio);
diff --git a/mm/memory.c b/mm/memory.c
index d4f0e3df68bc..0a769f34bbb2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3022,10 +3022,8 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 	unsigned long addr = vmf->address;
 
 	if (likely(src)) {
-		if (copy_mc_user_highpage(dst, src, addr, vma)) {
-			memory_failure_queue(page_to_pfn(src), 0);
+		if (copy_mc_user_highpage(dst, src, addr, vma))
 			return -EHWPOISON;
-		}
 		return 0;
 	}
 
@@ -6492,10 +6490,8 @@ static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
 		cond_resched();
 
 		if (copy_mc_user_highpage(dst_page, src_page,
-					  addr + i*PAGE_SIZE, vma)) {
-			memory_failure_queue(page_to_pfn(src_page), 0);
+					  addr + i*PAGE_SIZE, vma))
 			return -EHWPOISON;
-		}
 	}
 	return 0;
 }
@@ -6512,10 +6508,8 @@ static int copy_subpage(unsigned long addr, int idx, void *arg)
 	struct page *dst = folio_page(copy_arg->dst, idx);
 	struct page *src = folio_page(copy_arg->src, idx);
 
-	if (copy_mc_user_highpage(dst, src, addr, copy_arg->vma)) {
-		memory_failure_queue(page_to_pfn(src), 0);
+	if (copy_mc_user_highpage(dst, src, addr, copy_arg->vma))
 		return -EHWPOISON;
-	}
 	return 0;
 }
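As a rough sketch of how the upcoming migrate-folio poison recovery
mentioned in the commit message might rely on this change (an assumption,
not code from this series; folio_copy_mc() is a hypothetical name):

/*
 * Hypothetical sketch: a machine-check aware folio copy for migration.
 * copy_mc_highpage() now queues the poisoned source pfn itself, so the
 * caller only has to propagate -EHWPOISON.
 */
static int folio_copy_mc(struct folio *dst, struct folio *src)
{
	long i, nr = folio_nr_pages(src);

	for (i = 0; i < nr; i++) {
		if (copy_mc_highpage(folio_page(dst, i), folio_page(src, i)))
			return -EHWPOISON;
		cond_resched();
	}
	return 0;
}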