From patchwork Thu Jun 13 10:53:43 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13696638
From: Kefeng Wang
To: Andrew Morton
CC: Muchun Song, "Matthew Wilcox (Oracle)", David Hildenbrand, Kefeng Wang
Subject: [PATCH 1/2] mm: convert clear_huge_page() to clear_large_folio()
Date: Thu, 13 Jun 2024 18:53:43 +0800
Message-ID: <20240613105344.2876119-1-wangkefeng.wang@huawei.com>

Replace clear_huge_page() with clear_large_folio(), and take a folio
instead of a page. Get the number of pages directly via folio_nr_pages(),
which removes the pages_per_huge_page argument.
Signed-off-by: Kefeng Wang
---
 fs/hugetlbfs/inode.c |  2 +-
 include/linux/mm.h   |  4 +---
 mm/huge_memory.c     |  4 ++--
 mm/hugetlb.c         |  3 +--
 mm/memory.c          | 21 +++++++++------------
 5 files changed, 14 insertions(+), 20 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 986c1df63361..0e71ee8fee4a 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -893,7 +893,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
 			error = PTR_ERR(folio);
 			goto out;
 		}
-		clear_huge_page(&folio->page, addr, pages_per_huge_page(h));
+		clear_large_folio(folio, addr);
 		__folio_mark_uptodate(folio);
 		error = hugetlb_add_to_page_cache(folio, mapping, index);
 		if (unlikely(error)) {
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 106bb0310352..4c5b20ee1106 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4071,9 +4071,7 @@ enum mf_action_page_type {
 };

 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
-extern void clear_huge_page(struct page *page,
-			    unsigned long addr_hint,
-			    unsigned int pages_per_huge_page);
+void clear_large_folio(struct folio *folio, unsigned long addr_hint);
 int copy_user_large_folio(struct folio *dst, struct folio *src,
 			  unsigned long addr_hint,
 			  struct vm_area_struct *vma);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f409ea9fcc18..0a33eda80790 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -943,10 +943,10 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 		goto release;
 	}

-	clear_huge_page(page, vmf->address, HPAGE_PMD_NR);
+	clear_large_folio(folio, vmf->address);
 	/*
 	 * The memory barrier inside __folio_mark_uptodate makes sure that
-	 * clear_huge_page writes become visible before the set_pmd_at()
+	 * clear_large_folio writes become visible before the set_pmd_at()
 	 * write.
 	 */
 	__folio_mark_uptodate(folio);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3518321f6598..99d8cd0f7f11 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6296,8 +6296,7 @@ static vm_fault_t hugetlb_no_page(struct address_space *mapping,
 			ret = 0;
 			goto out;
 		}
-		clear_huge_page(&folio->page, vmf->real_address,
-				pages_per_huge_page(h));
+		clear_large_folio(folio, vmf->real_address);
 		__folio_mark_uptodate(folio);
 		new_folio = true;
diff --git a/mm/memory.c b/mm/memory.c
index 3f11664590d2..6ef84cd0b2bf 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4488,7 +4488,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 				goto next;
 			}
 			folio_throttle_swaprate(folio, gfp);
-			clear_huge_page(&folio->page, vmf->address, 1 << order);
+			clear_large_folio(folio, vmf->address);
 			return folio;
 		}
 next:
@@ -6419,41 +6419,38 @@ static inline int process_huge_page(
 	return 0;
 }

-static void clear_gigantic_page(struct page *page,
-				unsigned long addr,
+static void clear_gigantic_page(struct folio *folio, unsigned long addr,
 				unsigned int pages_per_huge_page)
 {
 	int i;
-	struct page *p;

 	might_sleep();
 	for (i = 0; i < pages_per_huge_page; i++) {
-		p = nth_page(page, i);
 		cond_resched();
-		clear_user_highpage(p, addr + i * PAGE_SIZE);
+		clear_user_highpage(folio_page(folio, i), addr + i * PAGE_SIZE);
 	}
 }

 static int clear_subpage(unsigned long addr, int idx, void *arg)
 {
-	struct page *page = arg;
+	struct folio *folio = arg;

-	clear_user_highpage(nth_page(page, idx), addr);
+	clear_user_highpage(folio_page(folio, idx), addr);
 	return 0;
 }

-void clear_huge_page(struct page *page,
-		     unsigned long addr_hint, unsigned int pages_per_huge_page)
+void clear_large_folio(struct folio *folio, unsigned long addr_hint)
 {
+	unsigned int pages_per_huge_page = folio_nr_pages(folio);
 	unsigned long addr = addr_hint &
 		~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);

 	if (unlikely(pages_per_huge_page > MAX_ORDER_NR_PAGES)) {
-		clear_gigantic_page(page, addr, pages_per_huge_page);
+		clear_gigantic_page(folio, addr, pages_per_huge_page);
 		return;
 	}

-	process_huge_page(addr_hint, pages_per_huge_page, clear_subpage, page);
+	process_huge_page(addr_hint, pages_per_huge_page, clear_subpage, folio);
 }

 static int copy_user_gigantic_page(struct folio *dst, struct folio *src,

From patchwork Thu Jun 13 10:53:44 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13696639
From: Kefeng Wang
To: Andrew Morton
CC: Muchun Song, "Matthew Wilcox (Oracle)", David Hildenbrand, Kefeng Wang
Subject: [PATCH 2/2] mm: memory: use folio in struct copy_subpage_arg
Date: Thu, 13 Jun 2024 18:53:44 +0800
Message-ID: <20240613105344.2876119-2-wangkefeng.wang@huawei.com>
In-Reply-To: <20240613105344.2876119-1-wangkefeng.wang@huawei.com>
References: <20240613105344.2876119-1-wangkefeng.wang@huawei.com>

Directly use folio in struct copy_subpage_arg.
Signed-off-by: Kefeng Wang
Acked-by: David Hildenbrand
---
 mm/memory.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 6ef84cd0b2bf..ca44da80fd47 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6475,16 +6475,16 @@ static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
 }

 struct copy_subpage_arg {
-	struct page *dst;
-	struct page *src;
+	struct folio *dst;
+	struct folio *src;
 	struct vm_area_struct *vma;
 };

 static int copy_subpage(unsigned long addr, int idx, void *arg)
 {
 	struct copy_subpage_arg *copy_arg = arg;
-	struct page *dst = nth_page(copy_arg->dst, idx);
-	struct page *src = nth_page(copy_arg->src, idx);
+	struct page *dst = folio_page(copy_arg->dst, idx);
+	struct page *src = folio_page(copy_arg->src, idx);

 	if (copy_mc_user_highpage(dst, src, addr, copy_arg->vma))
 		return -EHWPOISON;
@@ -6498,8 +6498,8 @@ int copy_user_large_folio(struct folio *dst, struct folio *src,
 	unsigned long addr = addr_hint &
 		~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);
 	struct copy_subpage_arg arg = {
-		.dst = &dst->page,
-		.src = &src->page,
+		.dst = dst,
+		.src = src,
 		.vma = vma,
 	};