From patchwork Thu Jun 13 10:53:43 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13696638
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
CC: Muchun Song, "Matthew Wilcox (Oracle)", David Hildenbrand, Kefeng Wang
Subject: [PATCH 1/2] mm: convert clear_huge_page() to clear_large_folio()
Date: Thu, 13 Jun 2024 18:53:43 +0800
Message-ID: <20240613105344.2876119-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0

Replace clear_huge_page() with clear_large_folio(), which takes a folio
instead of a page. The number of pages is obtained directly via
folio_nr_pages(), so the pages_per_huge_page argument can be dropped.
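As a caller-side sketch of the conversion (illustrative only; the
hugetlbfs fallocate hunk below shows the actual change), the old and
new calls look like this:

	/* Before: the caller passes the head page plus an explicit
	 * page count.
	 */
	clear_huge_page(&folio->page, addr, pages_per_huge_page(h));

	/* After: the folio knows its own size, so only the folio and
	 * the address hint are needed; clear_large_folio() derives the
	 * page count internally via folio_nr_pages().
	 */
	clear_large_folio(folio, addr);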
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 fs/hugetlbfs/inode.c |  2 +-
 include/linux/mm.h   |  4 +---
 mm/huge_memory.c     |  4 ++--
 mm/hugetlb.c         |  3 +--
 mm/memory.c          | 21 +++++++++------------
 5 files changed, 14 insertions(+), 20 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 986c1df63361..0e71ee8fee4a 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -893,7 +893,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
 			error = PTR_ERR(folio);
 			goto out;
 		}
-		clear_huge_page(&folio->page, addr, pages_per_huge_page(h));
+		clear_large_folio(folio, addr);
 		__folio_mark_uptodate(folio);
 		error = hugetlb_add_to_page_cache(folio, mapping, index);
 		if (unlikely(error)) {
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 106bb0310352..4c5b20ee1106 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4071,9 +4071,7 @@ enum mf_action_page_type {
 };
 
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
-extern void clear_huge_page(struct page *page,
-			    unsigned long addr_hint,
-			    unsigned int pages_per_huge_page);
+void clear_large_folio(struct folio *folio, unsigned long addr_hint);
 int copy_user_large_folio(struct folio *dst, struct folio *src,
 			  unsigned long addr_hint,
 			  struct vm_area_struct *vma);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f409ea9fcc18..0a33eda80790 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -943,10 +943,10 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 		goto release;
 	}
 
-	clear_huge_page(page, vmf->address, HPAGE_PMD_NR);
+	clear_large_folio(folio, vmf->address);
 	/*
 	 * The memory barrier inside __folio_mark_uptodate makes sure that
-	 * clear_huge_page writes become visible before the set_pmd_at()
+	 * clear_large_folio writes become visible before the set_pmd_at()
 	 * write.
 	 */
 	__folio_mark_uptodate(folio);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3518321f6598..99d8cd0f7f11 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6296,8 +6296,7 @@ static vm_fault_t hugetlb_no_page(struct address_space *mapping,
 			ret = 0;
 			goto out;
 		}
-		clear_huge_page(&folio->page, vmf->real_address,
-				pages_per_huge_page(h));
+		clear_large_folio(folio, vmf->real_address);
 		__folio_mark_uptodate(folio);
 		new_folio = true;
 
diff --git a/mm/memory.c b/mm/memory.c
index 3f11664590d2..6ef84cd0b2bf 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4488,7 +4488,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 				goto next;
 			}
 			folio_throttle_swaprate(folio, gfp);
-			clear_huge_page(&folio->page, vmf->address, 1 << order);
+			clear_large_folio(folio, vmf->address);
 			return folio;
 		}
 next:
@@ -6419,41 +6419,38 @@ static inline int process_huge_page(
 	return 0;
 }
 
-static void clear_gigantic_page(struct page *page,
-				unsigned long addr,
+static void clear_gigantic_page(struct folio *folio, unsigned long addr,
 				unsigned int pages_per_huge_page)
 {
 	int i;
-	struct page *p;
 
 	might_sleep();
 	for (i = 0; i < pages_per_huge_page; i++) {
-		p = nth_page(page, i);
 		cond_resched();
-		clear_user_highpage(p, addr + i * PAGE_SIZE);
+		clear_user_highpage(folio_page(folio, i), addr + i * PAGE_SIZE);
 	}
 }
 
 static int clear_subpage(unsigned long addr, int idx, void *arg)
 {
-	struct page *page = arg;
+	struct folio *folio = arg;
 
-	clear_user_highpage(nth_page(page, idx), addr);
+	clear_user_highpage(folio_page(folio, idx), addr);
 	return 0;
 }
 
-void clear_huge_page(struct page *page,
-		     unsigned long addr_hint, unsigned int pages_per_huge_page)
+void clear_large_folio(struct folio *folio, unsigned long addr_hint)
 {
+	unsigned int pages_per_huge_page = folio_nr_pages(folio);
 	unsigned long addr = addr_hint &
 		~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);
 
 	if (unlikely(pages_per_huge_page > MAX_ORDER_NR_PAGES)) {
-		clear_gigantic_page(page, addr, pages_per_huge_page);
+		clear_gigantic_page(folio, addr, pages_per_huge_page);
 		return;
 	}
 
-	process_huge_page(addr_hint, pages_per_huge_page, clear_subpage, page);
+	process_huge_page(addr_hint, pages_per_huge_page, clear_subpage, folio);
 }
 
 static int copy_user_gigantic_page(struct folio *dst, struct folio *src,