From patchwork Thu Mar 6 22:44:50 2025
X-Patchwork-Submitter: Luiz Capitulino
X-Patchwork-Id: 14005453
From: Luiz Capitulino
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, david@redhat.com, yuzhao@google.com, pasha.tatashin@soleen.com
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, muchun.song@linux.dev, luizcap@redhat.com
Subject: [PATCH v3 1/3] mm: page_ext: add an iteration API for page extensions
Date: Thu, 6 Mar 2025 17:44:50 -0500
The page extension implementation assumes that all page extensions of a given page order are stored in the same memory section. The function page_ext_next() relies on this assumption by adding an offset to the current object to return the next adjacent page extension. This behavior works as expected for flatmem but fails for sparsemem when using 1G pages.
The commit cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP for gigantic folios") exposes this issue, making it possible to trigger a crash when using the page_owner or page_table_check page extensions. The problem is that for 1G pages, the page extensions may span memory section boundaries and be stored in different memory sections. This issue was not visible before commit cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP for gigantic folios") because alloc_contig_pages() never passed more than MAX_PAGE_ORDER to post_alloc_hook(). However, the series introducing the mentioned commit changed this behavior, allowing the full 1G page order to be passed.

Reproducer:

 1. Build the kernel with CONFIG_SPARSEMEM=y and page extension support
    (page_owner and/or page_table_check)
 2. Pass 'default_hugepagesz=1G page_owner=on' in the kernel command-line
 3. Reserve one 1G page at run-time, this should crash (backtrace below)

To address this issue, this commit introduces a new API for iterating through page extensions. The main iteration macro is for_each_page_ext() and it must be called with the RCU read lock taken. Here's a usage example:

"""
struct page_ext_iter iter;
struct page_ext *page_ext;

...

rcu_read_lock();
for_each_page_ext(page, 1 << order, page_ext, iter) {
	struct my_page_ext *obj = get_my_page_ext_obj(page_ext);
	...
}
rcu_read_unlock();
"""

The loop construct uses page_ext_iter_next(), which checks whether we have crossed a section boundary in the iteration. If we have, page_ext_iter_next() retrieves the next page_ext object from the new section.

Thanks to David Hildenbrand for helping identify the root cause and providing suggestions on how to fix and optimize the solution (final implementation and bugs are all mine, though).
Lastly, here's the backtrace; without KASAN you can get random crashes:

[   76.052526] BUG: KASAN: slab-out-of-bounds in __update_page_owner_handle+0x238/0x298
[   76.060283] Write of size 4 at addr ffff07ff96240038 by task tee/3598
[   76.066714]
[   76.068203] CPU: 88 UID: 0 PID: 3598 Comm: tee Kdump: loaded Not tainted 6.13.0-rep1 #3
[   76.076202] Hardware name: WIWYNN Mt.Jade Server System B81.030Z1.0007/Mt.Jade Motherboard, BIOS 2.10.20220810 (SCP: 2.10.20220810) 2022/08/10
[   76.088972] Call trace:
[   76.091411]  show_stack+0x20/0x38 (C)
[   76.095073]  dump_stack_lvl+0x80/0xf8
[   76.098733]  print_address_description.constprop.0+0x88/0x398
[   76.104476]  print_report+0xa8/0x278
[   76.108041]  kasan_report+0xa8/0xf8
[   76.111520]  __asan_report_store4_noabort+0x20/0x30
[   76.116391]  __update_page_owner_handle+0x238/0x298
[   76.121259]  __set_page_owner+0xdc/0x140
[   76.125173]  post_alloc_hook+0x190/0x1d8
[   76.129090]  alloc_contig_range_noprof+0x54c/0x890
[   76.133874]  alloc_contig_pages_noprof+0x35c/0x4a8
[   76.138656]  alloc_gigantic_folio.isra.0+0x2c0/0x368
[   76.143616]  only_alloc_fresh_hugetlb_folio.isra.0+0x24/0x150
[   76.149353]  alloc_pool_huge_folio+0x11c/0x1f8
[   76.153787]  set_max_huge_pages+0x364/0xca8
[   76.157961]  __nr_hugepages_store_common+0xb0/0x1a0
[   76.162829]  nr_hugepages_store+0x108/0x118
[   76.167003]  kobj_attr_store+0x3c/0x70
[   76.170745]  sysfs_kf_write+0xfc/0x188
[   76.174492]  kernfs_fop_write_iter+0x274/0x3e0
[   76.178927]  vfs_write+0x64c/0x8e0
[   76.182323]  ksys_write+0xf8/0x1f0
[   76.185716]  __arm64_sys_write+0x74/0xb0
[   76.189630]  invoke_syscall.constprop.0+0xd8/0x1e0
[   76.194412]  do_el0_svc+0x164/0x1e0
[   76.197891]  el0_svc+0x40/0xe0
[   76.200939]  el0t_64_sync_handler+0x144/0x168
[   76.205287]  el0t_64_sync+0x1ac/0x1b0

Fixes: cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP for gigantic folios")
Signed-off-by: Luiz Capitulino
Acked-by: David Hildenbrand
---
 include/linux/page_ext.h | 93 ++++++++++++++++++++++++++++++++++++++++
 mm/page_ext.c            | 13 ++++++
 2 files changed, 106 insertions(+)

diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index e4b48a0dda244..76c817162d2fb 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -3,6 +3,7 @@
 #define __LINUX_PAGE_EXT_H
 
 #include <linux/types.h>
+#include <linux/mmzone.h>
 #include <linux/stacktrace.h>
 
 struct pglist_data;
@@ -69,16 +70,31 @@ extern void page_ext_init(void);
 static inline void page_ext_init_flatmem_late(void)
 {
 }
+
+static inline bool page_ext_iter_next_fast_possible(unsigned long next_pfn)
+{
+	/*
+	 * page_ext is allocated per memory section. Once we cross a
+	 * memory section, we have to fetch the new pointer.
+	 */
+	return next_pfn % PAGES_PER_SECTION;
+}
 #else
 extern void page_ext_init_flatmem(void);
 extern void page_ext_init_flatmem_late(void);
 static inline void page_ext_init(void)
 {
 }
+
+static inline bool page_ext_iter_next_fast_possible(unsigned long next_pfn)
+{
+	return true;
+}
 #endif
 
 extern struct page_ext *page_ext_get(const struct page *page);
 extern void page_ext_put(struct page_ext *page_ext);
+extern struct page_ext *page_ext_lookup(unsigned long pfn);
 
 static inline void *page_ext_data(struct page_ext *page_ext,
 				  struct page_ext_operations *ops)
@@ -93,6 +109,83 @@ static inline struct page_ext *page_ext_next(struct page_ext *curr)
 	return next;
 }
 
+struct page_ext_iter {
+	unsigned long index;
+	unsigned long start_pfn;
+	struct page_ext *page_ext;
+};
+
+/**
+ * page_ext_iter_begin() - Prepare for iterating through page extensions.
+ * @iter: page extension iterator.
+ * @pfn: PFN of the page we're interested in.
+ *
+ * Must be called with RCU read lock taken.
+ *
+ * Return: NULL if no page_ext exists for this page.
+ */
+static inline struct page_ext *page_ext_iter_begin(struct page_ext_iter *iter,
+						   unsigned long pfn)
+{
+	iter->index = 0;
+	iter->start_pfn = pfn;
+	iter->page_ext = page_ext_lookup(pfn);
+
+	return iter->page_ext;
+}
+
+/**
+ * page_ext_iter_next() - Get next page extension
+ * @iter: page extension iterator.
+ *
+ * Must be called with RCU read lock taken.
+ *
+ * Return: NULL if no next page_ext exists.
+ */
+static inline struct page_ext *page_ext_iter_next(struct page_ext_iter *iter)
+{
+	unsigned long pfn;
+
+	if (WARN_ON_ONCE(!iter->page_ext))
+		return NULL;
+
+	iter->index++;
+	pfn = iter->start_pfn + iter->index;
+
+	if (page_ext_iter_next_fast_possible(pfn))
+		iter->page_ext = page_ext_next(iter->page_ext);
+	else
+		iter->page_ext = page_ext_lookup(pfn);
+
+	return iter->page_ext;
+}
+
+/**
+ * page_ext_iter_get() - Get current page extension
+ * @iter: page extension iterator.
+ *
+ * Return: NULL if no page_ext exists for this iterator.
+ */
+static inline struct page_ext *page_ext_iter_get(const struct page_ext_iter *iter)
+{
+	return iter->page_ext;
+}
+
+/**
+ * for_each_page_ext(): iterate through page_ext objects.
+ * @__page: the page we're interested in
+ * @__pgcount: how many pages to iterate through
+ * @__page_ext: struct page_ext pointer where the current page_ext
+ *              object is returned
+ * @__iter: struct page_ext_iter object (defined in the stack)
+ *
+ * IMPORTANT: must be called with RCU read lock taken.
+ */
+#define for_each_page_ext(__page, __pgcount, __page_ext, __iter)	\
+	for (__page_ext = page_ext_iter_begin(&__iter, page_to_pfn(__page));\
+	     __page_ext && __iter.index < __pgcount;			\
+	     __page_ext = page_ext_iter_next(&__iter))
+
 #else /* !CONFIG_PAGE_EXTENSION */
 struct page_ext;
diff --git a/mm/page_ext.c b/mm/page_ext.c
index 641d93f6af4c1..c351fdfe9e9a5 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -507,6 +507,19 @@ void __meminit pgdat_page_ext_init(struct pglist_data *pgdat)
 
 #endif
 
+/**
+ * page_ext_lookup() - Lookup a page extension for a PFN.
+ * @pfn: PFN of the page we're interested in.
+ *
+ * Must be called with RCU read lock taken and @pfn must be valid.
+ *
+ * Return: NULL if no page_ext exists for this page.
+ */
+struct page_ext *page_ext_lookup(unsigned long pfn)
+{
+	return lookup_page_ext(pfn_to_page(pfn));
+}
+
 /**
  * page_ext_get() - Get the extended information for a page.
  * @page: The page we're interested in.

From patchwork Thu Mar 6 22:44:51 2025
X-Patchwork-Submitter: Luiz Capitulino
X-Patchwork-Id: 14005455
From: Luiz Capitulino
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, david@redhat.com, yuzhao@google.com, pasha.tatashin@soleen.com
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, muchun.song@linux.dev, luizcap@redhat.com
Subject: [PATCH v3 2/3] mm: page_table_check: use new iteration API
Date: Thu, 6 Mar 2025 17:44:51 -0500
The page_ext_next() function assumes that page extension objects for a page order allocation always reside in the same memory section, which may not be true and could lead to crashes. Use the new page_ext iteration API instead.

Fixes: cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP for gigantic folios")
Acked-by: David Hildenbrand
Signed-off-by: Luiz Capitulino
---
 mm/page_table_check.c | 39 ++++++++++++---------------------------
 1 file changed, 12 insertions(+), 27 deletions(-)

diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 509c6ef8de400..e11bebf23e36f 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -62,24 +62,20 @@ static struct page_table_check *get_page_table_check(struct page_ext *page_ext)
  */
 static void page_table_check_clear(unsigned long pfn, unsigned long pgcnt)
 {
+	struct page_ext_iter iter;
 	struct page_ext *page_ext;
 	struct page *page;
-	unsigned long i;
 	bool anon;
 
 	if (!pfn_valid(pfn))
 		return;
 
 	page = pfn_to_page(pfn);
-	page_ext = page_ext_get(page);
-
-	if (!page_ext)
-		return;
-
 	BUG_ON(PageSlab(page));
 	anon = PageAnon(page);
 
-	for (i = 0; i < pgcnt; i++) {
+	rcu_read_lock();
+	for_each_page_ext(page, pgcnt, page_ext, iter) {
 		struct page_table_check *ptc = get_page_table_check(page_ext);
 
 		if (anon) {
@@ -89,9 +85,8 @@ static void page_table_check_clear(unsigned long pfn, unsigned long pgcnt)
 			BUG_ON(atomic_read(&ptc->anon_map_count));
 			BUG_ON(atomic_dec_return(&ptc->file_map_count) < 0);
 		}
-		page_ext = page_ext_next(page_ext);
 	}
-	page_ext_put(page_ext);
+	rcu_read_unlock();
 }
 
 /*
@@ -102,24 +97,20 @@ static void page_table_check_clear(unsigned long pfn, unsigned long pgcnt)
 static void page_table_check_set(unsigned long pfn, unsigned long pgcnt,
 				 bool rw)
 {
+	struct page_ext_iter iter;
 	struct page_ext *page_ext;
 	struct page *page;
-	unsigned long i;
 	bool anon;
 
 	if (!pfn_valid(pfn))
 		return;
 
 	page = pfn_to_page(pfn);
-	page_ext = page_ext_get(page);
-
-	if (!page_ext)
-		return;
-
 	BUG_ON(PageSlab(page));
 	anon = PageAnon(page);
 
-	for (i = 0; i < pgcnt; i++) {
+	rcu_read_lock();
+	for_each_page_ext(page, pgcnt, page_ext, iter) {
 		struct page_table_check *ptc = get_page_table_check(page_ext);
 
 		if (anon) {
@@ -129,9 +120,8 @@ static void page_table_check_set(unsigned long pfn, unsigned long pgcnt,
 			BUG_ON(atomic_read(&ptc->anon_map_count));
 			BUG_ON(atomic_inc_return(&ptc->file_map_count) < 0);
 		}
-		page_ext = page_ext_next(page_ext);
 	}
-	page_ext_put(page_ext);
+	rcu_read_unlock();
 }
 
 /*
@@ -140,24 +130,19 @@ static void page_table_check_set(unsigned long pfn, unsigned long pgcnt,
  */
 void __page_table_check_zero(struct page *page, unsigned int order)
 {
+	struct page_ext_iter iter;
 	struct page_ext *page_ext;
-	unsigned long i;
 
 	BUG_ON(PageSlab(page));
 
-	page_ext = page_ext_get(page);
-
-	if (!page_ext)
-		return;
-
-	for (i = 0; i < (1ul << order); i++) {
+	rcu_read_lock();
+	for_each_page_ext(page, 1 << order, page_ext, iter) {
 		struct page_table_check *ptc = get_page_table_check(page_ext);
 
 		BUG_ON(atomic_read(&ptc->anon_map_count));
 		BUG_ON(atomic_read(&ptc->file_map_count));
-		page_ext = page_ext_next(page_ext);
 	}
-	page_ext_put(page_ext);
+	rcu_read_unlock();
 }
 
 void __page_table_check_pte_clear(struct mm_struct *mm, pte_t pte)

From patchwork Thu Mar 6 22:44:52 2025
X-Patchwork-Submitter: Luiz Capitulino
X-Patchwork-Id: 14005456
From: Luiz Capitulino
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, david@redhat.com, yuzhao@google.com, pasha.tatashin@soleen.com
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, muchun.song@linux.dev, luizcap@redhat.com
Subject: [PATCH v3 3/3] mm: page_owner: use new iteration API
Date: Thu, 6 Mar 2025 17:44:52 -0500
Message-ID: <93c80b040960fa2ebab4a9729073f77a30649862.1741301089.git.luizcap@redhat.com>
The page_ext_next() function assumes that page extension objects for a page order allocation always reside in the same memory section, which may not be true and could lead to crashes.
Use the new page_ext iteration API instead.

Fixes: cf54f310d0d3 ("mm/hugetlb: use __GFP_COMP for gigantic folios")
Signed-off-by: Luiz Capitulino
Acked-by: David Hildenbrand
---
 mm/page_owner.c | 84 +++++++++++++++++++++++--------------------------
 1 file changed, 39 insertions(+), 45 deletions(-)

diff --git a/mm/page_owner.c b/mm/page_owner.c
index 2d6360eaccbb6..65adc66582d82 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -229,17 +229,19 @@ static void dec_stack_record_count(depot_stack_handle_t handle,
 			handle);
 }
 
-static inline void __update_page_owner_handle(struct page_ext *page_ext,
+static inline void __update_page_owner_handle(struct page *page,
 					      depot_stack_handle_t handle,
 					      unsigned short order,
 					      gfp_t gfp_mask,
 					      short last_migrate_reason, u64 ts_nsec,
 					      pid_t pid, pid_t tgid, char *comm)
 {
-	int i;
+	struct page_ext_iter iter;
+	struct page_ext *page_ext;
 	struct page_owner *page_owner;
 
-	for (i = 0; i < (1 << order); i++) {
+	rcu_read_lock();
+	for_each_page_ext(page, 1 << order, page_ext, iter) {
 		page_owner = get_page_owner(page_ext);
 		page_owner->handle = handle;
 		page_owner->order = order;
@@ -252,20 +254,22 @@ static inline void __update_page_owner_handle(struct page_ext *page_ext,
 			sizeof(page_owner->comm));
 		__set_bit(PAGE_EXT_OWNER, &page_ext->flags);
 		__set_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags);
-		page_ext = page_ext_next(page_ext);
 	}
+	rcu_read_unlock();
 }
 
-static inline void __update_page_owner_free_handle(struct page_ext *page_ext,
+static inline void __update_page_owner_free_handle(struct page *page,
 						   depot_stack_handle_t handle,
 						   unsigned short order,
 						   pid_t pid, pid_t tgid,
 						   u64 free_ts_nsec)
 {
-	int i;
+	struct page_ext_iter iter;
+	struct page_ext *page_ext;
 	struct page_owner *page_owner;
 
-	for (i = 0; i < (1 << order); i++) {
+	rcu_read_lock();
+	for_each_page_ext(page, 1 << order, page_ext, iter) {
 		page_owner = get_page_owner(page_ext);
 		/* Only __reset_page_owner() wants to clear the bit */
 		if (handle) {
@@ -275,8 +279,8 @@ static inline void __update_page_owner_free_handle(struct page_ext *page_ext,
 		page_owner->free_ts_nsec = free_ts_nsec;
 		page_owner->free_pid = current->pid;
 		page_owner->free_tgid = current->tgid;
-		page_ext = page_ext_next(page_ext);
 	}
+	rcu_read_unlock();
 }
 
 void __reset_page_owner(struct page *page, unsigned short order)
@@ -293,11 +297,11 @@ void __reset_page_owner(struct page *page, unsigned short order)
 
 	page_owner = get_page_owner(page_ext);
 	alloc_handle = page_owner->handle;
+	page_ext_put(page_ext);
 
 	handle = save_stack(GFP_NOWAIT | __GFP_NOWARN);
-	__update_page_owner_free_handle(page_ext, handle, order, current->pid,
+	__update_page_owner_free_handle(page, handle, order, current->pid,
 					current->tgid, free_ts_nsec);
-	page_ext_put(page_ext);
 
 	if (alloc_handle != early_handle)
 		/*
@@ -313,19 +317,13 @@ void __reset_page_owner(struct page *page, unsigned short order)
 noinline void __set_page_owner(struct page *page, unsigned short order,
 			       gfp_t gfp_mask)
 {
-	struct page_ext *page_ext;
 	u64 ts_nsec = local_clock();
 	depot_stack_handle_t handle;
 
 	handle = save_stack(gfp_mask);
-
-	page_ext = page_ext_get(page);
-	if (unlikely(!page_ext))
-		return;
-	__update_page_owner_handle(page_ext, handle, order, gfp_mask, -1,
+	__update_page_owner_handle(page, handle, order, gfp_mask, -1,
 				   ts_nsec, current->pid, current->tgid,
 				   current->comm);
-	page_ext_put(page_ext);
 	inc_stack_record_count(handle, gfp_mask, 1 << order);
 }
 
@@ -344,44 +342,42 @@ void __set_page_owner_migrate_reason(struct page *page, int reason)
 
 void __split_page_owner(struct page *page, int old_order, int new_order)
 {
-	int i;
-	struct page_ext *page_ext = page_ext_get(page);
+	struct page_ext_iter iter;
+	struct page_ext *page_ext;
 	struct page_owner *page_owner;
 
-	if (unlikely(!page_ext))
-		return;
-
-	for (i = 0; i < (1 << old_order); i++) {
+	rcu_read_lock();
+	for_each_page_ext(page, 1 << old_order, page_ext, iter) {
 		page_owner = get_page_owner(page_ext);
 		page_owner->order = new_order;
-		page_ext = page_ext_next(page_ext);
 	}
-	page_ext_put(page_ext);
+	rcu_read_unlock();
 }
 
 void __folio_copy_owner(struct folio *newfolio, struct folio *old)
 {
-	int i;
-	struct page_ext *old_ext;
-	struct page_ext *new_ext;
+	struct page_ext *page_ext;
+	struct page_ext_iter iter;
 	struct page_owner *old_page_owner;
 	struct page_owner *new_page_owner;
 	depot_stack_handle_t migrate_handle;
 
-	old_ext = page_ext_get(&old->page);
-	if (unlikely(!old_ext))
+	page_ext = page_ext_get(&old->page);
+	if (unlikely(!page_ext))
 		return;
 
-	new_ext = page_ext_get(&newfolio->page);
-	if (unlikely(!new_ext)) {
-		page_ext_put(old_ext);
+	old_page_owner = get_page_owner(page_ext);
+	page_ext_put(page_ext);
+
+	page_ext = page_ext_get(&newfolio->page);
+	if (unlikely(!page_ext))
 		return;
-	}
 
-	old_page_owner = get_page_owner(old_ext);
-	new_page_owner = get_page_owner(new_ext);
+	new_page_owner = get_page_owner(page_ext);
+	page_ext_put(page_ext);
+
 	migrate_handle = new_page_owner->handle;
-	__update_page_owner_handle(new_ext, old_page_owner->handle,
+	__update_page_owner_handle(&newfolio->page, old_page_owner->handle,
 				   old_page_owner->order, old_page_owner->gfp_mask,
 				   old_page_owner->last_migrate_reason,
 				   old_page_owner->ts_nsec, old_page_owner->pid,
@@ -391,7 +387,7 @@ void __folio_copy_owner(struct folio *newfolio, struct folio *old)
 	 * will be freed after migration. Keep them until then as they may be
 	 * useful.
	 */
-	__update_page_owner_free_handle(new_ext, 0, old_page_owner->order,
+	__update_page_owner_free_handle(&newfolio->page, 0, old_page_owner->order,
 					old_page_owner->free_pid,
 					old_page_owner->free_tgid,
 					old_page_owner->free_ts_nsec);
@@ -400,14 +396,12 @@ void __folio_copy_owner(struct folio *newfolio, struct folio *old)
 	 * for the new one and the old folio otherwise there will be an imbalance
 	 * when subtracting those pages from the stack.
*/ - for (i = 0; i < (1 << new_page_owner->order); i++) { + rcu_read_lock(); + for_each_page_ext(&old->page, 1 << new_page_owner->order, page_ext, iter) { + old_page_owner = get_page_owner(page_ext); old_page_owner->handle = migrate_handle; - old_ext = page_ext_next(old_ext); - old_page_owner = get_page_owner(old_ext); } - - page_ext_put(new_ext); - page_ext_put(old_ext); + rcu_read_unlock(); } void pagetypeinfo_showmixedcount_print(struct seq_file *m, @@ -813,7 +807,7 @@ static void init_pages_in_zone(pg_data_t *pgdat, struct zone *zone) goto ext_put_continue; /* Found early allocated page */ - __update_page_owner_handle(page_ext, early_handle, 0, 0, + __update_page_owner_handle(page, early_handle, 0, 0, -1, local_clock(), current->pid, current->tgid, current->comm); count++;