From patchwork Sat Oct 5 20:01:12 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13823464
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH v2 1/7] mm: Convert page_to_pgoff() to page_pgoff()
Date: Sat, 5 Oct 2024 21:01:12 +0100
Message-ID: <20241005200121.3231142-2-willy@infradead.org>

Change the function signature to pass in the folio, as all three callers
have it.  This removes a reference to page->index, which we're trying to
get rid of.  Also add kernel-doc.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm.h      |  2 +-
 include/linux/pagemap.h | 31 +++++++++++++++++--------------
 mm/memory-failure.c     |  4 ++--
 mm/rmap.c               |  2 +-
 4 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ecf63d2b0582..664c01850c87 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1895,7 +1895,7 @@ static inline unsigned long page_to_section(const struct page *page)
  *
  * Return: The Page Frame Number of the first page in the folio.
  */
-static inline unsigned long folio_pfn(struct folio *folio)
+static inline unsigned long folio_pfn(const struct folio *folio)
 {
 	return page_to_pfn(&folio->page);
 }

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 68a5f1ff3301..bcf0865a38ae 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -1011,22 +1011,25 @@ static inline struct folio *read_mapping_folio(struct address_space *mapping,
 	return read_cache_folio(mapping, index, NULL, file);
 }

-/*
- * Get the offset in PAGE_SIZE (even for hugetlb pages).
+/**
+ * page_pgoff - Calculate the logical page offset of this page.
+ * @folio: The folio containing this page.
+ * @page: The page which we need the offset of.
+ *
+ * For file pages, this is the offset from the beginning of the file
+ * in units of PAGE_SIZE.  For anonymous pages, this is the offset from
+ * the beginning of the anon_vma in units of PAGE_SIZE.  This will
+ * return nonsense for KSM pages.
+ *
+ * Context: Caller must have a reference on the folio or otherwise
+ * prevent it from being split or freed.
+ *
+ * Return: The offset in units of PAGE_SIZE.
  */
-static inline pgoff_t page_to_pgoff(struct page *page)
+static inline pgoff_t page_pgoff(const struct folio *folio,
+		const struct page *page)
 {
-	struct page *head;
-
-	if (likely(!PageTransTail(page)))
-		return page->index;
-
-	head = compound_head(page);
-	/*
-	 * We don't initialize ->index for tail pages: calculate based on
-	 * head page
-	 */
-	return head->index + page - head;
+	return folio->index + folio_page_idx(folio, page);
 }

 /*

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 96ce31e5a203..58a3d80961a4 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -617,7 +617,7 @@ static void collect_procs_anon(struct folio *folio, struct page *page,
 	if (av == NULL)	/* Not actually mapped anymore */
 		return;

-	pgoff = page_to_pgoff(page);
+	pgoff = page_pgoff(folio, page);
 	rcu_read_lock();
 	for_each_process(tsk) {
 		struct vm_area_struct *vma;
@@ -653,7 +653,7 @@ static void collect_procs_file(struct folio *folio, struct page *page,

 	i_mmap_lock_read(mapping);
 	rcu_read_lock();
-	pgoff = page_to_pgoff(page);
+	pgoff = page_pgoff(folio, page);
 	for_each_process(tsk) {
 		struct task_struct *t = task_early_kill(tsk, force_early);
 		unsigned long addr;

diff --git a/mm/rmap.c b/mm/rmap.c
index a8797d1b3d49..3b11f8b6935d 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1280,7 +1280,7 @@ static void __page_check_anon_rmap(struct folio *folio, struct page *page,
 	 */
 	VM_BUG_ON_FOLIO(folio_anon_vma(folio)->root != vma->anon_vma->root,
 			folio);
-	VM_BUG_ON_PAGE(page_to_pgoff(page) != linear_page_index(vma, address),
+	VM_BUG_ON_PAGE(page_pgoff(folio, page) != linear_page_index(vma, address),
 		       page);
 }
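[Reader aid, not part of the patch] The arithmetic the new helper performs can be
modelled outside the kernel.  The sketch below is illustrative only: the struct and
function names are stand-ins rather than the real kernel definitions, and it assumes
the ordinary case where the pages of a folio are contiguous, so a page's position
inside the folio is what folio_page_idx() would return.

#include <stdio.h>

/* Toy model of a folio: the file offset of its first page, and its size. */
struct toy_folio {
	unsigned long index;	/* offset (in pages) of the folio's first page */
	unsigned long nr;	/* number of pages in the folio */
};

/* Models page_pgoff(): the folio's offset plus the page's index within it. */
static unsigned long toy_page_pgoff(const struct toy_folio *folio,
				    unsigned long idx_in_folio)
{
	return folio->index + idx_in_folio;
}

int main(void)
{
	struct toy_folio folio = { .index = 256, .nr = 16 };

	/* The 4th page of a 16-page folio that starts at file offset 256. */
	printf("pgoff = %lu\n", toy_page_pgoff(&folio, 3));	/* prints 259 */
	return 0;
}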
From patchwork Sat Oct 5 20:01:13 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13823461
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH v2 2/7] mm: Use page_pgoff() in more places
Date: Sat, 5 Oct 2024 21:01:13 +0100
Message-ID: <20241005200121.3231142-3-willy@infradead.org>
There are several places which currently open-code page_pgoff();
convert them to call it.

Signed-off-by: Matthew Wilcox (Oracle)
---
 kernel/futex/core.c  | 2 +-
 mm/page_vma_mapped.c | 3 +--
 mm/rmap.c            | 4 +---
 3 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 136768ae2637..342dc4dd328b 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -399,7 +399,7 @@ int get_futex_key(u32 __user *uaddr, unsigned int flags, union futex_key *key,
 		key->both.offset |= FUT_OFF_INODE; /* inode-based key */
 		key->shared.i_seq = get_inode_sequence_number(inode);
-		key->shared.pgoff = folio->index + folio_page_idx(folio, page);
+		key->shared.pgoff = page_pgoff(folio, page);
 		rcu_read_unlock();
 	}

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index ae5cc42aa208..ade3c6833587 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -328,7 +328,6 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
 {
 	struct folio *folio = page_folio(page);
-	pgoff_t pgoff = folio->index + folio_page_idx(folio, page);
 	struct page_vma_mapped_walk pvmw = {
 		.pfn = page_to_pfn(page),
 		.nr_pages = 1,
@@ -336,7 +335,7 @@ unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
 		.flags = PVMW_SYNC,
 	};

-	pvmw.address = vma_address(vma, pgoff, 1);
+	pvmw.address = vma_address(vma, page_pgoff(folio, page), 1);
 	if (pvmw.address == -EFAULT)
 		goto out;
 	if (!page_vma_mapped_walk(&pvmw))

diff --git a/mm/rmap.c b/mm/rmap.c
index 3b11f8b6935d..90df71c640bf 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -775,7 +775,6 @@ static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
 unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
 {
 	struct folio *folio = page_folio(page);
-	pgoff_t pgoff;

 	if (folio_test_anon(folio)) {
 		struct anon_vma *page__anon_vma = folio_anon_vma(folio);
 		/*
@@ -793,8 +792,7 @@ unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
 	}

 	/* The !page__anon_vma above handles KSM folios */
-	pgoff = folio->index + folio_page_idx(folio, page);
-	return vma_address(vma, pgoff, 1);
+	return vma_address(vma, page_pgoff(folio, page), 1);
 }

 /*
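[Reader aid, not part of the patch] The conversion in this patch is purely mechanical:
every open-coded computation of a page's offset becomes a call to the helper added in
patch 1.  A sketch of the before/after pattern, with made-up function names used only
for illustration and assuming the usual mm headers:

/* Before: callers open-coded the offset computation. */
static pgoff_t example_pgoff_open_coded(struct folio *folio, struct page *page)
{
	return folio->index + folio_page_idx(folio, page);
}

/* After: the same value comes from the helper, which keeps the
 * knowledge of folio->index in one place. */
static pgoff_t example_pgoff_helper(struct folio *folio, struct page *page)
{
	return page_pgoff(folio, page);
}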
From patchwork Sat Oct 5 20:01:14 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13823459
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH v2 3/7] mm: Renovate page_address_in_vma()
Date: Sat, 5 Oct 2024 21:01:14 +0100
Message-ID: <20241005200121.3231142-4-willy@infradead.org>

This function doesn't modify any of its arguments, so if we make a few
other functions take const pointers, we can make page_address_in_vma()
take const pointers too.  All of its callers have the containing folio
already, so pass that in as an argument instead of recalculating it.
Also add kernel-doc.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/rmap.h |  7 ++-----
 mm/internal.h        |  4 ++--
 mm/ksm.c             |  7 +++----
 mm/memory-failure.c  |  2 +-
 mm/mempolicy.c       |  2 +-
 mm/rmap.c            | 27 ++++++++++++++++++++-------
 mm/util.c            |  2 +-
 7 files changed, 30 insertions(+), 21 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index d5e93e44322e..78923015a2e8 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -728,11 +728,8 @@ page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw)
 }

 bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
-
-/*
- * Used by swapoff to help locate where page is expected in vma.
- */ -unsigned long page_address_in_vma(struct page *, struct vm_area_struct *); +unsigned long page_address_in_vma(const struct folio *folio, + const struct page *, const struct vm_area_struct *); /* * Cleans the PTEs of shared mappings. diff --git a/mm/internal.h b/mm/internal.h index 93083bbeeefa..fffa9df41495 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -796,7 +796,7 @@ static inline bool free_area_empty(struct free_area *area, int migratetype) } /* mm/util.c */ -struct anon_vma *folio_anon_vma(struct folio *folio); +struct anon_vma *folio_anon_vma(const struct folio *folio); #ifdef CONFIG_MMU void unmap_mapping_folio(struct folio *folio); @@ -914,7 +914,7 @@ extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma); * If any page in this range is mapped by this VMA, return the first address * where any of these pages appear. Otherwise, return -EFAULT. */ -static inline unsigned long vma_address(struct vm_area_struct *vma, +static inline unsigned long vma_address(const struct vm_area_struct *vma, pgoff_t pgoff, unsigned long nr_pages) { unsigned long address; diff --git a/mm/ksm.c b/mm/ksm.c index a2e2a521df0a..2bbb321f92ac 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -1257,7 +1257,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct folio *folio, if (WARN_ON_ONCE(folio_test_large(folio))) return err; - pvmw.address = page_address_in_vma(&folio->page, vma); + pvmw.address = page_address_in_vma(folio, folio_page(folio, 0), vma); if (pvmw.address == -EFAULT) goto out; @@ -1341,7 +1341,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page, { struct folio *kfolio = page_folio(kpage); struct mm_struct *mm = vma->vm_mm; - struct folio *folio; + struct folio *folio = page_folio(page); pmd_t *pmd; pmd_t pmde; pte_t *ptep; @@ -1351,7 +1351,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page, int err = -EFAULT; struct mmu_notifier_range range; - addr = page_address_in_vma(page, vma); + addr = page_address_in_vma(folio, page, vma); if (addr == -EFAULT) goto out; @@ -1417,7 +1417,6 @@ static int replace_page(struct vm_area_struct *vma, struct page *page, ptep_clear_flush(vma, addr, ptep); set_pte_at(mm, addr, ptep, newpte); - folio = page_folio(page); folio_remove_rmap_pte(folio, page, vma); if (!folio_mapped(folio)) folio_free_swap(folio); diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 58a3d80961a4..ea9d883c01c1 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -671,7 +671,7 @@ static void collect_procs_file(struct folio *folio, struct page *page, */ if (vma->vm_mm != t->mm) continue; - addr = page_address_in_vma(page, vma); + addr = page_address_in_vma(folio, page, vma); add_to_kill_anon_file(t, page, vma, to_kill, addr); } } diff --git a/mm/mempolicy.c b/mm/mempolicy.c index b646fab3e45e..b92113d27f63 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -1367,7 +1367,7 @@ static long do_mbind(unsigned long start, unsigned long len, if (!list_entry_is_head(folio, &pagelist, lru)) { vma_iter_init(&vmi, mm, start); for_each_vma_range(vmi, vma, end) { - addr = page_address_in_vma( + addr = page_address_in_vma(folio, folio_page(folio, 0), vma); if (addr != -EFAULT) break; diff --git a/mm/rmap.c b/mm/rmap.c index 90df71c640bf..a7b4f9ba9a14 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -768,14 +768,27 @@ static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags) } #endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */ -/* - * At what user virtual address is page expected in vma? 
- * Caller should check the page is actually part of the vma.
+/**
+ * page_address_in_vma - The virtual address of a page in this VMA.
+ * @folio: The folio containing the page.
+ * @page: The page within the folio.
+ * @vma: The VMA we need to know the address in.
+ *
+ * Calculates the user virtual address of this page in the specified VMA.
+ * It is the caller's responsibility to check the page is actually
+ * within the VMA.  There may not currently be a PTE pointing at this
+ * page, but if a page fault occurs at this address, this is the page
+ * which will be accessed.
+ *
+ * Context: Caller should hold a reference to the folio.  Caller should
+ * hold a lock (eg the i_mmap_lock or the mmap_lock) which keeps the
+ * VMA from being altered.
+ *
+ * Return: The virtual address corresponding to this page in the VMA.
  */
-unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
+unsigned long page_address_in_vma(const struct folio *folio,
+		const struct page *page, const struct vm_area_struct *vma)
 {
-	struct folio *folio = page_folio(page);
-
 	if (folio_test_anon(folio)) {
 		struct anon_vma *page__anon_vma = folio_anon_vma(folio);
 		/*
@@ -791,7 +804,7 @@ unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
 		return -EFAULT;
 	}

-	/* The !page__anon_vma above handles KSM folios */
+	/* KSM folios don't reach here because of the !page__anon_vma check */
 	return vma_address(vma, page_pgoff(folio, page), 1);
 }

diff --git a/mm/util.c b/mm/util.c
index 4f1275023eb7..60017d2a9e48 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -820,7 +820,7 @@ void *vcalloc_noprof(size_t n, size_t size)
 }
 EXPORT_SYMBOL(vcalloc_noprof);

-struct anon_vma *folio_anon_vma(struct folio *folio)
+struct anon_vma *folio_anon_vma(const struct folio *folio)
 {
 	unsigned long mapping = (unsigned long)folio->mapping;
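[Reader aid, not part of the patch] The kernel-doc above describes what the return
value means; the calculation that vma_address() ultimately performs for an in-range
page can be modelled as below.  This is a simplified illustration with hypothetical
names, omitting the bounds checks and the -EFAULT path of the real helper.

/* Model: a VMA maps file/anon offset vm_pgoff (in pages) at user address
 * vm_start, so a page at offset pgoff lands (pgoff - vm_pgoff) pages in. */
static unsigned long toy_vma_address(unsigned long vm_start,
				     unsigned long vm_pgoff,
				     unsigned long pgoff,
				     unsigned long page_size)
{
	return vm_start + (pgoff - vm_pgoff) * page_size;
}
/* e.g. vm_start=0x7f0000000000, vm_pgoff=0, pgoff=259, page_size=4096
 * gives 0x7f0000103000. */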
From patchwork Sat Oct 5 20:01:15 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13823465
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH v2 4/7] mm: Mass constification of folio/page pointers
Date: Sat, 5 Oct 2024 21:01:15 +0100
Message-ID: <20241005200121.3231142-5-willy@infradead.org>
Now that page_pgoff() takes const pointers, we can constify the
pointers to a lot of functions.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/ksm.h  |  7 ++++---
 include/linux/rmap.h | 10 +++++-----
 mm/internal.h        |  5 +++--
 mm/ksm.c             |  5 +++--
 mm/memory-failure.c  | 24 +++++++++++++-----------
 mm/page_vma_mapped.c |  5 +++--
 mm/rmap.c            | 11 ++++++-----
 7 files changed, 37 insertions(+), 30 deletions(-)

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 11690dacd986..c4a8891f6e7d 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -92,7 +92,7 @@ struct folio *ksm_might_need_to_copy(struct folio *folio,

 void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc);
 void folio_migrate_ksm(struct folio *newfolio, struct folio *folio);
-void collect_procs_ksm(struct folio *folio, struct page *page,
+void collect_procs_ksm(const struct folio *folio, const struct page *page,
 		struct list_head *to_kill, int force_early);
 long ksm_process_profit(struct mm_struct *);

@@ -125,8 +125,9 @@ static inline void ksm_might_unmap_zero_page(struct mm_struct *mm, pte_t pte)
 {
 }

-static inline void collect_procs_ksm(struct folio *folio, struct page *page,
-		struct list_head *to_kill, int force_early)
+static inline void collect_procs_ksm(const struct folio *folio,
+		const struct page *page, struct list_head *to_kill,
+		int force_early)
 {
 }

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 78923015a2e8..683a04088f3f 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -171,7 +171,7 @@ static inline void anon_vma_merge(struct vm_area_struct *vma,
 	unlink_anon_vmas(next);
 }

-struct anon_vma *folio_get_anon_vma(struct folio *folio);
+struct anon_vma *folio_get_anon_vma(const struct folio *folio);

 /* RMAP flags, currently only relevant for some anon rmap operations.
*/ typedef int __bitwise rmap_t; @@ -194,8 +194,8 @@ enum rmap_level { RMAP_LEVEL_PMD, }; -static inline void __folio_rmap_sanity_checks(struct folio *folio, - struct page *page, int nr_pages, enum rmap_level level) +static inline void __folio_rmap_sanity_checks(const struct folio *folio, + const struct page *page, int nr_pages, enum rmap_level level) { /* hugetlb folios are handled separately. */ VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio); @@ -771,14 +771,14 @@ struct rmap_walk_control { bool (*rmap_one)(struct folio *folio, struct vm_area_struct *vma, unsigned long addr, void *arg); int (*done)(struct folio *folio); - struct anon_vma *(*anon_lock)(struct folio *folio, + struct anon_vma *(*anon_lock)(const struct folio *folio, struct rmap_walk_control *rwc); bool (*invalid_vma)(struct vm_area_struct *vma, void *arg); }; void rmap_walk(struct folio *folio, struct rmap_walk_control *rwc); void rmap_walk_locked(struct folio *folio, struct rmap_walk_control *rwc); -struct anon_vma *folio_lock_anon_vma_read(struct folio *folio, +struct anon_vma *folio_lock_anon_vma_read(const struct folio *folio, struct rmap_walk_control *rwc); #else /* !CONFIG_MMU */ diff --git a/mm/internal.h b/mm/internal.h index fffa9df41495..71a30e779223 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -1072,10 +1072,11 @@ void ClearPageHWPoisonTakenOff(struct page *page); bool take_page_off_buddy(struct page *page); bool put_page_back_buddy(struct page *page); struct task_struct *task_early_kill(struct task_struct *tsk, int force_early); -void add_to_kill_ksm(struct task_struct *tsk, struct page *p, +void add_to_kill_ksm(struct task_struct *tsk, const struct page *p, struct vm_area_struct *vma, struct list_head *to_kill, unsigned long ksm_addr); -unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma); +unsigned long page_mapped_in_vma(const struct page *page, + struct vm_area_struct *vma); #else static inline void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu) diff --git a/mm/ksm.c b/mm/ksm.c index 2bbb321f92ac..1fed2e3e01e0 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -1052,7 +1052,8 @@ static int unmerge_ksm_pages(struct vm_area_struct *vma, return err; } -static inline struct ksm_stable_node *folio_stable_node(struct folio *folio) +static inline +struct ksm_stable_node *folio_stable_node(const struct folio *folio) { return folio_test_ksm(folio) ? folio_raw_mapping(folio) : NULL; } @@ -3066,7 +3067,7 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc) /* * Collect processes when the error hit an ksm page. */ -void collect_procs_ksm(struct folio *folio, struct page *page, +void collect_procs_ksm(const struct folio *folio, const struct page *page, struct list_head *to_kill, int force_early) { struct ksm_stable_node *stable_node; diff --git a/mm/memory-failure.c b/mm/memory-failure.c index ea9d883c01c1..7ce7ba8586f5 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -445,7 +445,7 @@ static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma, * Schedule a process for later kill. * Uses GFP_ATOMIC allocations to avoid potential recursions in the VM. 
*/ -static void __add_to_kill(struct task_struct *tsk, struct page *p, +static void __add_to_kill(struct task_struct *tsk, const struct page *p, struct vm_area_struct *vma, struct list_head *to_kill, unsigned long addr) { @@ -461,7 +461,7 @@ static void __add_to_kill(struct task_struct *tsk, struct page *p, if (is_zone_device_page(p)) tk->size_shift = dev_pagemap_mapping_shift(vma, tk->addr); else - tk->size_shift = page_shift(compound_head(p)); + tk->size_shift = folio_shift(page_folio(p)); /* * Send SIGKILL if "tk->addr == -EFAULT". Also, as @@ -486,7 +486,7 @@ static void __add_to_kill(struct task_struct *tsk, struct page *p, list_add_tail(&tk->nd, to_kill); } -static void add_to_kill_anon_file(struct task_struct *tsk, struct page *p, +static void add_to_kill_anon_file(struct task_struct *tsk, const struct page *p, struct vm_area_struct *vma, struct list_head *to_kill, unsigned long addr) { @@ -509,7 +509,7 @@ static bool task_in_to_kill_list(struct list_head *to_kill, return false; } -void add_to_kill_ksm(struct task_struct *tsk, struct page *p, +void add_to_kill_ksm(struct task_struct *tsk, const struct page *p, struct vm_area_struct *vma, struct list_head *to_kill, unsigned long addr) { @@ -606,8 +606,9 @@ struct task_struct *task_early_kill(struct task_struct *tsk, int force_early) /* * Collect processes when the error hit an anonymous page. */ -static void collect_procs_anon(struct folio *folio, struct page *page, - struct list_head *to_kill, int force_early) +static void collect_procs_anon(const struct folio *folio, + const struct page *page, struct list_head *to_kill, + int force_early) { struct task_struct *tsk; struct anon_vma *av; @@ -643,8 +644,9 @@ static void collect_procs_anon(struct folio *folio, struct page *page, /* * Collect processes when the error hit a file mapped page. */ -static void collect_procs_file(struct folio *folio, struct page *page, - struct list_head *to_kill, int force_early) +static void collect_procs_file(const struct folio *folio, + const struct page *page, struct list_head *to_kill, + int force_early) { struct vm_area_struct *vma; struct task_struct *tsk; @@ -680,7 +682,7 @@ static void collect_procs_file(struct folio *folio, struct page *page, } #ifdef CONFIG_FS_DAX -static void add_to_kill_fsdax(struct task_struct *tsk, struct page *p, +static void add_to_kill_fsdax(struct task_struct *tsk, const struct page *p, struct vm_area_struct *vma, struct list_head *to_kill, pgoff_t pgoff) { @@ -691,7 +693,7 @@ static void add_to_kill_fsdax(struct task_struct *tsk, struct page *p, /* * Collect processes when the error hit a fsdax page. */ -static void collect_procs_fsdax(struct page *page, +static void collect_procs_fsdax(const struct page *page, struct address_space *mapping, pgoff_t pgoff, struct list_head *to_kill, bool pre_remove) { @@ -725,7 +727,7 @@ static void collect_procs_fsdax(struct page *page, /* * Collect the processes who have the corrupted page mapped to kill. */ -static void collect_procs(struct folio *folio, struct page *page, +static void collect_procs(const struct folio *folio, const struct page *page, struct list_head *tokill, int force_early) { if (!folio->mapping) diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c index ade3c6833587..82e20dbbedb7 100644 --- a/mm/page_vma_mapped.c +++ b/mm/page_vma_mapped.c @@ -325,9 +325,10 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw) * outside the VMA or not present, returns -EFAULT. * Only valid for normal file or anonymous VMAs. 
  */
-unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
+unsigned long page_mapped_in_vma(const struct page *page,
+		struct vm_area_struct *vma)
 {
-	struct folio *folio = page_folio(page);
+	const struct folio *folio = page_folio(page);
 	struct page_vma_mapped_walk pvmw = {
 		.pfn = page_to_pfn(page),
 		.nr_pages = 1,

diff --git a/mm/rmap.c b/mm/rmap.c
index a7b4f9ba9a14..2c561b1e52cc 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -497,7 +497,7 @@ void __init anon_vma_init(void)
  * concurrently without folio lock protection). See folio_lock_anon_vma_read()
  * which has already covered that, and comment above remap_pages().
  */
-struct anon_vma *folio_get_anon_vma(struct folio *folio)
+struct anon_vma *folio_get_anon_vma(const struct folio *folio)
 {
 	struct anon_vma *anon_vma = NULL;
 	unsigned long anon_mapping;
@@ -541,7 +541,7 @@ struct anon_vma *folio_get_anon_vma(struct folio *folio)
  * reference like with folio_get_anon_vma() and then block on the mutex
  * on !rwc->try_lock case.
  */
-struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
+struct anon_vma *folio_lock_anon_vma_read(const struct folio *folio,
 		struct rmap_walk_control *rwc)
 {
 	struct anon_vma *anon_vma = NULL;
@@ -1275,8 +1275,9 @@ static void __folio_set_anon(struct folio *folio, struct vm_area_struct *vma,
  * @vma:	the vm area in which the mapping is added
  * @address:	the user virtual address mapped
  */
-static void __page_check_anon_rmap(struct folio *folio, struct page *page,
-		struct vm_area_struct *vma, unsigned long address)
+static void __page_check_anon_rmap(const struct folio *folio,
+		const struct page *page, struct vm_area_struct *vma,
+		unsigned long address)
 {
 	/*
 	 * The page's anon-rmap details (mapping and index) are guaranteed to
@@ -2573,7 +2574,7 @@ void __put_anon_vma(struct anon_vma *anon_vma)
 	anon_vma_free(root);
 }

-static struct anon_vma *rmap_walk_anon_lock(struct folio *folio,
+static struct anon_vma *rmap_walk_anon_lock(const struct folio *folio,
 		struct rmap_walk_control *rwc)
 {
 	struct anon_vma *anon_vma;
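[Reader aid, not part of the patch] The point of the mass constification is the
knock-on const-correctness it enables: once leaf helpers such as folio_anon_vma()
and page_pgoff() accept const pointers, read-only callers higher up the stack can
declare their own parameters const too.  A small sketch (the function name below is
hypothetical, not something introduced by this series, and it assumes the folio flag
test helpers already take const pointers):

/* A read-only predicate can now take a const folio, because everything
 * it calls accepts const pointers after this patch. */
static bool example_is_file_backed(const struct folio *folio)
{
	return folio->mapping && !folio_test_anon(folio);
}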
From patchwork Sat Oct 5 20:01:16 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13823462
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH v2 5/7] bootmem: Stop using page->index
Date: Sat, 5 Oct 2024 21:01:16 +0100
Message-ID: <20241005200121.3231142-6-willy@infradead.org>
Encode the type into the bottom four bits of page->private and the info
into the remaining bits.  Also turn the bootmem type into a named enum.

Signed-off-by: Matthew Wilcox (Oracle)
Reported-by: kernel test robot
Signed-off-by: Andrew Morton
---
 arch/x86/mm/init_64.c        |  9 ++++-----
 include/linux/bootmem_info.h | 25 +++++++++++++++++--------
 mm/bootmem_info.c            | 11 ++++++-----
 mm/sparse.c                  |  8 ++++----
 4 files changed, 31 insertions(+), 22 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index ff253648706f..4d5fde324136 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -987,13 +987,12 @@ int arch_add_memory(int nid, u64 start, u64 size,

 static void __meminit free_pagetable(struct page *page, int order)
 {
-	unsigned long magic;
-	unsigned int nr_pages = 1 << order;
-
 	/* bootmem page has reserved flag */
 	if (PageReserved(page)) {
-		magic = page->index;
-		if (magic == SECTION_INFO || magic == MIX_SECTION_INFO) {
+		enum bootmem_type type = bootmem_type(page);
+		unsigned long nr_pages = 1 << order;
+
+		if (type == SECTION_INFO || type == MIX_SECTION_INFO) {
 			while (nr_pages--)
 				put_page_bootmem(page++);
 		} else

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index cffa38a73618..e2fe5de93dcc 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -6,11 +6,10 @@
 #include

 /*
- * Types for free bootmem stored in page->lru.next. These have to be in
- * some random range in unsigned long space for debugging purposes.
+ * Types for free bootmem stored in the low bits of page->private.
  */
-enum {
-	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
+enum bootmem_type {
+	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 1,
 	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
 	MIX_SECTION_INFO,
 	NODE_INFO,
@@ -21,9 +20,19 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);

 void get_page_bootmem(unsigned long info, struct page *page,
-		unsigned long type);
+		enum bootmem_type type);
 void put_page_bootmem(struct page *page);

+static inline enum bootmem_type bootmem_type(const struct page *page)
+{
+	return (unsigned long)page->private & 0xf;
+}
+
+static inline unsigned long bootmem_info(const struct page *page)
+{
+	return (unsigned long)page->private >> 4;
+}
+
 /*
  * Any memory allocated via the memblock allocator and not via the
  * buddy will be marked reserved already in the memmap. For those
@@ -31,7 +40,7 @@ void put_page_bootmem(struct page *page);
  */
 static inline void free_bootmem_page(struct page *page)
 {
-	unsigned long magic = page->index;
+	enum bootmem_type type = bootmem_type(page);

 	/*
 	 * The reserve_bootmem_region sets the reserved flag on bootmem
@@ -39,7 +48,7 @@ static inline void free_bootmem_page(struct page *page)
 	 */
 	VM_BUG_ON_PAGE(page_ref_count(page) != 2, page);

-	if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
+	if (type == SECTION_INFO || type == MIX_SECTION_INFO)
 		put_page_bootmem(page);
 	else
 		VM_BUG_ON_PAGE(1, page);
@@ -54,7 +63,7 @@ static inline void put_page_bootmem(struct page *page)
 }

 static inline void get_page_bootmem(unsigned long info, struct page *page,
-		unsigned long type)
+		enum bootmem_type type)
 {
 }

diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
index fa7cb0c87c03..95f288169a38 100644
--- a/mm/bootmem_info.c
+++ b/mm/bootmem_info.c
@@ -14,23 +14,24 @@
 #include
 #include

-void get_page_bootmem(unsigned long info, struct page *page, unsigned long type)
+void get_page_bootmem(unsigned long info, struct page *page,
+		enum bootmem_type type)
 {
-	page->index = type;
+	BUG_ON(type > 0xf);
+	BUG_ON(info > (ULONG_MAX >> 4));
 	SetPagePrivate(page);
-	set_page_private(page, info);
+	set_page_private(page, info << 4 | type);
 	page_ref_inc(page);
 }

 void put_page_bootmem(struct page *page)
 {
-	unsigned long type = page->index;
+	enum bootmem_type type = bootmem_type(page);

 	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
 	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);

 	if (page_ref_dec_return(page) == 1) {
-		page->index = 0;
 		ClearPagePrivate(page);
 		set_page_private(page, 0);
 		INIT_LIST_HEAD(&page->lru);

diff --git a/mm/sparse.c b/mm/sparse.c
index dc38539f8560..6ba5354cf2e1 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -720,19 +720,19 @@ static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
 static void free_map_bootmem(struct page *memmap)
 {
 	unsigned long maps_section_nr, removing_section_nr, i;
-	unsigned long magic, nr_pages;
+	unsigned long type, nr_pages;
 	struct page *page = virt_to_page(memmap);

 	nr_pages = PAGE_ALIGN(PAGES_PER_SECTION * sizeof(struct page))
 		>> PAGE_SHIFT;

 	for (i = 0; i < nr_pages; i++, page++) {
-		magic = page->index;
+		type = bootmem_type(page);

-		BUG_ON(magic == NODE_INFO);
+		BUG_ON(type == NODE_INFO);

 		maps_section_nr = pfn_to_section_nr(page_to_pfn(page));
-		removing_section_nr = page_private(page);
+		removing_section_nr = bootmem_info(page);

 		/*
 		 * When this function is called, the removing section is
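[Reader aid, not part of the patch] To make the new encoding concrete: the type now
lives in the low four bits of page->private and the info in the bits above it, which
is why get_page_bootmem() now BUGs if either value would not fit.  A stand-alone
sketch of the packing arithmetic in plain user-space C (these are models, not the
kernel accessors themselves):

#include <assert.h>

/* Pack info and a 4-bit type into one word, as the patched
 * get_page_bootmem() stores via set_page_private(). */
static unsigned long pack_bootmem(unsigned long info, unsigned int type)
{
	assert(type <= 0xf);		/* mirrors BUG_ON(type > 0xf) */
	return (info << 4) | type;
}

static unsigned int unpack_type(unsigned long priv)
{
	return priv & 0xf;		/* what bootmem_type() reads */
}

static unsigned long unpack_info(unsigned long priv)
{
	return priv >> 4;		/* what bootmem_info() reads */
}

int main(void)
{
	unsigned long priv = pack_bootmem(42, 3);

	assert(unpack_type(priv) == 3);
	assert(unpack_info(priv) == 42);
	return 0;
}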
From patchwork Sat Oct 5 20:01:17 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13823460
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
 linux-mm@kvack.org
Subject: [PATCH v2 6/7] mm: Remove references to page->index in huge_memory.c
Date: Sat, 5 Oct 2024 21:01:17 +0100
Message-ID: <20241005200121.3231142-7-willy@infradead.org>
In-Reply-To: <20241005200121.3231142-1-willy@infradead.org>
References: <20241005200121.3231142-1-willy@infradead.org>

We already have folios in all these places; it's just a matter of using
them instead of the pages.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/huge_memory.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3ca89e0279a7..812287dd6221 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3135,8 +3135,8 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
 	/* ->mapping in first and second tail page is replaced by other uses */
 	VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING,
 			page_tail);
-	page_tail->mapping = head->mapping;
-	page_tail->index = head->index + tail;
+	new_folio->mapping = folio->mapping;
+	new_folio->index = folio->index + tail;
 
 	/*
 	 * page->private should not be set in tail pages. Fix up and warn once
@@ -3212,11 +3212,11 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	ClearPageHasHWPoisoned(head);
 
 	for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
+		struct folio *tail;
 		__split_huge_page_tail(folio, i, lruvec, list, new_order);
+		tail = page_folio(head + i);
 		/* Some pages can be beyond EOF: drop them from page cache */
-		if (head[i].index >= end) {
-			struct folio *tail = page_folio(head + i);
-
+		if (tail->index >= end) {
 			if (shmem_mapping(folio->mapping))
 				nr_dropped++;
 			else if (folio_test_clear_dirty(tail))
@@ -3224,12 +3224,12 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 				inode_to_wb(folio->mapping->host));
 			__filemap_remove_folio(tail, NULL);
 			folio_put(tail);
-		} else if (!PageAnon(page)) {
-			__xa_store(&folio->mapping->i_pages, head[i].index,
-					head + i, 0);
+		} else if (!folio_test_anon(folio)) {
+			__xa_store(&folio->mapping->i_pages, tail->index,
+					tail, 0);
 		} else if (swap_cache) {
 			__xa_store(&swap_cache->i_pages, offset + i,
-					head + i, 0);
+					tail, 0);
 		}
 	}
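[Editorial aside: a tiny standalone model of the index bookkeeping this hunk relies on. Only the loop shape, the new_folio->index = folio->index + tail assignment, and the tail->index >= end test come from the diff; the concrete numbers (index, folio size, EOF position) are made up, and this is plain C, not kernel code.]

#include <stdio.h>

int main(void)
{
	unsigned long index = 96;	/* hypothetical folio->index */
	unsigned long nr = 16;		/* pages in the folio being split */
	unsigned long new_nr = 4;	/* pages per new (order-2) folio */
	unsigned long end = 104;	/* hypothetical EOF, in pages */

	/* Same iteration order as the loop in __split_huge_page() above. */
	for (unsigned long i = nr - new_nr; i >= new_nr; i -= new_nr) {
		unsigned long tail_index = index + i;	/* new_folio->index */

		printf("tail at offset %2lu -> index %3lu: %s\n", i, tail_index,
		       tail_index >= end ? "beyond EOF, dropped from page cache"
					 : "kept in page cache");
	}
	return 0;
}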
From patchwork Sat Oct 5 20:01:18 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13823463
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH v2 7/7] mm: Use page->private instead of page->index in percpu
Date: Sat, 5 Oct 2024 21:01:18 +0100
Message-ID: <20241005200121.3231142-8-willy@infradead.org>
In-Reply-To: <20241005200121.3231142-1-willy@infradead.org>
References: <20241005200121.3231142-1-willy@infradead.org>

The percpu allocator only uses one field in struct page, just change it
from page->index to page->private.
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/percpu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/percpu.c b/mm/percpu.c
index da21680ff294..0d3e6b76e873 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -253,13 +253,13 @@ static int pcpu_chunk_slot(const struct pcpu_chunk *chunk)
 /* set the pointer to a chunk in a page struct */
 static void pcpu_set_page_chunk(struct page *page, struct pcpu_chunk *pcpu)
 {
-	page->index = (unsigned long)pcpu;
+	page->private = (unsigned long)pcpu;
 }
 
 /* obtain pointer to a chunk from a page struct */
 static struct pcpu_chunk *pcpu_get_page_chunk(struct page *page)
 {
-	return (struct pcpu_chunk *)page->index;
+	return (struct pcpu_chunk *)page->private;
 }
 
 static int __maybe_unused pcpu_page_idx(unsigned int cpu, int page_idx)
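[Editorial aside: a userspace sketch of the pattern this patch keeps, just moved to a different field: a per-page metadata pointer is stashed in an unsigned long member and cast back on lookup. The struct definitions and main() here are stand-ins invented for illustration; only the store/load pattern mirrors pcpu_set_page_chunk()/pcpu_get_page_chunk().]

#include <assert.h>
#include <stdio.h>

struct pcpu_chunk { int nr_pages; };		/* stand-in for the real chunk */
struct page { unsigned long private; };		/* stand-in for struct page */

/* set the pointer to a chunk in a page struct (cf. pcpu_set_page_chunk) */
static void set_page_chunk(struct page *page, struct pcpu_chunk *chunk)
{
	page->private = (unsigned long)chunk;
}

/* obtain pointer to a chunk from a page struct (cf. pcpu_get_page_chunk) */
static struct pcpu_chunk *get_page_chunk(struct page *page)
{
	return (struct pcpu_chunk *)page->private;
}

int main(void)
{
	struct pcpu_chunk chunk = { .nr_pages = 4 };
	struct page page = { 0 };

	set_page_chunk(&page, &chunk);
	assert(get_page_chunk(&page) == &chunk);
	printf("chunk has %d pages\n", get_page_chunk(&page)->nr_pages);
	return 0;
}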