From patchwork Mon Mar 3 16:29:56 2025
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13999209
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-doc@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org,
	David Hildenbrand, Andrew Morton, "Matthew Wilcox (Oracle)",
	Tejun Heo, Zefan Li, Johannes Weiner, Michal Koutný,
	Jonathan Corbet, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, Muchun Song, "Liam R. Howlett",
	Lorenzo Stoakes, Vlastimil Babka, Jann Horn, "Kirill A. Shutemov"
Shutemov" Subject: [PATCH v3 03/20] mm: let _folio_nr_pages overlay memcg_data in first tail page Date: Mon, 3 Mar 2025 17:29:56 +0100 Message-ID: <20250303163014.1128035-4-david@redhat.com> X-Mailer: git-send-email 2.48.1 In-Reply-To: <20250303163014.1128035-1-david@redhat.com> References: <20250303163014.1128035-1-david@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-MFC-PROC-ID: L7Sufx5R5i73fzdl3MGeoey_kiFAY9gADaJgZD0s7PY_1741019424 X-Mimecast-Originator: redhat.com content-type: text/plain; charset="US-ASCII"; x-default=true X-Stat-Signature: mrjhcj17gzqh7uwkqnzntiutbe389ir7 X-Rspamd-Queue-Id: E060110000F X-Rspamd-Server: rspam06 X-Rspam-User: X-HE-Tag: 1741019437-16229 X-HE-Meta: U2FsdGVkX1+0eKIc4q84txz4PnMeyKBP+KhmjvPX16vNgs25liEZn4NwFaNPIGXyqph9DKv7+LuYbVB3mPfw8PY3XCs4UT2U/dNa5TcQI/8Q3sWr96oJEwQk/lWQhMmMU1spWvy0o5idf4dcOBQWC5cNNvoD97h32gKapwxFJOVy4P7kTXHnki1G0dSjpu34jgSUumpBB2ajgNTBxwlhyNMiMzRq/mwuIovEhVCAOsmLiu0iz9Gmi9CXEhQxJ5QtuAugehnLSOKsezVbjwEmsnZgX9wkI9O5sjSwqXk/dQgcAck2z8yAm8SEoD8o4cO20VgMf/k0M3xR1E54c0vjItXb86FdKb9S0g4yUYXdlZaoCUUHyO08gEe/4z45tTfqxfQLRmHHsEcAbs2WOQzVr6DPpeujpeOlDWBs5OFug/Qd1gGsY5Vvj1yT2X/k51iNWWg6qVozuoahXPi46qIUC+cLoEDWUH+hfQY4ixcIt7ig6YCtgtVwr1J/urnqoSSKEacM9fwsjvQkAlNhwSeFZOwzspIeLI+m7+tIfZv3NoMyH8vGxuUObzrI4SET3o7ij1SO0+uhHgF1Fkwtd0NBzVdvMNBF8eYxb0QQ/MvGTgiBUQGB+OXaM+/LpLF7dLXFboJnawOiLYP08ADgfYxegvWfihwjnSoe+ow9qWdt3QZZOi4wBiojXfeGajXol65W8inGIGvyBK620j8BBnA5FfRtb4IOHod3Sssty7Tj/QNeMFtdUeVlFjSNInvkTtW7aFxAVbNtW8VLir0g9tQQuKLve10F3Y3uaGDTkKhPP2iT7JZPq1uIC2hTJtelD6uRyFpRUOlwmHokpsgRaNDLuyJomD5BYbczUcETREEtrkKCSlgVRCKsMjoR+up3MUFd1S8ScOSst/zcM44tLxBV/gLlzROnVAOvcsm+7ZlcENm4f/cO1buE/SiiC/2/vNNImCDTcl14rbvZ62T6uMk 0bC/XCwZ UA9OFKjJ3MhlmXCc9Z6Dsl2kC6xcZ/jGoKEANhcmHseS9lts4HpKBme/b/PHvGLc2o0vWxpxNUAuA3Df9Vkc8Cz7MubnXSL4THU1ntFwJJ2evH00yKFLgJJ2mzvM2wDAQCAqIBOTZhakfWTOKyOQ3jsTGZTpzauH3+Ukm8Yj5B/Op299dKrcJR+wKHpKqeUFPEV0Y1mRRVsiy6VkvqQTZmnsSoDUSolzO3IKUAmMUKm1xpSf7Bu1DgO3fU7+/4TnoGgbhx6j+JWoZBZBERdYBzISiVJOoFuPbppthH2CwoXKRQdIKpdGoypICrBNm8SNZtRfSVaw0oaGWub5vOAcmYuLJFYAqbAwOMv4h0zhsw9fXJjMbABxP1nw3sIpFD0I8qq5mD+85PpL/yCs/3iaxqZzIEwSXIvV97eLAFiSQj7Znbm7XdwpINRzK2EZdpf8Ha3ArDY4XKVUtlt5DSisXbXEW0zj9zep+T1UcctP3rgHpUPFD/kYxfsddZfGPoSOI4NMZUkVgXB/1lGVMjXWl6keZFu8aF89NFJTMeaMecwy3Dfo= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Let's free up some more of the "unconditionally available on 64BIT" space in order-1 folios by letting _folio_nr_pages overlay memcg_data in the first tail page (second folio page). Consequently, we have the optimization now whenever we have CONFIG_MEMCG, independent of 64BIT. We have to make sure that page->memcg on tail pages does not return "surprises". page_memcg_check() already properly refuses PageTail(). Let's do that earlier in print_page_owner_memcg() to avoid printing wrong "Slab cache page" information. No other code should touch that field on tail pages of compound pages. Reset the "_nr_pages" to 0 when splitting folios, or when freeing them back to the buddy (to avoid false page->memcg_data "bad page" reports). Note that in __split_huge_page(), folio_nr_pages() would stop working already as soon as we start messing with the subpages. Most kernel configs should have at least CONFIG_MEMCG enabled, even if disabled at runtime. 64byte "struct memmap" is what we usually have on 64BIT. While at it, rename "_folio_nr_pages" to "_nr_pages". 
Hopefully, memdescs / dynamically allocating "struct folio" will further
clean this up in the future, e.g., making _nr_pages available in all configs
and maybe even in small folios. Doing that should be fairly easy on top of
this change.

Reviewed-by: Kirill A. Shutemov
Signed-off-by: David Hildenbrand
---
 include/linux/mm.h       |  4 ++--
 include/linux/mm_types.h | 30 ++++++++++++++++++++++--------
 mm/huge_memory.c         | 16 +++++++++++++---
 mm/internal.h            |  4 ++--
 mm/page_alloc.c          |  6 +++++-
 mm/page_owner.c          |  2 +-
 6 files changed, 45 insertions(+), 17 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a743321dc1a5d..694704217df8a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1199,10 +1199,10 @@ static inline unsigned int folio_large_order(const struct folio *folio)
 	return folio->_flags_1 & 0xff;
 }
 
-#ifdef CONFIG_64BIT
+#ifdef NR_PAGES_IN_LARGE_FOLIO
 static inline long folio_large_nr_pages(const struct folio *folio)
 {
-	return folio->_folio_nr_pages;
+	return folio->_nr_pages;
 }
 #else
 static inline long folio_large_nr_pages(const struct folio *folio)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 689b2a7461892..e81be20bbabc6 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -287,6 +287,11 @@ typedef struct {
 	unsigned long val;
 } swp_entry_t;
 
+#if defined(CONFIG_MEMCG) || defined(CONFIG_SLAB_OBJ_EXT)
+/* We have some extra room after the refcount in tail pages. */
+#define NR_PAGES_IN_LARGE_FOLIO
+#endif
+
 /**
  * struct folio - Represents a contiguous set of bytes.
  * @flags: Identical to the page flags.
@@ -312,7 +317,7 @@ typedef struct {
  * @_large_mapcount: Do not use directly, call folio_mapcount().
  * @_nr_pages_mapped: Do not use outside of rmap and debug code.
  * @_pincount: Do not use directly, call folio_maybe_dma_pinned().
- * @_folio_nr_pages: Do not use directly, call folio_nr_pages().
+ * @_nr_pages: Do not use directly, call folio_nr_pages().
  * @_hugetlb_subpool: Do not use directly, use accessor in hugetlb.h.
  * @_hugetlb_cgroup: Do not use directly, use accessor in hugetlb_cgroup.h.
  * @_hugetlb_cgroup_rsvd: Do not use directly, use accessor in hugetlb_cgroup.h.
@@ -377,13 +382,20 @@ struct folio {
 			unsigned long _flags_1;
 			unsigned long _head_1;
 	/* public: */
-			atomic_t _large_mapcount;
-			atomic_t _entire_mapcount;
-			atomic_t _nr_pages_mapped;
-			atomic_t _pincount;
-#ifdef CONFIG_64BIT
-			unsigned int _folio_nr_pages;
-#endif
+			union {
+				struct {
+					atomic_t _large_mapcount;
+					atomic_t _entire_mapcount;
+					atomic_t _nr_pages_mapped;
+					atomic_t _pincount;
+				};
+				unsigned long _usable_1[4];
+			};
+			atomic_t _mapcount_1;
+			atomic_t _refcount_1;
+#ifdef NR_PAGES_IN_LARGE_FOLIO
+			unsigned int _nr_pages;
+#endif /* NR_PAGES_IN_LARGE_FOLIO */
 	/* private: the union with struct page is transitional */
 		};
 		struct page __page_1;
@@ -435,6 +447,8 @@ FOLIO_MATCH(_last_cpupid, _last_cpupid);
 			offsetof(struct page, pg) + sizeof(struct page))
 FOLIO_MATCH(flags, _flags_1);
 FOLIO_MATCH(compound_head, _head_1);
+FOLIO_MATCH(_mapcount, _mapcount_1);
+FOLIO_MATCH(_refcount, _refcount_1);
 #undef FOLIO_MATCH
 #define FOLIO_MATCH(pg, fl) \
 	static_assert(offsetof(struct folio, fl) == \
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6ac6d468af0d4..07d43ca6db1c6 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3307,10 +3307,11 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
  * It splits @folio into @new_order folios and copies the @folio metadata to
  * all the resulting folios.
  */
-static void __split_folio_to_order(struct folio *folio, int new_order)
+static void __split_folio_to_order(struct folio *folio, int old_order,
+		int new_order)
 {
-	long nr_pages = folio_nr_pages(folio);
 	long new_nr_pages = 1 << new_order;
+	long nr_pages = 1 << old_order;
 	long index;
 
 	/*
@@ -3528,12 +3529,21 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 			}
 		}
 
+		/*
+		 * Reset any memcg data overlay in the tail pages.
+		 * folio_nr_pages() is unreliable until prep_compound_page()
+		 * was called again.
+		 */
+#ifdef NR_PAGES_IN_LARGE_FOLIO
+		folio->_nr_pages = 0;
+#endif
+
 		/* complete memcg works before add pages to LRU */
 		split_page_memcg(&folio->page, old_order, split_order);
 		split_page_owner(&folio->page, old_order, split_order);
 		pgalloc_tag_split(folio, old_order, split_order);
 
-		__split_folio_to_order(folio, split_order);
+		__split_folio_to_order(folio, old_order, split_order);
 
 after_split:
 		/*
diff --git a/mm/internal.h b/mm/internal.h
index bb9f3624cf952..bcda1f604038f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -682,8 +682,8 @@ static inline void folio_set_order(struct folio *folio, unsigned int order)
 		return;
 
 	folio->_flags_1 = (folio->_flags_1 & ~0xffUL) | order;
-#ifdef CONFIG_64BIT
-	folio->_folio_nr_pages = 1U << order;
+#ifdef NR_PAGES_IN_LARGE_FOLIO
+	folio->_nr_pages = 1U << order;
 #endif
 }
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index dd7e280a61c69..ae0f2a2e87369 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1178,8 +1178,12 @@ __always_inline bool free_pages_prepare(struct page *page,
 	if (unlikely(order)) {
 		int i;
 
-		if (compound)
+		if (compound) {
 			page[1].flags &= ~PAGE_FLAGS_SECOND;
+#ifdef NR_PAGES_IN_LARGE_FOLIO
+			folio->_nr_pages = 0;
+#endif
+		}
 		for (i = 1; i < (1 << order); i++) {
 			if (compound)
 				bad += free_tail_page_prepare(page, page + i);
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 2d6360eaccbb6..a409e2561a8fd 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -507,7 +507,7 @@ static inline int print_page_owner_memcg(char *kbuf, size_t count, int ret,
 
 	rcu_read_lock();
 	memcg_data = READ_ONCE(page->memcg_data);
-	if (!memcg_data)
+	if (!memcg_data || PageTail(page))
 		goto out_unlock;
 
 	if (memcg_data & MEMCG_DATA_OBJEXTS)