
[v1,04/17] mm: let _folio_nr_pages overlay memcg_data in first tail page

Message ID 20240829165627.2256514-5-david@redhat.com (mailing list archive)
State New
Series mm: MM owner tracking for large folios (!hugetlb) + CONFIG_NO_PAGE_MAPCOUNT

Commit Message

David Hildenbrand Aug. 29, 2024, 4:56 p.m. UTC
Let's free up some more of the "unconditionally available on 64BIT"
space in order-1 folios by letting _folio_nr_pages overlay memcg_data in
the first tail page (second folio page). Consequently, the
optimization is now available whenever we have CONFIG_MEMCG,
independent of 64BIT.
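
Schematically, the first tail page then looks like this (illustrative
sketch only; the authoritative layout is the struct folio change
below):

	/* Page 1 of a large folio (simplified): */
	page[1].flags		/* folio->_flags_1 (order in low byte) */
	page[1].compound_head	/* folio->_head_1 */
	...			/* _large_mapcount, _entire_mapcount, ... */
	page[1].memcg_data	/* unused on tail pages: overlaid by
				 * folio->_nr_pages */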

We have to make sure that reading page->memcg_data on tail pages does
not return "surprises": page_memcg_check() already properly refuses
PageTail() pages. Let's check for tail pages early in
print_page_owner_memcg() as well, before interpreting memcg_data, to
avoid printing wrong "Slab cache page" information. No other code
should touch that field on tail pages of compound pages.
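
The existing guard is essentially the following (a simplified sketch,
not necessarily the verbatim helper):

	/* Sketch: tail pages never report a memcg. */
	static inline struct mem_cgroup *page_memcg_check(struct page *page)
	{
		if (PageTail(page))
			return NULL;
		return folio_memcg_check((struct folio *)page);
	}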

Reset the "_nr_pages" to 0 when splitting folios, or when freeing them
back to the buddy (to avoid false page->memcg_data "bad page" reports).

Note that in __split_huge_page(), folio_nr_pages() would already stop
working as soon as we start messing with the subpages.
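
This is why __split_huge_page() snapshots the size up front (visible
at the top of the mm/huge_memory.c hunk below):

	/* Take the size once, before messing with the subpages. */
	int order = folio_order(folio);
	unsigned int nr = 1 << order;	/* folio_nr_pages() unreliable later */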

Most kernel configs should have at least CONFIG_MEMCG enabled, even if
it is disabled at runtime. A 64-byte "struct page" (memmap entry) is
what we usually have on 64BIT.

While at it, rename "_folio_nr_pages" to "_nr_pages".

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mm.h       |  4 ++--
 include/linux/mm_types.h | 30 ++++++++++++++++++++++--------
 mm/huge_memory.c         |  8 ++++++++
 mm/internal.h            |  4 ++--
 mm/page_alloc.c          |  6 +++++-
 mm/page_owner.c          |  2 +-
 6 files changed, 40 insertions(+), 14 deletions(-)

Comments

Kirill A. Shutemov Oct. 23, 2024, 11:38 a.m. UTC | #1
On Thu, Aug 29, 2024 at 06:56:07PM +0200, David Hildenbrand wrote:
> Let's free up some more of the "unconditionally available on 64BIT"
> space in order-1 folios by letting _folio_nr_pages overlay memcg_data in
> the first tail page (second folio page). [...]
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

BTW, has anybody evaluated how much (if anything) we gain with a
separate _nr_pages field in struct folio compared to calculating it
based on the order in _flags_1? Mask+shift should be pretty cheap.
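
For reference, the order-based calculation would be roughly the
following sketch; it matches the !NR_PAGES_IN_LARGE_FOLIO fallback in
the patch below, assuming the order lives in the low byte of _flags_1
(as folio_set_order() stores it):

	/* Sketch: compute the page count instead of storing it. */
	static inline long folio_large_nr_pages(const struct folio *folio)
	{
		return 1L << (folio->_flags_1 & 0xff);	/* mask + shift */
	}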
David Hildenbrand Oct. 23, 2024, 11:40 a.m. UTC | #2
On 23.10.24 13:38, Kirill A. Shutemov wrote:
> On Thu, Aug 29, 2024 at 06:56:07PM +0200, David Hildenbrand wrote:
>> Let's free up some more of the "unconditionally available on 64BIT"
>> space in order-1 folios by letting _folio_nr_pages overlay memcg_data in
>> the first tail page (second folio page). [...]
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
> 
> Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> 
> BTW, has anybody evaluated how much (if anything) we gain with a
> separate _nr_pages field in struct folio compared to calculating it
> based on the order in _flags_1? Mask+shift should be pretty cheap.

I recall that Willy did, and it's mostly getting rid of a single 
instruction in loads of places.

$ git grep folio_nr_pages | wc -l
254


[my first intuition was also to just remove it, but this way it seems
easy to just keep maintaining it for now]

Patch

diff --git a/include/linux/mm.h b/include/linux/mm.h
index fa8b6ce54235c..98411e53da916 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1078,8 +1078,8 @@  static inline unsigned int folio_large_order(const struct folio *folio)
 
 static inline long folio_large_nr_pages(const struct folio *folio)
 {
-#ifdef CONFIG_64BIT
-	return folio->_folio_nr_pages;
+#ifdef NR_PAGES_IN_LARGE_FOLIO
+	return folio->_nr_pages;
 #else
 	return 1L << folio_large_order(folio);
 #endif
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6e3bdf8e38bca..480548552ea54 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -283,6 +283,11 @@  typedef struct {
 	unsigned long val;
 } swp_entry_t;
 
+#if defined(CONFIG_MEMCG) || defined(CONFIG_SLAB_OBJ_EXT)
+/* We have some extra room after the refcount in tail pages. */
+#define NR_PAGES_IN_LARGE_FOLIO
+#endif
+
 /**
  * struct folio - Represents a contiguous set of bytes.
  * @flags: Identical to the page flags.
@@ -305,7 +310,7 @@  typedef struct {
  * @_large_mapcount: Do not use directly, call folio_mapcount().
  * @_nr_pages_mapped: Do not use outside of rmap and debug code.
  * @_pincount: Do not use directly, call folio_maybe_dma_pinned().
- * @_folio_nr_pages: Do not use directly, call folio_nr_pages().
+ * @_nr_pages: Do not use directly, call folio_nr_pages().
  * @_hugetlb_subpool: Do not use directly, use accessor in hugetlb.h.
  * @_hugetlb_cgroup: Do not use directly, use accessor in hugetlb_cgroup.h.
  * @_hugetlb_cgroup_rsvd: Do not use directly, use accessor in hugetlb_cgroup.h.
@@ -366,13 +371,20 @@  struct folio {
 			unsigned long _flags_1;
 			unsigned long _head_1;
 	/* public: */
-			atomic_t _large_mapcount;
-			atomic_t _entire_mapcount;
-			atomic_t _nr_pages_mapped;
-			atomic_t _pincount;
-#ifdef CONFIG_64BIT
-			unsigned int _folio_nr_pages;
-#endif
+			union {
+				struct {
+					atomic_t _large_mapcount;
+					atomic_t _entire_mapcount;
+					atomic_t _nr_pages_mapped;
+					atomic_t _pincount;
+				};
+				unsigned long _usable_1[4];
+			};
+			atomic_t _mapcount_1;
+			atomic_t _refcount_1;
+#ifdef NR_PAGES_IN_LARGE_FOLIO
+			unsigned int _nr_pages;
+#endif /* NR_PAGES_IN_LARGE_FOLIO */
 	/* private: the union with struct page is transitional */
 		};
 		struct page __page_1;
@@ -424,6 +436,8 @@  FOLIO_MATCH(_last_cpupid, _last_cpupid);
 			offsetof(struct page, pg) + sizeof(struct page))
 FOLIO_MATCH(flags, _flags_1);
 FOLIO_MATCH(compound_head, _head_1);
+FOLIO_MATCH(_mapcount, _mapcount_1);
+FOLIO_MATCH(_refcount, _refcount_1);
 #undef FOLIO_MATCH
 #define FOLIO_MATCH(pg, fl)						\
 	static_assert(offsetof(struct folio, fl) ==			\
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 15418ffdd3774..28d12573fcf8c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3171,6 +3171,14 @@  static void __split_huge_page(struct page *page, struct list_head *list,
 	int order = folio_order(folio);
 	unsigned int nr = 1 << order;
 
+	/*
+	 * Reset any memcg data overlay in the tail pages. folio_nr_pages()
+	 * is unreliable after this point.
+	 */
+#ifdef NR_PAGES_IN_LARGE_FOLIO
+	folio->_nr_pages = 0;
+#endif
+
 	/* complete memcg works before add pages to LRU */
 	split_page_memcg(head, order, new_order);
 
diff --git a/mm/internal.h b/mm/internal.h
index 97d6b94429ebd..f627fd2200464 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -625,8 +625,8 @@  static inline void folio_set_order(struct folio *folio, unsigned int order)
 		return;
 
 	folio->_flags_1 = (folio->_flags_1 & ~0xffUL) | order;
-#ifdef CONFIG_64BIT
-	folio->_folio_nr_pages = 1U << order;
+#ifdef NR_PAGES_IN_LARGE_FOLIO
+	folio->_nr_pages = 1U << order;
 #endif
 }
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c2ffccf9d2131..e276cbaf97054 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1077,8 +1077,12 @@  __always_inline bool free_pages_prepare(struct page *page,
 	if (unlikely(order)) {
 		int i;
 
-		if (compound)
+		if (compound) {
 			page[1].flags &= ~PAGE_FLAGS_SECOND;
+#ifdef NR_PAGES_IN_LARGE_FOLIO
+			((struct folio *)page)->_nr_pages = 0;
+#endif
+		}
 		for (i = 1; i < (1 << order); i++) {
 			if (compound)
 				bad += free_tail_page_prepare(page, page + i);
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 2d6360eaccbb6..a409e2561a8fd 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -507,7 +507,7 @@  static inline int print_page_owner_memcg(char *kbuf, size_t count, int ret,
 
 	rcu_read_lock();
 	memcg_data = READ_ONCE(page->memcg_data);
-	if (!memcg_data)
+	if (!memcg_data || PageTail(page))
 		goto out_unlock;
 
 	if (memcg_data & MEMCG_DATA_OBJEXTS)