@@ -684,6 +684,11 @@ Where:
node locality page counters (N0 == node0, N1 == node1, ...) and the kernel page
size, in KB, that is backing the mapping up.
+Note that some kernel configurations do not track the precise number of
+times a page that is part of a larger allocation (e.g., THP) is mapped. In
+these configurations, "mapmax" might correspond to the average number of
+mappings per page in such a larger allocation instead.
+
1.2 Kernel data
---------------
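
For illustration, a /proc/<pid>/numa_maps line exposing "mapmax" might look
like this (the path and all values here are hypothetical):

    7f4680000000 default file=/usr/bin/example mapped=120 mapmax=12 N0=80 N1=40 kernelpagesize_kB=4

Here, the most-mapped page in the range is mapped 12 times, 80 pages are
allocated on node 0, 40 on node 1, and the backing kernel page size is 4 KB.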
@@ -2872,7 +2872,13 @@ static void gather_stats(struct page *page, struct numa_maps *md, int pte_dirty,
unsigned long nr_pages)
{
struct folio *folio = page_folio(page);
- int count = folio_precise_page_mapcount(folio, page);
+ int count;
+
+#ifdef CONFIG_PAGE_MAPCOUNT
+ count = folio_precise_page_mapcount(folio, page);
+#else
+	count = max_t(int, folio_average_page_mapcount(folio), 1);
+#endif
md->pages += nr_pages;
if (pte_dirty || folio_test_dirty(folio))
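
Since gather_stats() only uses "count" to update md->mapmax, clamping with
max_t() ensures a mapped page never reports fewer than one mapping even when
the average rounds down to zero. For reference, a helper with the averaging
semantics described above could be sketched roughly as follows; this is only
an illustration assuming folio_mapcount() and folio_nr_pages() are available,
and the real folio_average_page_mapcount() may round differently or
special-case entire (e.g., PMD) mappings:

/*
 * Sketch only: one plausible way to derive an average per-page
 * mapcount for a folio. Not the actual kernel implementation.
 */
static inline int folio_average_page_mapcount_sketch(struct folio *folio)
{
	/* Small folios still maintain a precise per-page mapcount. */
	if (!folio_test_large(folio))
		return atomic_read(&folio->_mapcount) + 1;

	/* Total mappings of all pages in the folio, averaged per page. */
	return folio_mapcount(folio) / folio_nr_pages(folio);
}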
Let's implement an alternative when per-page mapcounts in large folios are
no longer maintained -- soon with CONFIG_NO_PAGE_MAPCOUNT.

For calculating "mapmax", we now use the average per-page mapcount in a
large folio instead of the per-page mapcount. For hugetlb folios and
folios that are not partially mapped into MMs, there is no change.

Likely, this change will not matter much in practice, and an alternative
might be to simply remove this stat with CONFIG_NO_PAGE_MAPCOUNT. However,
there might be value to it, so let's keep it like that and document the
behavior.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 Documentation/filesystems/proc.rst | 5 +++++
 fs/proc/task_mmu.c                 | 8 +++++++-
 2 files changed, 12 insertions(+), 1 deletion(-)
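
As a worked example of the documented behavior (numbers are illustrative):
take a 512-page THP mapped completely into one MM and half-mapped (256
pages) into a second MM. With precise per-page mapcounts, the 256
doubly-mapped pages yield "mapmax" == 2. With averaging, the total mapcount
is 512 + 256 = 768 across 512 pages, i.e., an average of 1.5, so "mapmax"
reports 1 or 2 depending on rounding. A folio mapped completely into both
MMs averages to exactly 2 either way, which is why folios that are not
partially mapped into MMs see no change.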