
[mm-unstable,v1,2/2] mm, swap: add pages allocated for struct swap_cgroup to vmstat

Message ID 20241031224551.1736113-3-kinseyho@google.com
State New
Series Track pages allocated for struct

Commit Message

Kinsey Ho Oct. 31, 2024, 10:45 p.m. UTC
Export the number of pages allocated for storing struct swap_cgroup in
vmstat using global system-wide counters.

Signed-off-by: Kinsey Ho <kinseyho@google.com>
---
 include/linux/vmstat.h | 3 +++
 mm/swap_cgroup.c       | 3 +++
 mm/vmstat.c            | 3 +++
 3 files changed, 9 insertions(+)

Comments

Michal Koutný Nov. 4, 2024, 4:22 p.m. UTC | #1
Hello Kinsey.

On Thu, Oct 31, 2024 at 10:45:51PM GMT, Kinsey Ho <kinseyho@google.com> wrote:
> Export the number of pages allocated for storing struct swap_cgroup in
> vmstat using global system-wide counters.

This consumption is quite static (it only changes on swapon/swapoff).
The resulting value can already be calculated as a linear combination
of the entries in /proc/swaps (if you know the right coefficients).

I'm not sure this warrants the new entry (or is my assumption about
static-ness wrong?)

Michal

Patch

diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index ac4d42c4fabd..227e951d1219 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -41,6 +41,9 @@  enum vm_stat_item {
 	NR_DIRTY_BG_THRESHOLD,
 	NR_MEMMAP_PAGES,	/* page metadata allocated through buddy allocator */
 	NR_MEMMAP_BOOT_PAGES,	/* page metadata allocated through boot allocator */
+#if defined(CONFIG_MEMCG) && defined(CONFIG_SWAP)
+	NR_SWAP_CGROUP_PAGES,	/* allocated to store struct swap_cgroup */
+#endif
 	NR_VM_STAT_ITEMS,
 };
 
diff --git a/mm/swap_cgroup.c b/mm/swap_cgroup.c
index da1278f0563b..82eda8a3efe1 100644
--- a/mm/swap_cgroup.c
+++ b/mm/swap_cgroup.c
@@ -53,6 +53,8 @@  static int swap_cgroup_prepare(int type)
 		if (!(idx % SWAP_CLUSTER_MAX))
 			cond_resched();
 	}
+	mod_global_page_state(NR_SWAP_CGROUP_PAGES, ctrl->length);
+
 	return 0;
 not_enough_page:
 	max = idx;
@@ -228,6 +230,7 @@  void swap_cgroup_swapoff(int type)
 			if (!(i % SWAP_CLUSTER_MAX))
 				cond_resched();
 		}
+		mod_global_page_state(NR_SWAP_CGROUP_PAGES, -length);
 		vfree(map);
 	}
 }
diff --git a/mm/vmstat.c b/mm/vmstat.c
index e5a6dd5106c2..259574261ec1 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1295,6 +1295,9 @@  const char * const vmstat_text[] = {
 	"nr_dirty_background_threshold",
 	"nr_memmap_pages",
 	"nr_memmap_boot_pages",
+#if defined(CONFIG_MEMCG) && defined(CONFIG_SWAP)
+	"nr_swap_cgroup_pages",
+#endif
 
 #if defined(CONFIG_VM_EVENT_COUNTERS) || defined(CONFIG_MEMCG)
 	/* enum vm_event_item counters */