
[2/2] mm/vmscan: Fix pgdemote_* accounting with lru_gen_enabled

Message ID 20250110122133.423481-2-lizhijian@fujitsu.com (mailing list archive)
State New
Series [1/2] mm/vmscan: Accumulate nr_demoted for accurate demotion statistics

Commit Message

Zhijian Li (Fujitsu) Jan. 10, 2025, 12:21 p.m. UTC
Commit f77f0c751478 ("mm,memcg: provide per-cgroup counters for NUMA balancing operations")
moved the accounting of PGDEMOTE_* statistics to shrink_inactive_list().
However, shrink_inactive_list() is not called when lru_gen_enabled() is
true, leaving the demotion statistics unchanged even though demotion
events actually occur.

Add the PGDEMOTE_* accounting to evict_folios(), ensuring that demotion
statistics are updated correctly whether or not lru_gen is enabled.
Accurate counters matter for systems that rely on these NUMA balancing
metrics for performance tuning and resource management.
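
For illustration only, the following is a minimal userspace sketch of the
idea behind the fix (not kernel code; reclaimer_offset_model() and
account_demotion() are hypothetical stand-ins for reclaimer_offset() and
__mod_lruvec_state()): every eviction path, including the lru_gen
evict_folios() path, has to bump the per-context PGDEMOTE_* counter,
otherwise folios are demoted without being counted.

/*
 * Minimal userspace model of per-context demotion accounting
 * (illustration only; the kernel uses reclaimer_offset() and
 * __mod_lruvec_state() on the PGDEMOTE_* node_stat items).
 */
#include <stdio.h>

/* Mirrors the ordering of PGDEMOTE_KSWAPD/_DIRECT/_KHUGEPAGED. */
enum demote_counter { DEMOTE_KSWAPD, DEMOTE_DIRECT, DEMOTE_KHUGEPAGED, NR_DEMOTE };

static unsigned long demote_stat[NR_DEMOTE];

/* Hypothetical stand-in for reclaimer_offset(): pick the counter
 * matching the current reclaim context. */
static enum demote_counter reclaimer_offset_model(int from_kswapd, int from_khugepaged)
{
	if (from_kswapd)
		return DEMOTE_KSWAPD;
	if (from_khugepaged)
		return DEMOTE_KHUGEPAGED;
	return DEMOTE_DIRECT;
}

/* Every eviction path (classic shrink_inactive_list() as well as the
 * lru_gen evict_folios() path) must account what it demoted. */
static void account_demotion(unsigned long nr_demoted, int from_kswapd, int from_khugepaged)
{
	demote_stat[reclaimer_offset_model(from_kswapd, from_khugepaged)] += nr_demoted;
}

int main(void)
{
	account_demotion(32, 1, 0);	/* kswapd demoted 32 folios */
	account_demotion(8, 0, 0);	/* direct reclaim demoted 8 folios */
	printf("pgdemote_kswapd=%lu pgdemote_direct=%lu\n",
	       demote_stat[DEMOTE_KSWAPD], demote_stat[DEMOTE_DIRECT]);
	return 0;
}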

Fixes: f77f0c751478 ("mm,memcg: provide per-cgroup counters for NUMA balancing operations")
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
---
 mm/vmscan.c | 2 ++
 1 file changed, 2 insertions(+)

Patch

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 430d580e37dd..f2d279de06c4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4642,6 +4642,8 @@  static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
 		reset_batch_size(walk);
 	}
 
+	__mod_lruvec_state(lruvec, PGDEMOTE_KSWAPD + reclaimer_offset(),
+			   stat.nr_demoted);
 	item = PGSTEAL_KSWAPD + reclaimer_offset();
 	if (!cgroup_reclaim(sc))
 		__count_vm_events(item, reclaimed);