[v3] mm/vmscan: take min_slab_pages into account when try to call shrink_node

Message ID 20220425112118.20924-1-linmiaohe@huawei.com (mailing list archive)
State New
Series [v3] mm/vmscan: take min_slab_pages into account when try to call shrink_node

Commit Message

Miaohe Lin April 25, 2022, 11:21 a.m. UTC
Since commit 6b4f7799c6a5 ("mm: vmscan: invoke slab shrinkers from
shrink_zone()"), slab reclaim and lru page reclaim are done together
in shrink_node(). So we should also take min_slab_pages into account
when deciding whether to call shrink_node().

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
---
v3:
  This patch is pending verifying. Split it out to make it easier
  to move forward.
---
 mm/vmscan.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Patch

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a2752e8fc879..1049f5324765 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4718,7 +4718,8 @@  static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
 	noreclaim_flag = memalloc_noreclaim_save();
 	set_task_reclaim_state(p, &sc.reclaim_state);
 
-	if (node_pagecache_reclaimable(pgdat) > pgdat->min_unmapped_pages) {
+	if (node_pagecache_reclaimable(pgdat) > pgdat->min_unmapped_pages ||
+	    node_page_state_pages(pgdat, NR_SLAB_RECLAIMABLE_B) > pgdat->min_slab_pages) {
 		/*
 		 * Free memory by calling shrink node with increasing
 		 * priorities until we have enough memory freed.