Message ID | 1559467380-8549-4-git-send-email-laoar.shao@gmail.com (mailing list archive) |
---|---|
State | New, archived |
Series | mm: improvement in shrink slab |
On Sun, Jun 02, 2019 at 05:23:00PM +0800, Yafang Shao wrote:
> In node reclaim, may_shrinkslab is 0 by default, hence shrink_slab will
> never be performed in it. However, shrink_slab should be performed if
> the reclaimable slab is over the min slab limit.
>
> If the reclaimable pagecache is less than min_unmapped_pages while the
> reclaimable slab is greater than min_slab_pages, we only shrink slab.
> Otherwise min_unmapped_pages would be useless under this condition.
>
> reclaim_state.reclaimed_slab tells us how many pages are reclaimed in
> shrink_slab.
>
> This issue is very easy to reproduce: first continuously cat a random
> non-existent file to produce more and more dentries, then read a big
> file to produce page cache. Finally you will find that the dentries
> will never be shrunk.
>
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> ---
>  mm/vmscan.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index e0c5669..d52014f 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -4157,6 +4157,8 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
>  	p->reclaim_state = &reclaim_state;
>
>  	if (node_pagecache_reclaimable(pgdat) > pgdat->min_unmapped_pages) {
> +		sc.may_shrinkslab = (pgdat->min_slab_pages <
> +				node_page_state(pgdat, NR_SLAB_RECLAIMABLE));
>  		/*
>  		 * Free memory by calling shrink node with increasing
>  		 * priorities until we have enough memory freed.
> @@ -4164,6 +4166,28 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
>  		do {
>  			shrink_node(pgdat, &sc);
>  		} while (sc.nr_reclaimed < nr_pages && --sc.priority >= 0);
> +	} else {
> +		/*
> +		 * If the reclaimable pagecache is not greater than
> +		 * min_unmapped_pages, only reclaim the slab.
> +		 */
> +		struct mem_cgroup *memcg;
> +		struct mem_cgroup_reclaim_cookie reclaim = {
> +			.pgdat = pgdat,
> +		};
> +
> +		do {
> +			reclaim.priority = sc.priority;
> +			memcg = mem_cgroup_iter(NULL, NULL, &reclaim);
> +			do {
> +				shrink_slab(sc.gfp_mask, pgdat->node_id,
> +					    memcg, sc.priority);
> +			} while ((memcg = mem_cgroup_iter(NULL, memcg,
> +							  &reclaim)));
> +
> +			sc.nr_reclaimed += reclaim_state.reclaimed_slab;
> +			reclaim_state.reclaimed_slab = 0;
> +		} while (sc.nr_reclaimed < nr_pages && --sc.priority >= 0);
>  	}
>
>  	p->reclaim_state = NULL;
> --
> 1.8.3.1
>

Hi Yafang,

Just a few questions regarding this patch.

Don't you want to check whether the number of reclaimable slab pages is
greater than pgdat->min_slab_pages before reclaiming from slab in your
else statement? Where is that check? It looks like you shrink slab when
(node_pagecache_reclaimable(pgdat) > pgdat->min_unmapped_pages) is
false, not when (pgdat->min_slab_pages <
node_page_state(pgdat, NR_SLAB_RECLAIMABLE)) is true. What do you think?

Also, would it be better to move the update of sc.may_shrinkslab outside
the if statement where we check min_unmapped_pages, and then use an
else if (sc.may_shrinkslab) rather than a plain else before shrinking
the slab? I think that might read better.

Thank you
Bharath
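For illustration, here is a minimal sketch of the restructuring suggested in the mail above. It is hypothetical and untested, assumes the same __node_reclaim() context as the quoted patch, and is not part of the submitted change:

	/*
	 * Hypothetical sketch of the suggestion above, not a tested patch:
	 * compute sc.may_shrinkslab before the min_unmapped_pages check and
	 * gate the slab-only path on it with an "else if".
	 */
	sc.may_shrinkslab = (pgdat->min_slab_pages <
			     node_page_state(pgdat, NR_SLAB_RECLAIMABLE));

	if (node_pagecache_reclaimable(pgdat) > pgdat->min_unmapped_pages) {
		/* Reclaim pagecache (and slab, if allowed), as in the patch. */
		do {
			shrink_node(pgdat, &sc);
		} while (sc.nr_reclaimed < nr_pages && --sc.priority >= 0);
	} else if (sc.may_shrinkslab) {
		/* The slab-only reclaim loop from the patch would go here. */
	}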
On Sun, Jun 2, 2019 at 9:58 PM Bharath Vedartham <linux.bhar@gmail.com> wrote:
>
> On Sun, Jun 02, 2019 at 05:23:00PM +0800, Yafang Shao wrote:
> > In node reclaim, may_shrinkslab is 0 by default, hence shrink_slab will
> > never be performed in it. However, shrink_slab should be performed if
> > the reclaimable slab is over the min slab limit.
> >
> > If the reclaimable pagecache is less than min_unmapped_pages while the
> > reclaimable slab is greater than min_slab_pages, we only shrink slab.
> > Otherwise min_unmapped_pages would be useless under this condition.
> >
> > reclaim_state.reclaimed_slab tells us how many pages are reclaimed in
> > shrink_slab.
> >
> > This issue is very easy to reproduce: first continuously cat a random
> > non-existent file to produce more and more dentries, then read a big
> > file to produce page cache. Finally you will find that the dentries
> > will never be shrunk.
> >
> > Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> > ---
> >  mm/vmscan.c | 24 ++++++++++++++++++++++++
> >  1 file changed, 24 insertions(+)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index e0c5669..d52014f 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -4157,6 +4157,8 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
> >  	p->reclaim_state = &reclaim_state;
> >
> >  	if (node_pagecache_reclaimable(pgdat) > pgdat->min_unmapped_pages) {
> > +		sc.may_shrinkslab = (pgdat->min_slab_pages <
> > +				node_page_state(pgdat, NR_SLAB_RECLAIMABLE));
> >  		/*
> >  		 * Free memory by calling shrink node with increasing
> >  		 * priorities until we have enough memory freed.
> > @@ -4164,6 +4166,28 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
> >  		do {
> >  			shrink_node(pgdat, &sc);
> >  		} while (sc.nr_reclaimed < nr_pages && --sc.priority >= 0);
> > +	} else {
> > +		/*
> > +		 * If the reclaimable pagecache is not greater than
> > +		 * min_unmapped_pages, only reclaim the slab.
> > +		 */
> > +		struct mem_cgroup *memcg;
> > +		struct mem_cgroup_reclaim_cookie reclaim = {
> > +			.pgdat = pgdat,
> > +		};
> > +
> > +		do {
> > +			reclaim.priority = sc.priority;
> > +			memcg = mem_cgroup_iter(NULL, NULL, &reclaim);
> > +			do {
> > +				shrink_slab(sc.gfp_mask, pgdat->node_id,
> > +					    memcg, sc.priority);
> > +			} while ((memcg = mem_cgroup_iter(NULL, memcg,
> > +							  &reclaim)));
> > +
> > +			sc.nr_reclaimed += reclaim_state.reclaimed_slab;
> > +			reclaim_state.reclaimed_slab = 0;
> > +		} while (sc.nr_reclaimed < nr_pages && --sc.priority >= 0);
> >  	}
> >
> >  	p->reclaim_state = NULL;
> > --
> > 1.8.3.1
> >
>
> Hi Yafang,
>
> Just a few questions regarding this patch.
>
> Don't you want to check whether the number of reclaimable slab pages is
> greater than pgdat->min_slab_pages before reclaiming from slab in your
> else statement? Where is that check? It looks like you shrink slab when
> (node_pagecache_reclaimable(pgdat) > pgdat->min_unmapped_pages) is
> false, not when (pgdat->min_slab_pages <
> node_page_state(pgdat, NR_SLAB_RECLAIMABLE)) is true. What do you think?
>

Hi Bharath,

Because in __node_reclaim(), if node_pagecache_reclaimable(pgdat) is not
greater than pgdat->min_unmapped_pages, then the reclaimable slab pages
must be greater than pgdat->min_slab_pages, so we don't need to check it
again. Please see the code in node_reclaim():

node_reclaim()
	if (node_pagecache_reclaimable(pgdat) <= pgdat->min_unmapped_pages &&
	    node_page_state(pgdat, NR_SLAB_RECLAIMABLE) <= pgdat->min_slab_pages)
		return NODE_RECLAIM_FULL;
	__node_reclaim();

> Also, would it be better to move the update of sc.may_shrinkslab outside
> the if statement where we check min_unmapped_pages, and then use an
> else if (sc.may_shrinkslab) rather than a plain else before shrinking
> the slab? I think that might read better.
>

sc.may_shrinkslab is only used in shrink_node(), and shrink_node() is not
called in the else branch, so there is no need to update sc.may_shrinkslab
outside the if statement.

Hope this clarifies it. Feel free to ask if you still have any questions.

Thanks
Yafang
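To make the implication in the reply above explicit, here is a small standalone userspace model of the gate in node_reclaim(); the function and the numbers are hypothetical and only demonstrate the logic, they are not kernel code:

#include <assert.h>
#include <stdbool.h>

/*
 * Model of the early-return test quoted above: __node_reclaim() only runs
 * when this returns true, i.e. at least one of the two limits is exceeded.
 */
static bool would_enter_node_reclaim(unsigned long pagecache_reclaimable,
				     unsigned long min_unmapped_pages,
				     unsigned long slab_reclaimable,
				     unsigned long min_slab_pages)
{
	return !(pagecache_reclaimable <= min_unmapped_pages &&
		 slab_reclaimable <= min_slab_pages);
}

int main(void)
{
	/* Arbitrary illustrative numbers. */
	unsigned long min_unmapped = 100, min_slab = 200;
	unsigned long pagecache = 50;	/* not over min_unmapped */
	unsigned long slab = 500;	/* over min_slab */

	if (would_enter_node_reclaim(pagecache, min_unmapped, slab, min_slab) &&
	    pagecache <= min_unmapped) {
		/* The only way both can hold is if slab exceeds min_slab. */
		assert(slab > min_slab);
	}
	return 0;
}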
diff --git a/mm/vmscan.c b/mm/vmscan.c
index e0c5669..d52014f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4157,6 +4157,8 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
 	p->reclaim_state = &reclaim_state;

 	if (node_pagecache_reclaimable(pgdat) > pgdat->min_unmapped_pages) {
+		sc.may_shrinkslab = (pgdat->min_slab_pages <
+				node_page_state(pgdat, NR_SLAB_RECLAIMABLE));
 		/*
 		 * Free memory by calling shrink node with increasing
 		 * priorities until we have enough memory freed.
@@ -4164,6 +4166,28 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
 		do {
 			shrink_node(pgdat, &sc);
 		} while (sc.nr_reclaimed < nr_pages && --sc.priority >= 0);
+	} else {
+		/*
+		 * If the reclaimable pagecache is not greater than
+		 * min_unmapped_pages, only reclaim the slab.
+		 */
+		struct mem_cgroup *memcg;
+		struct mem_cgroup_reclaim_cookie reclaim = {
+			.pgdat = pgdat,
+		};
+
+		do {
+			reclaim.priority = sc.priority;
+			memcg = mem_cgroup_iter(NULL, NULL, &reclaim);
+			do {
+				shrink_slab(sc.gfp_mask, pgdat->node_id,
+					    memcg, sc.priority);
+			} while ((memcg = mem_cgroup_iter(NULL, memcg,
+							  &reclaim)));
+
+			sc.nr_reclaimed += reclaim_state.reclaimed_slab;
+			reclaim_state.reclaimed_slab = 0;
+		} while (sc.nr_reclaimed < nr_pages && --sc.priority >= 0);
 	}

 	p->reclaim_state = NULL;
In node reclaim, may_shrinkslab is 0 by default, hence shrink_slab will
never be performed in it. However, shrink_slab should be performed if the
reclaimable slab is over the min slab limit.

If the reclaimable pagecache is less than min_unmapped_pages while the
reclaimable slab is greater than min_slab_pages, we only shrink slab.
Otherwise min_unmapped_pages would be useless under this condition.

reclaim_state.reclaimed_slab tells us how many pages are reclaimed in
shrink_slab.

This issue is very easy to reproduce: first continuously cat a random
non-existent file to produce more and more dentries, then read a big file
to produce page cache. Finally you will find that the dentries will never
be shrunk.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 mm/vmscan.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)
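A hypothetical userspace reproducer along the lines described in the commit message; the file path and loop count are placeholders, not part of the patch. It grows the dentry cache with negative dentries from failed lookups, then fills the page cache by reading a large file:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char name[64];
	static char buf[1 << 20];
	long i;
	int fd;

	/*
	 * Step 1: look up many non-existent files; each failed lookup
	 * leaves a negative dentry behind, growing the dentry cache.
	 */
	for (i = 0; i < 10000000L; i++) {	/* arbitrary count */
		snprintf(name, sizeof(name), "/tmp/no-such-file-%ld", i);
		if (open(name, O_RDONLY) >= 0)
			return 1;	/* unexpectedly exists */
	}

	/* Step 2: read a big file to fill the page cache (placeholder path). */
	fd = open("/path/to/big-file", O_RDONLY);
	if (fd < 0)
		return 1;
	while (read(fd, buf, sizeof(buf)) > 0)
		;
	close(fd);
	return 0;
}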