| Message ID | 1560156147-12314-1-git-send-email-laoar.shao@gmail.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | mm/vmscan: call vmpressure_prio() in kswapd reclaim path |
On Mon, 10 Jun 2019 16:42:27 +0800 Yafang Shao <laoar.shao@gmail.com> wrote:

> Once the reclaim scanning depth goes too deep, it always means we are
> under memory pressure.
> This behavior should be captured by vmpressure_prio(), which should run
> every time the vmscan reclaim priority (scanning depth) changes.
> It's possible for the scanning depth to go deep in the kswapd reclaim
> path, so vmpressure_prio() should be called in this path.
>
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>

What effect does this change have upon userspace?

Presumably you observed some behaviour(?) and that behaviour was
undesirable(?) and the patch changed that behaviour to something
else(?) and this new behaviour is better for some reason(?).
On Tue, Jun 11, 2019 at 5:12 AM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Mon, 10 Jun 2019 16:42:27 +0800 Yafang Shao <laoar.shao@gmail.com> wrote:
>
> > Once the reclaim scanning depth goes too deep, it always means we are
> > under memory pressure.
> > This behavior should be captured by vmpressure_prio(), which should run
> > every time the vmscan reclaim priority (scanning depth) changes.
> > It's possible for the scanning depth to go deep in the kswapd reclaim
> > path, so vmpressure_prio() should be called in this path.
> >
> > Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
>
> What effect does this change have upon userspace?
>
> Presumably you observed some behaviour(?) and that behaviour was
> undesirable(?) and the patch changed that behaviour to something
> else(?) and this new behaviour is better for some reason(?).
>

When there is little free memory, userspace can receive the critical
memory pressure event earlier, because before we fall into direct
reclaim we always wake up kswapd first. Currently the vmpressure work
(vmpressure_work_fn) can only be scheduled from the direct reclaim
path; with this change it can also be scheduled from the kswapd
reclaim path.

I think receiving the critical memory pressure event earlier gives
userspace a better chance to do something to prevent a random OOM.

With this change, the vmpressure work will be scheduled more
frequently than before when the system is under memory pressure.

Thanks
Yafang
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b79f584..1fbd3be 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3609,8 +3609,11 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
 		if (nr_boost_reclaim && !nr_reclaimed)
 			break;
 
-		if (raise_priority || !nr_reclaimed)
+		if (raise_priority || !nr_reclaimed) {
+			vmpressure_prio(sc.gfp_mask, sc.target_mem_cgroup,
+					sc.priority);
 			sc.priority--;
+		}
 	} while (sc.priority >= 1);
 
 	if (!sc.nr_reclaimed)
Once the reclaim scanning depth goes too deep, it always means we are
under memory pressure.
This behavior should be captured by vmpressure_prio(), which should run
every time the vmscan reclaim priority (scanning depth) changes.
It's possible for the scanning depth to go deep in the kswapd reclaim
path, so vmpressure_prio() should be called in this path.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 mm/vmscan.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)