| Message ID | 20241209124233.3543f237@fangorn (mailing list archive) |
|---|---|
| State | New |
| Series | mm: allow exiting processes to exceed the memory.max limit |
On Mon 09-12-24 12:42:33, Rik van Riel wrote:
> It is possible for programs to get stuck in exit, when their
> memcg is at or above the memory.max limit, and things like
> the do_futex() call from mm_release() need to page memory in.
>
> This can hang forever, but it really doesn't have to.

Are you sure this is really happening?

> The amount of memory that the exit path will page into memory
> should be relatively small, and letting exit proceed faster
> will free up memory faster.
>
> Allow PF_EXITING tasks to bypass the cgroup memory.max limit
> the same way PF_MEMALLOC already does.
>
> Signed-off-by: Rik van Riel <riel@surriel.com>
> ---
>  mm/memcontrol.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 7b3503d12aaf..d1abef1138ff 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2218,11 +2218,12 @@ int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
>
>  	/*
>  	 * Prevent unbounded recursion when reclaim operations need to
> -	 * allocate memory. This might exceed the limits temporarily,
> -	 * but we prefer facilitating memory reclaim and getting back
> -	 * under the limit over triggering OOM kills in these cases.
> +	 * allocate memory, or the process is exiting. This might exceed
> +	 * the limits temporarily, but we prefer facilitating memory reclaim
> +	 * and getting back under the limit over triggering OOM kills in
> +	 * these cases.
>  	 */
> -	if (unlikely(current->flags & PF_MEMALLOC))
> +	if (unlikely(current->flags & (PF_MEMALLOC | PF_EXITING)))
>  		goto force;

We already have task_is_dying() bail out. Why is that insufficient? It is
currently hitting when the oom situation is triggered, while your patch is
triggering this much earlier. We used to do that in the past but this got
changed by a4ebf1b6ca1e ("memcg: prohibit unconditional exceeding the limit
of dying tasks").
I believe the situation in vmalloc has changed since then, but I suspect the
fundamental problem stays: dying tasks could still allocate a lot of memory.
There is still this:

: It has been observed that it is not really hard to trigger these
: bypasses and cause global OOM situation.

that really needs to be re-evaluated.
On Mon, 2024-12-09 at 19:08 +0100, Michal Hocko wrote:
> On Mon 09-12-24 12:42:33, Rik van Riel wrote:
> > It is possible for programs to get stuck in exit, when their
> > memcg is at or above the memory.max limit, and things like
> > the do_futex() call from mm_release() need to page memory in.
> >
> > This can hang forever, but it really doesn't have to.
>
> Are you sure this is really happening?

It turns out it wasn't really forever. After about a day, the zombie task
I was bpftracing, to figure out exactly what was going wrong, finally
succeeded in exiting.

I got as far as seeing try_to_free_mem_cgroup_pages return 0 many times
in a row, looping in try_charge_memcg, which occasionally returned -ENOMEM
to the caller, who then retried several times. Each invocation of
try_to_free_mem_cgroup_pages also saw a large number of unsuccessful calls
to shrink_folio_list.

It looks like what might be happening instead is that faultin_page()
returns 0 after getting back VM_FAULT_OOM from handle_mm_fault(), causing
__get_user_pages() to loop.

Let me send a patch to fix that, instead!
On Mon, 2024-12-09 at 19:08 +0100, Michal Hocko wrote:
> On Mon 09-12-24 12:42:33, Rik van Riel wrote:
> > It is possible for programs to get stuck in exit, when their
> > memcg is at or above the memory.max limit, and things like
> > the do_futex() call from mm_release() need to page memory in.
> >
> > This can hang forever, but it really doesn't have to.
>
> Are you sure this is really happening?

The getting stuck is happening, albeit not stuck forever; exit takes hours
before finally completing.

However, the fix may be to just allow the exiting task to bypass the
"zswap no writeback" setting and write some of the memory of its own
cgroup to swap, to get out of the livelock:

https://lkml.org/lkml/2024/12/11/10102
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7b3503d12aaf..d1abef1138ff 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2218,11 +2218,12 @@ int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,

 	/*
 	 * Prevent unbounded recursion when reclaim operations need to
-	 * allocate memory. This might exceed the limits temporarily,
-	 * but we prefer facilitating memory reclaim and getting back
-	 * under the limit over triggering OOM kills in these cases.
+	 * allocate memory, or the process is exiting. This might exceed
+	 * the limits temporarily, but we prefer facilitating memory reclaim
+	 * and getting back under the limit over triggering OOM kills in
+	 * these cases.
 	 */
-	if (unlikely(current->flags & PF_MEMALLOC))
+	if (unlikely(current->flags & (PF_MEMALLOC | PF_EXITING)))
 		goto force;

 	if (unlikely(task_in_memcg_oom(current)))