
[v2,2/2] mm, memcg: don't try to kill a process if memcg is not populated

Message ID 20200504042621.10334-3-laoar.shao@gmail.com (mailing list archive)
State New, archived
Series memcg oom: don't try to kill a process if there is no process

Commit Message

Yafang Shao May 4, 2020, 4:26 a.m. UTC
Recently Shakeel reported an issue which also confused me several months
earlier. Below is his report -
Lowering memory.max can trigger an oom-kill if the reclaim does not
succeed. However if oom-killer does not find a process for killing, it
dumps a lot of warnings.
Deleting a memcg does not reclaim memory from it and the memory can
linger till there is memory pressure. One normal way to proactively
reclaim such memory is to set memory.max to 0 just before deleting the
memcg. However if some of the memcg's memory is pinned by others, this
operation can trigger an oom-kill without any process and thus can log a
lot of unneeded warnings. So, ignore all such warnings from memory.max.

A better way to avoid this issue is to avoid trying to kill a process if
memcg is not populated.
Note that OOM is different from OOM kill. OOM is a status that the
system or memcg is out of memory, while OOM kill is a result that a
process inside this memcg is killed when this memcg is in OOM status.
That is the same reason why there are both MEMCG_OOM and MEMCG_OOM_KILL
events. If we already know that there is nothing to kill, i.e. the memcg
is not populated, then we don't need to try.

This is basically why setting memory.max to 0 is better than setting
memory.high to 0 before deletion: the reason is remote charging. High
reclaim does not work for a remote memcg, and the usage can grow until
it hits max or global pressure.

[shakeelb@google.com: improve commit log]
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Roman Gushchin <guro@fb.com>
Cc: Greg Thelen <gthelen@google.com>
---
 mm/memcontrol.c | 4 ++++
 1 file changed, 4 insertions(+)
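
For context, here is a simplified sketch of the reclaim loop in
mm/memcontrol.c:memory_max_write() that this patch modifies. This is a
paraphrase, not the exact upstream code; "max", "drained" and "nr_reclaims"
are locals of that function, and mem_cgroup_oom_kill() is the helper name
introduced by patch 1/2 of this series:

/* Simplified sketch of the memory.max write loop (details omitted). */
for (;;) {
	unsigned long nr_pages = page_counter_read(&memcg->memory);

	if (nr_pages <= max)
		break;				/* usage is below the new limit */

	if (signal_pending(current))
		break;				/* let the writer bail out */

	if (!drained) {
		drain_all_stock(memcg);		/* flush per-cpu charge caches */
		drained = true;
		continue;
	}

	if (nr_reclaims) {
		if (!try_to_free_mem_cgroup_pages(memcg, nr_pages - max,
						  GFP_KERNEL, true))
			nr_reclaims--;		/* a few direct-reclaim retries */
		continue;
	}

	/* Reclaim gave up: count the OOM event for this memcg ... */
	memcg_memory_event(memcg, MEMCG_OOM);

	/*
	 * ... and this is where the patch inserts
	 *	if (!cgroup_is_populated(memcg->css.cgroup))
	 *		break;
	 * so an unpopulated memcg skips the (futile) kill attempt.
	 */
	if (!mem_cgroup_oom_kill(memcg, GFP_KERNEL, 0))
		break;				/* no eligible victim found */
}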

Comments

Michal Hocko May 4, 2020, 8:18 a.m. UTC | #1
[It would be really great if a newer version was posted only after there
was a wider consensus on the approach.]

On Mon 04-05-20 00:26:21, Yafang Shao wrote:
> Recently Shakeel reported an issue which also confused me several months
> earlier. Below is his report -
> Lowering memory.max can trigger an oom-kill if the reclaim does not
> succeed. However if oom-killer does not find a process for killing, it
> dumps a lot of warnings.
> Deleting a memcg does not reclaim memory from it and the memory can
> linger till there is memory pressure. One normal way to proactively
> reclaim such memory is to set memory.max to 0 just before deleting the
> memcg. However if some of the memcg's memory is pinned by others, this
> operation can trigger an oom-kill without any process and thus can log a
> lot of unneeded warnings. So, ignore all such warnings from memory.max.
> 
> A better way to avoid this issue is to avoid trying to kill a process if
> memcg is not populated.
> Note that OOM is different from OOM kill. OOM is a status that the
> system or memcg is out of memory, while OOM kill is a result that a
> process inside this memcg is killed when this memcg is in OOM status.

Agreed.

> That is the same reason why there are both MEMCG_OOM and MEMCG_OOM_KILL
> events. If we already know that there is nothing to kill, i.e. the memcg
> is not populated, then we don't need to try.

OK, but you are not explaining why a silent failure is really better
than an oom report under an oom situation. With your patch, there is
no failure reported to the user and there is also no sign that there
might be a problem, namely that the memcg leaves memory behind that is
not bound to any (killable) process. This could be important information.

Besides that I really do not see any actual problem that this would be
fixing. Reducing the hard limit is an operation which might trigger the
oom killer and leave an oom report behind. Having an OOM without any
tasks is pretty much a corner case and making it silent just makes
it harder to debug.

> This is basically why setting memory.max to 0 is better than setting
> memory.high to 0 before deletion: the reason is remote charging. High
> reclaim does not work for a remote memcg, and the usage can grow until
> it hits max or global pressure.
> 
> [shakeelb@google.com: improve commit log]
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> Reviewed-by: Shakeel Butt <shakeelb@google.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Roman Gushchin <guro@fb.com>
> Cc: Greg Thelen <gthelen@google.com>
> ---
>  mm/memcontrol.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 985edce98491..29afe3df9d98 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6102,6 +6102,10 @@ static ssize_t memory_max_write(struct kernfs_open_file *of,
>  		}
>  
>  		memcg_memory_event(memcg, MEMCG_OOM);
> +
> +		if (!cgroup_is_populated(memcg->css.cgroup))
> +			break;
> +
>  		if (!mem_cgroup_oom_kill(memcg, GFP_KERNEL, 0))
>  			break;
>  	}
> -- 
> 2.18.2
Yafang Shao May 4, 2020, 12:34 p.m. UTC | #2
On Mon, May 4, 2020 at 4:18 PM Michal Hocko <mhocko@kernel.org> wrote:
>
> [It would be really great if a newer version was posted only after there
> was a wider consensus on the approach.]
>
> On Mon 04-05-20 00:26:21, Yafang Shao wrote:
> > Recently Shakeel reported an issue which also confused me several months
> > earlier. Below is his report -
> > Lowering memory.max can trigger an oom-kill if the reclaim does not
> > succeed. However if oom-killer does not find a process for killing, it
> > dumps a lot of warnings.
> > Deleting a memcg does not reclaim memory from it and the memory can
> > linger till there is memory pressure. One normal way to proactively
> > reclaim such memory is to set memory.max to 0 just before deleting the
> > memcg. However if some of the memcg's memory is pinned by others, this
> > operation can trigger an oom-kill without any process and thus can log a
> > lot of unneeded warnings. So, ignore all such warnings from memory.max.
> >
> > A better way to avoid this issue is to avoid trying to kill a process if
> > memcg is not populated.
> > Note that OOM is different from OOM kill. OOM is a status that the
> > system or memcg is out of memory, while OOM kill is a result that a
> > process inside this memcg is killed when this memcg is in OOM status.
>
> Agreed.
>
> > That is the same reason why there are both MEMCG_OOM and MEMCG_OOM_KILL
> > events. If we already know that there is nothing to kill, i.e. the memcg
> > is not populated, then we don't need to try.
>
> OK, but you are not explaining why a silent failure is really better
> than an oom report under an oom situation. With your patch, there is
> no failure reported to the user and there is also no sign that there
> might be a problem, namely that the memcg leaves memory behind that is
> not bound to any (killable) process. This could be important information.
>

That is not a silent failure. An oom event will be reported.
The user can get this event from memory.events or memory.events.local if
they really care about it.
Especially when the admin sets memory.max to 0 to drop all the caches,
many oom logs are just noise; besides that, there are some side effects,
for example too many oom logs printed to a slow console may cause
latency spikes.
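
As a concrete illustration, a minimal userspace sketch that reads those
counters is below; "/sys/fs/cgroup/foo" is only a placeholder path and
assumes a cgroup v2 mount:

/* Minimal sketch: dump the oom/oom_kill counters of a memcg.
 * The cgroup path is a placeholder, adjust to the real memcg. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/fs/cgroup/foo/memory.events", "r");
	char line[128];
	unsigned long val;

	if (!f) {
		perror("memory.events");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* memory.events contains lines like "oom 3" and "oom_kill 0" */
		if (sscanf(line, "oom %lu", &val) == 1)
			printf("oom events:      %lu\n", val);
		else if (sscanf(line, "oom_kill %lu", &val) == 1)
			printf("oom-kill events: %lu\n", val);
	}
	fclose(f);
	return 0;
}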


> Besides that I really do not see any actual problem that this would be
> fixing.

Avoid printing too many oom logs.

> Reducing the hard limit is an operation which might trigger the
> oom killer and leave an oom report behind. Having an OOM without any
> tasks is pretty much a corner case and making it silent just makes
> it harder to debug.
>

This can only happen when the admin reduces memory.max, and the admin
should know how to check the result, for example via memory.events.

> > This is basically why setting memory.max to 0 is better than setting
> > memory.high to 0 before deletion: the reason is remote charging. High
> > reclaim does not work for a remote memcg, and the usage can grow until
> > it hits max or global pressure.
> >
> > [shakeelb@google.com: improve commit log]
> > Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> > Reviewed-by: Shakeel Butt <shakeelb@google.com>
> > Cc: Johannes Weiner <hannes@cmpxchg.org>
> > Cc: Michal Hocko <mhocko@kernel.org>
> > Cc: Roman Gushchin <guro@fb.com>
> > Cc: Greg Thelen <gthelen@google.com>
> > ---
> >  mm/memcontrol.c | 4 ++++
> >  1 file changed, 4 insertions(+)
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 985edce98491..29afe3df9d98 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -6102,6 +6102,10 @@ static ssize_t memory_max_write(struct kernfs_open_file *of,
> >               }
> >
> >               memcg_memory_event(memcg, MEMCG_OOM);
> > +
> > +             if (!cgroup_is_populated(memcg->css.cgroup))
> > +                     break;
> > +
> >               if (!mem_cgroup_oom_kill(memcg, GFP_KERNEL, 0))
> >                       break;
> >       }
> > --
> > 2.18.2
>
> --
> Michal Hocko
> SUSE Labs
Michal Hocko May 4, 2020, 12:46 p.m. UTC | #3
On Mon 04-05-20 20:34:01, Yafang Shao wrote:
> On Mon, May 4, 2020 at 4:18 PM Michal Hocko <mhocko@kernel.org> wrote:
> >
> > [It would be really great if a newer version was posted only after there
> > was a wider consensus on the approach.]
> >
> > On Mon 04-05-20 00:26:21, Yafang Shao wrote:
> > > Recently Shakeel reported an issue which also confused me several months
> > > earlier. Below is his report -
> > > Lowering memory.max can trigger an oom-kill if the reclaim does not
> > > succeed. However if oom-killer does not find a process for killing, it
> > > dumps a lot of warnings.
> > > Deleting a memcg does not reclaim memory from it and the memory can
> > > linger till there is memory pressure. One normal way to proactively
> > > reclaim such memory is to set memory.max to 0 just before deleting the
> > > memcg. However if some of the memcg's memory is pinned by others, this
> > > operation can trigger an oom-kill without any process and thus can log a
> > > lot of unneeded warnings. So, ignore all such warnings from memory.max.
> > >
> > > A better way to avoid this issue is to avoid trying to kill a process if
> > > memcg is not populated.
> > > Note that OOM is different from OOM kill. OOM is a status that the
> > > system or memcg is out of memory, while OOM kill is a result that a
> > > process inside this memcg is killed when this memcg is in OOM status.
> >
> > Agreed.
> >
> > > That is the same reason why there are both MEMCG_OOM and MEMCG_OOM_KILL
> > > events. If we already know that there is nothing to kill, i.e. the memcg
> > > is not populated, then we don't need to try.
> >
> > OK, but you are not explaining why a silent failure is really better
> > than an oom report under an oom situation. With your patch, there is
> > no failure reported to the user and there is also no sign that there
> > might be a problem, namely that the memcg leaves memory behind that is
> > not bound to any (killable) process. This could be important information.
> >
> 
> That is not a silent failure. An oom event will be reported.
> The user can get this event from memory.events or memory.events.local if
> they really care about it.

You are right. The oom situation will be reported (somehow), but there
might be several reasons why no task has been killed, and there is no
way to report that there were no eligible tasks.

> Especially when the admin sets memory.max to 0 to drop all the caches,
> many oom logs are just noise; besides that, there are some side effects,
> for example too many oom logs printed to a slow console may cause
> latency spikes.

But the oom situation and the oom report are simply something an admin
has to expect, especially when the hard limit is set to 0. With kmem
accounting there is no guarantee that the target will be met.
> 
> 
> > Besides that I really do not see any actual problem that this would be
> > fixing.
> 
> Avoid printing too many oom logs.

There is only a single oom report printed so I disagree this is really a
proper justification.

Unless you can come up with a better justification I am against this
patch. It unnecessarily reduces debugging tools while it doesn't really
provide any huge advantage. Changing the hard limit to an impossible
target is known to trigger the oom killer and the oom report is a part
of that. If the oom report is too noisy then we can discuss how to make
it more compact, but making ad-hoc exceptions like this one is not a
good solution.
Yafang Shao May 4, 2020, 3:24 p.m. UTC | #4
On Mon, May 4, 2020 at 8:46 PM Michal Hocko <mhocko@kernel.org> wrote:
>
> On Mon 04-05-20 20:34:01, Yafang Shao wrote:
> > On Mon, May 4, 2020 at 4:18 PM Michal Hocko <mhocko@kernel.org> wrote:
> > >
> > > [It would be really great if a newer version was posted only after there
> > > was a wider consensus on the approach.]
> > >
> > > On Mon 04-05-20 00:26:21, Yafang Shao wrote:
> > > > Recently Shakeel reported an issue which also confused me several months
> > > > earlier. Below is his report -
> > > > Lowering memory.max can trigger an oom-kill if the reclaim does not
> > > > succeed. However if oom-killer does not find a process for killing, it
> > > > dumps a lot of warnings.
> > > > Deleting a memcg does not reclaim memory from it and the memory can
> > > > linger till there is memory pressure. One normal way to proactively
> > > > reclaim such memory is to set memory.max to 0 just before deleting the
> > > > memcg. However if some of the memcg's memory is pinned by others, this
> > > > operation can trigger an oom-kill without any process and thus can log a
> > > > lot of unneeded warnings. So, ignore all such warnings from memory.max.
> > > >
> > > > A better way to avoid this issue is to avoid trying to kill a process if
> > > > memcg is not populated.
> > > > Note that OOM is different from OOM kill. OOM is a status that the
> > > > system or memcg is out of memory, while OOM kill is a result that a
> > > > process inside this memcg is killed when this memcg is in OOM status.
> > >
> > > Agreed.
> > >
> > > > That is the same reason why there are both MEMCG_OOM and MEMCG_OOM_KILL
> > > > events. If we already know that there is nothing to kill, i.e. the memcg
> > > > is not populated, then we don't need to try.
> > >
> > > OK, but you are not explaining why a silent failure is really better
> > > than an oom report under an oom situation. With your patch, there is
> > > no failure reported to the user and there is also no sign that there
> > > might be a problem, namely that the memcg leaves memory behind that is
> > > not bound to any (killable) process. This could be important information.
> > >
> >
> > That is not a silent failure. An oom event will be reported.
> > The user can get this event from memory.events or memory.events.local if
> > they really care about it.
>
> You are right. The oom situation will be reported (somehow) but the
> reason why no task has been killed might be several and there is no way
> to report no eligible tasks.
>
> > Especially when the admin sets memory.max to 0 to drop all the caches,
> > many oom logs are just noise; besides that, there are some side effects,
> > for example too many oom logs printed to a slow console may cause
> > latency spikes.
>
> But the oom situation and the oom report are simply something an admin
> has to expect, especially when the hard limit is set to 0. With kmem
> accounting there is no guarantee that the target will be met.

I'm always wondering why we don't move the kmem from this memcg to
the root_mem_cgroup in this situation.
Then this memcg could be easily reclaimed.

> >
> >
> > > Besides that I really do not see any actual problem that this would be
> > > fixing.
> >
> > Avoid printing too many oom logs.
>
> There is only a single oom report printed so I disagree this is really a
> proper justification.
>
> Unless you can come up with a better justification I am against this
> patch. It unnecessarily reduces debugging tools while it doesn't really
> provide any huge advantage. Changing the hard limit to an impossible
> target is known to trigger the oom killer and the oom report is a part
> of that. If the oom report is too noisy then we can discuss how to make
> it more compact, but making ad-hoc exceptions like this one is not a
> good solution.
> --

No better justification yet. But I think more memcg users will
complain about it.
Michal Hocko May 4, 2020, 4:11 p.m. UTC | #5
On Mon 04-05-20 23:24:35, Yafang Shao wrote:
> On Mon, May 4, 2020 at 8:46 PM Michal Hocko <mhocko@kernel.org> wrote:
[...]
> > But the oom situation and the oom report are simply something an admin
> > has to expect, especially when the hard limit is set to 0. With kmem
> > accounting there is no guarantee that the target will be met.
> 
> I'm always wondering why we don't move the kmem from this memcg to
> the root_mem_cgroup in this situation.
> Then this memcg could be easily reclaimed.

Roman was playing with kmem charges reparenting. But please note that
this alone wouldn't be sufficient. Even LRU pages are not guaranteed to
be reclaimable - think of full swap space, memory might be pinned, etc.
Roman Gushchin May 4, 2020, 5:04 p.m. UTC | #6
On Mon, May 04, 2020 at 06:11:13PM +0200, Michal Hocko wrote:
> On Mon 04-05-20 23:24:35, Yafang Shao wrote:
> > On Mon, May 4, 2020 at 8:46 PM Michal Hocko <mhocko@kernel.org> wrote:
> [...]
> > > But the oom situation and the oom report are simply something an admin
> > > has to expect, especially when the hard limit is set to 0. With kmem
> > > accounting there is no guarantee that the target will be met.
> > 
> > I'm always wondering why we don't move the kmem from this memcg to
> > the root_mem_cgroup in this situation.
> > Then this memcg could be easily reclaimed.

It's not that trivial: there are many objects which keep a reference
to a memory cgroup. We don't even have a comprehensive list of them.
And we would have to somehow reassign them to a different cgroup without
too much overhead.
Also, it's better to move them to the parent instead of the root.

> 
> Roman was playing with kmem charges reparenting.

Slabs are already reparented. Other objects, which are allocated directly
by the page allocator (e.g. vmallocs), are not. But it will be relatively
easy to cover them after landing my slab controller rework patchset:
https://lore.kernel.org/lkml/20200422204708.2176080-1-guro@fb.com/ .
Basically it provides a framework for charging kernel objects in a way
that allows inexpensive reparenting.

Thanks!
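
[For readers who have not followed that series: the idea, very roughly, is
that charged kernel objects point to an intermediate objcg instead of the
memcg itself, so reparenting only has to redirect the objcg. The sketch
below is an illustrative approximation, not the actual patchset code.]

/* Illustrative approximation only -- see the linked patchset for details. */
struct obj_cgroup {
	struct percpu_ref refcnt;	/* held by charged objects, not the memcg */
	struct mem_cgroup *memcg;	/* memcg that currently owns the charges */
	/* ... byte-level charge bookkeeping ... */
};

/*
 * When a memcg is removed, reparenting moves the outstanding charge and
 * redirects objcg->memcg to the parent, instead of chasing down every
 * object that still references the dying memcg.
 */
static void reparent_objcg(struct obj_cgroup *objcg, struct mem_cgroup *parent)
{
	/* locking and charge transfer omitted in this sketch */
	objcg->memcg = parent;
}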

Patch

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 985edce98491..29afe3df9d98 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6102,6 +6102,10 @@  static ssize_t memory_max_write(struct kernfs_open_file *of,
 		}
 
 		memcg_memory_event(memcg, MEMCG_OOM);
+
+		if (!cgroup_is_populated(memcg->css.cgroup))
+			break;
+
 		if (!mem_cgroup_oom_kill(memcg, GFP_KERNEL, 0))
 			break;
 	}
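
[As a usage note, the teardown sequence described in the commit message
(lower memory.max before removing the memcg) looks roughly like the sketch
below from userspace; "/sys/fs/cgroup/foo" is only a placeholder path.]

/* Rough sketch of the proactive-reclaim teardown; placeholder path. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/sys/fs/cgroup/foo/memory.max", O_WRONLY);

	if (fd < 0) {
		perror("memory.max");
		return 1;
	}
	/* Force the charge towards 0; this write can end up in the
	 * memory_max_write() loop sketched earlier and, without this patch,
	 * emit an oom report even though the memcg has no tasks left. */
	if (write(fd, "0", 1) < 0)
		perror("write memory.max");
	close(fd);

	/* A memcg can only be removed once it has no tasks and no children;
	 * any memory that could not be reclaimed simply lingers. */
	if (rmdir("/sys/fs/cgroup/foo") < 0)
		perror("rmdir");
	return 0;
}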