[1/2] mm: memcontrol: flush percpu vmstats before releasing memcg

Message ID 20190812222911.2364802-2-guro@fb.com (mailing list archive)
State New, archived
Series flush percpu vmstats

Commit Message

Roman Gushchin Aug. 12, 2019, 10:29 p.m. UTC
Percpu caching of local vmstats with conditional propagation up the
cgroup tree leads to an accumulation of errors on non-leaf levels.

Let's imagine two nested memory cgroups, A and A/B. Say a process
belonging to A/B allocates 100 pagecache pages on CPU 0. The percpu
cache will spill 3 times, so 32*3=96 pages will be accounted to the
A/B and A atomic vmstat counters, while 4 pages remain in the percpu
cache.

Now imagine A/B is close to its memory.max limit, so that every
following allocation triggers a direct reclaim on the local CPU. Say
each such attempt frees 16 pages on a new CPU. That means every percpu
cache will hold -16 pages, except the first one, which will hold
4 - 16 = -12. The A/B and A atomic counters will not be touched at all.

Now a user removes A/B. All percpu caches are freed and the
corresponding vmstat numbers are forgotten, leaving A with 96 pages
more than expected.

As memory cgroups are created and destroyed, such errors accumulate:
even 1-2 page differences can add up to large numbers over time.
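
To make the arithmetic concrete, below is a minimal userspace sketch
of the batching scheme described above. This is not kernel code: it
uses the rounded 32-page spill threshold from the example, and all
names are simplified stand-ins for the percpu caches and the A/B and
A atomic counters.

#include <stdio.h>
#include <stdlib.h>

#define NCPU	8
#define BATCH	32	/* rounded spill threshold from the example */

static long atomic_a, atomic_b;	/* propagated (atomic) counters of A, A/B */
static long percpu[NCPU];	/* per-cpu cached deltas of A/B */

static void mod_stat(int cpu, long delta)
{
	long x = percpu[cpu] + delta;

	if (labs(x) >= BATCH) {	/* spill to both A/B and A */
		atomic_b += x;
		atomic_a += x;
		x = 0;
	}
	percpu[cpu] = x;
}

int main(void)
{
	long lost = 0;
	int cpu, i;

	/* 100 pagecache pages charged on CPU 0: three spills of 32,
	 * 4 pages stay in the percpu cache */
	for (i = 0; i < 100; i++)
		mod_stat(0, 1);

	/* direct reclaim frees 16 pages on each of 6 CPUs; no cache
	 * crosses the threshold, so the atomics are never touched */
	for (cpu = 0; cpu < 6; cpu++)
		mod_stat(cpu, -16);

	/* A/B is removed: the caches are dropped and their sum is lost */
	for (cpu = 0; cpu < NCPU; cpu++)
		lost += percpu[cpu];

	/* prints 96 and -92: only 4 of the pages are still charged,
	 * yet A retains all 96 after A/B is gone */
	printf("atomic: %ld, dropped deltas: %ld\n", atomic_b, lost);
	return 0;
}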

To fix this issue, let's accumulate and propagate the percpu vmstat
values before releasing the memory cgroup. At this point these
numbers are stable and cannot change anymore.

Since percpu vmstats are already flushed on CPU hotplug, it's enough
to iterate only over online CPUs.

Fixes: 42a300353577 ("mm: memcontrol: fix recursive statistics correctness & scalabilty")
Signed-off-by: Roman Gushchin <guro@fb.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/memcontrol.c | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

Comments

Andrew Morton Aug. 13, 2019, 9:27 p.m. UTC | #1
On Mon, 12 Aug 2019 15:29:10 -0700 Roman Gushchin <guro@fb.com> wrote:

> [...]
>
> Fixes: 42a300353577 ("mm: memcontrol: fix recursive statistics correctness & scalabilty")

Is this not serious enough for a cc:stable?
Roman Gushchin Aug. 13, 2019, 9:46 p.m. UTC | #2
On Tue, Aug 13, 2019 at 02:27:52PM -0700, Andrew Morton wrote:
> On Mon, 12 Aug 2019 15:29:10 -0700 Roman Gushchin <guro@fb.com> wrote:
> 
> > [...]
> >
> > Fixes: 42a300353577 ("mm: memcontrol: fix recursive statistics correctness & scalabilty")
> 
> Is this not serious enough for a cc:stable?

I hope the "Fixes" tag will work, but yeah, my bad, cc:stable is definitely
a good idea here.

Added stable@ to cc.

Thanks!
Michal Hocko Aug. 14, 2019, 11:26 a.m. UTC | #3
On Mon 12-08-19 15:29:10, Roman Gushchin wrote:
> [...]
>
> To fix this issue, let's accumulate and propagate the percpu vmstat
> values before releasing the memory cgroup. At this point these
> numbers are stable and cannot change anymore.

It is worth spending a word or two on why this doesn't matter during
the memcg's lifetime.

> Since percpu vmstats are already flushed on CPU hotplug, it's enough
> to iterate only over online CPUs.
> 
> Fixes: 42a300353577 ("mm: memcontrol: fix recursive statistics correctness & scalabilty")
> Signed-off-by: Roman Gushchin <guro@fb.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>

Acked-by: Michal Hocko <mhocko@suse.com>

> [...]
Yang Shi Sept. 1, 2020, 3:32 p.m. UTC | #4
This report is kind of late; I hope everyone still remembers the context.

I just happened to see a similar problem on our v4.19 kernel. Please
see the below output from memory.stat:

total_cache 7361626112
total_rss 8268165120
total_rss_huge 0
total_shmem 0
total_mapped_file 4154929152
total_dirty 389689344
total_writeback 101376000
...
[snip]
...
total_inactive_anon 4096
total_active_anon 1638400
total_inactive_file 208990208
total_active_file 275030016

And memory.usage_in_bytes:
1248215040

The total_* counters are way bigger than the LRU counters and the usage.
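
For reference, here is a rough sketch of how such a gap can be
checked, comparing the summed memory.stat counters against
usage_in_bytes. The cgroup path is a hypothetical example, and the
two values are not expected to match exactly even on a healthy
system; but a multi-gigabyte gap like the one above points at
drifted counters.

#include <stdio.h>
#include <string.h>

#define CG "/sys/fs/cgroup/memory/problematic"	/* hypothetical path */

static long long read_ll(const char *path)
{
	long long v = -1;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%lld", &v) != 1)
			v = -1;
		fclose(f);
	}
	return v;
}

int main(void)
{
	char key[64];
	long long val, cache = 0, rss = 0, usage;
	FILE *f = fopen(CG "/memory.stat", "r");

	if (!f)
		return 1;

	/* memory.stat is a list of "name value" lines */
	while (fscanf(f, "%63s %lld", key, &val) == 2) {
		if (!strcmp(key, "total_cache"))
			cache = val;
		else if (!strcmp(key, "total_rss"))
			rss = val;
	}
	fclose(f);

	usage = read_ll(CG "/memory.usage_in_bytes");
	printf("total_cache+total_rss = %lld, usage = %lld, gap = %lld\n",
	       cache + rss, usage, cache + rss - usage);
	return 0;
}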

Some ephemeral cgroups were created and deleted frequently under this
problematic cgroup, and the host has been up for more than 200 days.
I didn't see such problems on hosts with shorter uptimes (the other
v4.19 host has been up for 19 days) or on v5.4 hosts.

v4.19 also updates stats from per-cpu caches, and the total_* counters
sum all sub-cgroups together, so it seems to be the same problem.

Anyway, this is not a significant problem, since we can get the
correct numbers from other counters, i.e. the LRUs; it is just
confusing. Not sure if it is worth backporting the fix to v4.19.

On Tue, Aug 13, 2019 at 2:46 PM Roman Gushchin <guro@fb.com> wrote:
> [...]

Patch

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 3e821f34399f..348f685ab94b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3412,6 +3412,41 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
 	return 0;
 }
 
+static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg)
+{
+	unsigned long stat[MEMCG_NR_STAT];
+	struct mem_cgroup *mi;
+	int node, cpu, i;
+
+	for (i = 0; i < MEMCG_NR_STAT; i++)
+		stat[i] = 0;
+
+	for_each_online_cpu(cpu)
+		for (i = 0; i < MEMCG_NR_STAT; i++)
+			stat[i] += raw_cpu_read(memcg->vmstats_percpu->stat[i]);
+
+	for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
+		for (i = 0; i < MEMCG_NR_STAT; i++)
+			atomic_long_add(stat[i], &mi->vmstats[i]);
+
+	for_each_node(node) {
+		struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
+		struct mem_cgroup_per_node *pi;
+
+		for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+			stat[i] = 0;
+
+		for_each_online_cpu(cpu)
+			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+				stat[i] += raw_cpu_read(
+					pn->lruvec_stat_cpu->count[i]);
+
+		for (pi = pn; pi; pi = parent_nodeinfo(pi, node))
+			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+				atomic_long_add(stat[i], &pi->lruvec_stat[i]);
+	}
+}
+
 static void memcg_offline_kmem(struct mem_cgroup *memcg)
 {
 	struct cgroup_subsys_state *css;
@@ -4805,6 +4840,11 @@ static void __mem_cgroup_free(struct mem_cgroup *memcg)
 {
 	int node;
 
+	/*
+	 * Flush percpu vmstats to guarantee the value correctness
+	 * on parent's and all ancestor levels.
+	 */
+	memcg_flush_percpu_vmstats(memcg);
 	for_each_node(node)
 		free_mem_cgroup_per_node_info(memcg, node);
 	free_percpu(memcg->vmstats_percpu);