
mm: Sync percpu mm RSS counters before querying

Message ID 20230616180718.17725-1-mkoutny@suse.com (mailing list archive)
State New
Series mm: Sync percpu mm RSS counters before querying

Commit Message

Michal Koutný June 16, 2023, 6:07 p.m. UTC
An issue was observed with stats collected in struct rusage on ppc64le
with 64kB pages. The percpu counters use batching with
	percpu_counter_batch = max(32, nr*2) # in PAGE_SIZE
i.e. with larger pages but similar RSS consumption (in bytes), there are
fewer flushes and the error is more noticeable.
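As a rough illustration (an estimate derived from the batching above, not a
measured number), each CPU can keep up to about one batch of pages unflushed
per counter:

	per-CPU drift ≈ percpu_counter_batch * PAGE_SIZE
	              ≈ 32 *  4kB = 128kB   (4kB pages)
	              ≈ 32 * 64kB =   2MB   (64kB pages)

i.e. the same batch hides a 16x larger byte error with 64kB pages.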

In this case (getting the consumption of an exited child), we can request a
flush of the percpu counters without worrying about contention with updaters.

Fortunately, the commit f1a7941243c1 ("mm: convert mm's rss stats into
percpu_counter") didn't eradicate all traces of SPLIT_RSS_COUNTING, and
that mechanism already provided some synchronization points before
reading the stats.
Therefore, use sync_mm_rss as the carrier for percpu counter refreshes and
drop the SPLIT_RSS_COUNTING macro for good.

Impact of summing on an 8 CPU machine:
Benchmark 1: taskset -c 1 ./shell-bench.sh

Before
  Time (mean ± σ):      9.950 s ±  0.052 s    [User: 7.773 s, System: 2.023 s]

After
  Time (mean ± σ):      9.990 s ±  0.070 s    [User: 7.825 s, System: 2.011 s]

cat >shell-bench.sh <<EOD
for (( i = 0; i < 20000; i++ )); do
	/bin/true
done
EOD

The script is meant to stress the fork-exit path (exit is where sync_mm_rss
is called most, add_mm_rss_vec should be covered by fork).
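
The timings above are in hyperfine's output format (which tool was actually
used is not stated here); one way to collect them would be, e.g.:

	hyperfine 'taskset -c 1 ./shell-bench.sh'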

Fixes: f1a7941243c1 ("mm: convert mm's rss stats into percpu_counter")
Reported-by: Adam Majer <amajer@suse.com>
Signed-off-by: Michal Koutný <mkoutny@suse.com>
---

Notes:
    - dummy approach to fix correctness, performance should be treated separately
    - add percpu_counter_set to really update the value
    - RFC https://lore.kernel.org/r/20230608171256.17827-1-mkoutny@suse.com/

 include/linux/mm.h | 7 +++----
 kernel/fork.c      | 4 ----
 2 files changed, 3 insertions(+), 8 deletions(-)

Comments

Andrew Morton June 16, 2023, 7:31 p.m. UTC | #1
On Fri, 16 Jun 2023 20:07:18 +0200 Michal Koutný <mkoutny@suse.com> wrote:

> An issue was observed with stats collected in struct rusage on ppc64le
> with 64kB pages. The percpu counters use batching with
> 	percpu_counter_batch = max(32, nr*2) # in PAGE_SIZE
> i.e. with larger pages but similar RSS consumption (in bytes), there are
> fewer flushes and the error is more noticeable.

A fully detailed description of the issue would be helpful.  Obviously
"inaccuracy", but how bad?

> In this case (getting the consumption of an exited child), we can request a
> flush of the percpu counters without worrying about contention with updaters.
> 
> Fortunately, the commit f1a7941243c1 ("mm: convert mm's rss stats into
> percpu_counter") didn't eradicate all traces of SPLIT_RSS_COUNTING, and
> that mechanism already provided some synchronization points before
> reading the stats.
> Therefore, use sync_mm_rss as the carrier for percpu counter refreshes and
> drop the SPLIT_RSS_COUNTING macro for good.
> 
> Impact of summing on an 8 CPU machine:
> Benchmark 1: taskset -c 1 ./shell-bench.sh
> 
> Before
>   Time (mean ± σ):      9.950 s ±  0.052 s    [User: 7.773 s, System: 2.023 s]
> 
> After
>   Time (mean ± σ):      9.990 s ±  0.070 s    [User: 7.825 s, System: 2.011 s]
> 
> cat >shell-bench.sh <<EOD
> for (( i = 0; i < 20000; i++ )); do
> 	/bin/true
> done
> EOD
> 
> The script is meant to stress the fork-exit path (exit is where sync_mm_rss
> is called most, add_mm_rss_vec should be covered by fork).
> 
> ...
>
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2547,13 +2547,12 @@ static inline void setmax_mm_hiwater_rss(unsigned long *maxrss,
>  		*maxrss = hiwater_rss;
>  }
>  
> -#if defined(SPLIT_RSS_COUNTING)
> -void sync_mm_rss(struct mm_struct *mm);
> -#else
>  static inline void sync_mm_rss(struct mm_struct *mm)
>  {
> +	for (int i = 0; i < NR_MM_COUNTERS; ++i)
> +		percpu_counter_set(&mm->rss_stat[i],
> +				   percpu_counter_sum(&mm->rss_stat[i]));
>  }
> -#endif

Far too large to be inlined!  For six callsites it adds 1kb of text.

Why even modify the counter?  Can't <whatever this issue is> be
addressed by using percpu_counter_sum() in an appropriate place?

For unknown reasons percpu_counter_set() uses for_each_possible_cpu(). 
Probably just a mistake - percpu_counters are hotplug-aware and
for_each_online_cpu should suffice.

I'm really not liking percpu_counter_set().  It's only safe in
situations where the caller knows that no other CPU can be modifying
the counter.  I wonder if all the callers know that.  This situation
isn't aided by the lack of any documentation.
Michal Koutný June 19, 2023, 12:53 p.m. UTC | #2
On Fri, Jun 16, 2023 at 12:31:44PM -0700, Andrew Morton <akpm@linux-foundation.org> wrote:
> On Fri, 16 Jun 2023 20:07:18 +0200 Michal Koutný <mkoutny@suse.com> wrote:
> 
> > An issue was observed with stats collected in struct rusage on ppc64le
> > with 64kB pages. The percpu counters use batching with
> > 	percpu_counter_batch = max(32, nr*2) # in PAGE_SIZE
> > i.e. with larger pages but similar RSS consumption (in bytes), there are
> > fewer flushes and the error is more noticeable.
> 
> A fully detailed description of the issue would be helpful.  Obviously
> "inaccuracy", but how bad?

Any reader of get_mm_counter() could see the inaccuracy given by the
formula above.
In this particular case it was detected by the test suite of the time(1)
utility, which reads from rusage:

> FAIL: tests/time-max-rss
> ========================
> 
> time(1) failed to detect 5MB allcoation.
>   mem-baseline(kb): 0
>   mem-5MB(kb):      4096
>   delta(kb):        4096
> FAIL tests/time-max-rss.sh (exit status: 1)

(i.e. 1MB missing)
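
For scale (assuming the default minimum batch of 32), even a single CPU can
hold up to about 32 * 64kB = 2MB of unflushed delta per counter, so losing
~1MB of a 5MB allocation is well within that slack.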

> Far too large to be inlined!  For six callsites it adds 1kb of text.

Ah, thanks, I can change that.

> Why even modify the counter?  Can't <whatever this issue is> be
> addressed by using percpu_counter_sum() in an appropriate place?

I considered modifying get_mm_counter(); however, I decided not to put
the per-CPU summing there as it would spread the impact to many more places
than sync_mm_rss().
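
For illustration, such a read-side variant might look roughly like this (a
sketch of the alternative, not what this patch does):

	static inline unsigned long get_mm_counter(struct mm_struct *mm, int member)
	{
		/* accurate, but sums all per-CPU deltas on every read */
		return percpu_counter_sum_positive(&mm->rss_stat[member]);
	}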

> For unknown reasons percpu_counter_set() uses for_each_possible_cpu(). 
> Probably just a mistake - percpu_counters are hotplug-aware and
> for_each_online_cpu should suffice.

Yeah, that could be cleaned up in another patch (cf. the mask used in
__percpu_counter_sum).
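
For reference, a possible shape of that cleanup (an untested sketch; it keeps
the existing locking and ignores the cpu_dying_mask special-casing done in
__percpu_counter_sum):

	void percpu_counter_set(struct percpu_counter *fbc, s64 amount)
	{
		unsigned long flags;
		int cpu;

		raw_spin_lock_irqsave(&fbc->lock, flags);
		for_each_online_cpu(cpu)	/* was: for_each_possible_cpu() */
			*per_cpu_ptr(fbc->counters, cpu) = 0;
		fbc->count = amount;
		raw_spin_unlock_irqrestore(&fbc->lock, flags);
	}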

> I'm really not liking percpu_counter_set().  It's only safe in
> situations where the caller knows that no other CPU can be modifying
> the counter.  I wonder if all the callers know that.

I admit I only considered the do_exit() path (and even that isn't
guaranteed to be free of concurrent updates in a multithreaded process) --
so I don't like percpu_counter_set() in its current form either.
I will need to review the effects of parallel updates more.

Thanks,
Michal
Michal Hocko June 19, 2023, 1:10 p.m. UTC | #3
On Fri 16-06-23 20:07:18, Michal Koutný wrote:
[...]
> Fixes: f1a7941243c1 ("mm: convert mm's rss stats into percpu_counter")

Please note there is a different regression from f1a7941243c1, reported
by Jan: http://lkml.kernel.org/r/20230608111408.s2minsenlcjow7q3@quack3

Patch

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 27ce77080c79..c7ac1cbc6fb3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2547,13 +2547,12 @@  static inline void setmax_mm_hiwater_rss(unsigned long *maxrss,
 		*maxrss = hiwater_rss;
 }
 
-#if defined(SPLIT_RSS_COUNTING)
-void sync_mm_rss(struct mm_struct *mm);
-#else
 static inline void sync_mm_rss(struct mm_struct *mm)
 {
+	for (int i = 0; i < NR_MM_COUNTERS; ++i)
+		percpu_counter_set(&mm->rss_stat[i],
+				   percpu_counter_sum(&mm->rss_stat[i]));
 }
-#endif
 
 #ifndef CONFIG_ARCH_HAS_PTE_SPECIAL
 static inline int pte_special(pte_t pte)
diff --git a/kernel/fork.c b/kernel/fork.c
index 81cba91f30bb..e030eb902e4b 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2412,10 +2412,6 @@  __latent_entropy struct task_struct *copy_process(
 	p->io_uring = NULL;
 #endif
 
-#if defined(SPLIT_RSS_COUNTING)
-	memset(&p->rss_stat, 0, sizeof(p->rss_stat));
-#endif
-
 	p->default_timer_slack_ns = current->timer_slack_ns;
 
 #ifdef CONFIG_PSI
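
For illustration only (not part of the posted patch): the out-of-line variant
asked for in review could look roughly like this, with only a declaration left
in mm.h:

	/* include/linux/mm.h */
	void sync_mm_rss(struct mm_struct *mm);

	/* e.g. mm/memory.c */
	void sync_mm_rss(struct mm_struct *mm)
	{
		int i;

		for (i = 0; i < NR_MM_COUNTERS; i++)
			percpu_counter_set(&mm->rss_stat[i],
					   percpu_counter_sum(&mm->rss_stat[i]));
	}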