Message ID | 20210315234100.64307-1-hannes@cmpxchg.org (mailing list archive)
---|---
State | New, archived
Series | mm: memcontrol: switch to rstat fix
On Mon, Mar 15, 2021 at 4:41 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> Fix a sleep in atomic section problem: wb_writeback() takes a spinlock
> and calls wb_over_bg_thresh() -> mem_cgroup_wb_stats(), but the regular
> rstat flushing function called from in there does lockbreaking and may
> sleep. Switch to the atomic variant, cgroup_rstat_flush_irqsafe().
>
> To be consistent with other memcg flush calls, but without adding
> another memcg wrapper, inline and drop memcg_flush_vmstats() instead.
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>

Reviewed-by: Shakeel Butt <shakeelb@google.com>
On 16/3/21 10:41 am, Johannes Weiner wrote:
> Fix a sleep in atomic section problem: wb_writeback() takes a spinlock
> and calls wb_over_bg_thresh() -> mem_cgroup_wb_stats(), but the regular
> rstat flushing function called from in there does lockbreaking and may
> sleep. Switch to the atomic variant, cgroup_rstat_flush_irqsafe().
>
> To be consistent with other memcg flush calls, but without adding
> another memcg wrapper, inline and drop memcg_flush_vmstats() instead.
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> ---

The patch makes sense, but it does break any notion of abstraction we had
about controllers having some independence in their strategy to maintain
their own counters and stats. It now couples writeback with rstat instead
of just memcg.

Acked-by: Balbir Singh <bsingharora@gmail.com>
On Mon 15-03-21 19:41:00, Johannes Weiner wrote:
> Fix a sleep in atomic section problem: wb_writeback() takes a spinlock
> and calls wb_over_bg_thresh() -> mem_cgroup_wb_stats(), but the regular
> rstat flushing function called from in there does lockbreaking and may
> sleep. Switch to the atomic variant, cgroup_rstat_flush_irqsafe().
>
> To be consistent with other memcg flush calls, but without adding
> another memcg wrapper, inline and drop memcg_flush_vmstats() instead.
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/memcontrol.c | 15 +++++----------
>  1 file changed, 5 insertions(+), 10 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index f7fb12d3c2fc..9091913ec877 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -757,11 +757,6 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
>  	return mz;
>  }
>
> -static void memcg_flush_vmstats(struct mem_cgroup *memcg)
> -{
> -	cgroup_rstat_flush(memcg->css.cgroup);
> -}
> -
>  /**
>   * __mod_memcg_state - update cgroup memory statistics
>   * @memcg: the memory cgroup
> @@ -1572,7 +1567,7 @@ static char *memory_stat_format(struct mem_cgroup *memcg)
>  	 *
>  	 * Current memory state:
>  	 */
> -	memcg_flush_vmstats(memcg);
> +	cgroup_rstat_flush(memcg->css.cgroup);
>
>  	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
>  		u64 size;
> @@ -3523,7 +3518,7 @@ static unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
>  	unsigned long val;
>
>  	if (mem_cgroup_is_root(memcg)) {
> -		memcg_flush_vmstats(memcg);
> +		cgroup_rstat_flush(memcg->css.cgroup);
>  		val = memcg_page_state(memcg, NR_FILE_PAGES) +
>  			memcg_page_state(memcg, NR_ANON_MAPPED);
>  		if (swap)
> @@ -3925,7 +3920,7 @@ static int memcg_numa_stat_show(struct seq_file *m, void *v)
>  	int nid;
>  	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
>
> -	memcg_flush_vmstats(memcg);
> +	cgroup_rstat_flush(memcg->css.cgroup);
>
>  	for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
>  		seq_printf(m, "%s=%lu", stat->name,
> @@ -3997,7 +3992,7 @@ static int memcg_stat_show(struct seq_file *m, void *v)
>
>  	BUILD_BUG_ON(ARRAY_SIZE(memcg1_stat_names) != ARRAY_SIZE(memcg1_stats));
>
> -	memcg_flush_vmstats(memcg);
> +	cgroup_rstat_flush(memcg->css.cgroup);
>
>  	for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {
>  		unsigned long nr;
> @@ -4500,7 +4495,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
>  	struct mem_cgroup *memcg = mem_cgroup_from_css(wb->memcg_css);
>  	struct mem_cgroup *parent;
>
> -	memcg_flush_vmstats(memcg);
> +	cgroup_rstat_flush_irqsafe(memcg->css.cgroup);
>
>  	*pdirty = memcg_page_state(memcg, NR_FILE_DIRTY);
>  	*pwriteback = memcg_page_state(memcg, NR_WRITEBACK);
> --
> 2.30.1
On Mon, Mar 15, 2021 at 07:41:00PM -0400, Johannes Weiner <hannes@cmpxchg.org> wrote:
> Switch to the atomic variant, cgroup_rstat_flush_irqsafe().

Congratulations(?), the first use of cgroup_rstat_flush_irqsafe().

Reviewed-by: Michal Koutný <mkoutny@suse.com>
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f7fb12d3c2fc..9091913ec877 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -757,11 +757,6 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
 	return mz;
 }
 
-static void memcg_flush_vmstats(struct mem_cgroup *memcg)
-{
-	cgroup_rstat_flush(memcg->css.cgroup);
-}
-
 /**
  * __mod_memcg_state - update cgroup memory statistics
  * @memcg: the memory cgroup
@@ -1572,7 +1567,7 @@ static char *memory_stat_format(struct mem_cgroup *memcg)
 	 *
 	 * Current memory state:
 	 */
-	memcg_flush_vmstats(memcg);
+	cgroup_rstat_flush(memcg->css.cgroup);
 
 	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
 		u64 size;
@@ -3523,7 +3518,7 @@ static unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
 	unsigned long val;
 
 	if (mem_cgroup_is_root(memcg)) {
-		memcg_flush_vmstats(memcg);
+		cgroup_rstat_flush(memcg->css.cgroup);
 		val = memcg_page_state(memcg, NR_FILE_PAGES) +
 			memcg_page_state(memcg, NR_ANON_MAPPED);
 		if (swap)
@@ -3925,7 +3920,7 @@ static int memcg_numa_stat_show(struct seq_file *m, void *v)
 	int nid;
 	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
 
-	memcg_flush_vmstats(memcg);
+	cgroup_rstat_flush(memcg->css.cgroup);
 
 	for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
 		seq_printf(m, "%s=%lu", stat->name,
@@ -3997,7 +3992,7 @@ static int memcg_stat_show(struct seq_file *m, void *v)
 
 	BUILD_BUG_ON(ARRAY_SIZE(memcg1_stat_names) != ARRAY_SIZE(memcg1_stats));
 
-	memcg_flush_vmstats(memcg);
+	cgroup_rstat_flush(memcg->css.cgroup);
 
 	for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {
 		unsigned long nr;
@@ -4500,7 +4495,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
 	struct mem_cgroup *memcg = mem_cgroup_from_css(wb->memcg_css);
 	struct mem_cgroup *parent;
 
-	memcg_flush_vmstats(memcg);
+	cgroup_rstat_flush_irqsafe(memcg->css.cgroup);
 
 	*pdirty = memcg_page_state(memcg, NR_FILE_DIRTY);
 	*pwriteback = memcg_page_state(memcg, NR_WRITEBACK);
Fix a sleep in atomic section problem: wb_writeback() takes a spinlock
and calls wb_over_bg_thresh() -> mem_cgroup_wb_stats(), but the regular
rstat flushing function called from in there does lockbreaking and may
sleep. Switch to the atomic variant, cgroup_rstat_flush_irqsafe().

To be consistent with other memcg flush calls, but without adding
another memcg wrapper, inline and drop memcg_flush_vmstats() instead.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/memcontrol.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)