Message ID | 20220217094802.3644569-5-bigeasy@linutronix.de (mailing list archive) |
---|---|
State | New |
Series | mm/memcg: Address PREEMPT_RT problems instead of disabling it. |
On Thu, Feb 17, 2022 at 1:48 AM Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
>
> From: Johannes Weiner <hannes@cmpxchg.org>
>
> Provide the inner part of refill_stock() as __refill_stock() without
> disabling interrupts. This eases the integration of local_lock_t where
> recursive locking must be avoided.
> Open code obj_cgroup_uncharge_pages() in drain_obj_stock() and use
> __refill_stock(). The caller of drain_obj_stock() already disables
> interrupts.
>
> [bigeasy: Patch body around Johannes' diff ]
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Reviewed-by: Shakeel Butt <shakeelb@google.com>
On Thu, Feb 17, 2022 at 10:48:01AM +0100, Sebastian Andrzej Siewior wrote:
> From: Johannes Weiner <hannes@cmpxchg.org>
>
> Provide the inner part of refill_stock() as __refill_stock() without
> disabling interrupts. This eases the integration of local_lock_t where
> recursive locking must be avoided.
> Open code obj_cgroup_uncharge_pages() in drain_obj_stock() and use
> __refill_stock(). The caller of drain_obj_stock() already disables
> interrupts.
>
> [bigeasy: Patch body around Johannes' diff ]
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Reviewed-by: Roman Gushchin <guro@fb.com>
On Thu 17-02-22 10:48:01, Sebastian Andrzej Siewior wrote:
> From: Johannes Weiner <hannes@cmpxchg.org>
>
> Provide the inner part of refill_stock() as __refill_stock() without
> disabling interrupts. This eases the integration of local_lock_t where
> recursive locking must be avoided.
> Open code obj_cgroup_uncharge_pages() in drain_obj_stock() and use
> __refill_stock(). The caller of drain_obj_stock() already disables
> interrupts.
>
> [bigeasy: Patch body around Johannes' diff ]
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Acked-by: Michal Hocko <mhocko@suse.com>

Thanks!

> ---
>  mm/memcontrol.c | 24 ++++++++++++++++++------
>  1 file changed, 18 insertions(+), 6 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 36ab3660f2c6d..a3225501cce36 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2224,12 +2224,9 @@ static void drain_local_stock(struct work_struct *dummy)
>   * Cache charges(val) to local per_cpu area.
>   * This will be consumed by consume_stock() function, later.
>   */
> -static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
> +static void __refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
>  {
>  	struct memcg_stock_pcp *stock;
> -	unsigned long flags;
> -
> -	local_irq_save(flags);
>
>  	stock = this_cpu_ptr(&memcg_stock);
>  	if (stock->cached != memcg) { /* reset if necessary */
> @@ -2241,7 +2238,14 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
>
>  	if (stock->nr_pages > MEMCG_CHARGE_BATCH)
>  		drain_stock(stock);
> +}
>
> +static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
> +{
> +	unsigned long flags;
> +
> +	local_irq_save(flags);
> +	__refill_stock(memcg, nr_pages);
>  	local_irq_restore(flags);
>  }
>
> @@ -3158,8 +3162,16 @@ static void drain_obj_stock(struct memcg_stock_pcp *stock)
>  		unsigned int nr_pages = stock->nr_bytes >> PAGE_SHIFT;
>  		unsigned int nr_bytes = stock->nr_bytes & (PAGE_SIZE - 1);
>
> -		if (nr_pages)
> -			obj_cgroup_uncharge_pages(old, nr_pages);
> +		if (nr_pages) {
> +			struct mem_cgroup *memcg;
> +
> +			memcg = get_mem_cgroup_from_objcg(old);
> +
> +			memcg_account_kmem(memcg, -nr_pages);
> +			__refill_stock(memcg, nr_pages);
> +
> +			css_put(&memcg->css);
> +		}
>
>  		/*
>  		 * The leftover is flushed to the centralized per-memcg value.
> --
> 2.34.1
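The "recursive locking" the changelog refers to becomes concrete later in this series, when the interrupt disabling in refill_stock() is replaced by a local_lock_t protecting the per-CPU stock. A minimal sketch of that end state, assuming the stock_lock member the series adds to struct memcg_stock_pcp (names follow the series, but this is illustrative, not the final upstream code):

/*
 * Illustrative sketch: stock_lock is the local_lock_t that a later
 * patch in this series adds to struct memcg_stock_pcp.  A local_lock_t
 * is not recursive, so a path that already holds it (drain_obj_stock()
 * via drain_local_stock()) must use __refill_stock() directly; calling
 * refill_stock() from there would self-deadlock on PREEMPT_RT.
 */
static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
{
	unsigned long flags;

	local_lock_irqsave(&memcg_stock.stock_lock, flags);
	__refill_stock(memcg, nr_pages);
	local_unlock_irqrestore(&memcg_stock.stock_lock, flags);
}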
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 36ab3660f2c6d..a3225501cce36 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2224,12 +2224,9 @@ static void drain_local_stock(struct work_struct *dummy)
  * Cache charges(val) to local per_cpu area.
  * This will be consumed by consume_stock() function, later.
  */
-static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
+static void __refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 {
 	struct memcg_stock_pcp *stock;
-	unsigned long flags;
-
-	local_irq_save(flags);

 	stock = this_cpu_ptr(&memcg_stock);
 	if (stock->cached != memcg) { /* reset if necessary */
@@ -2241,7 +2238,14 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)

 	if (stock->nr_pages > MEMCG_CHARGE_BATCH)
 		drain_stock(stock);
+}

+static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	__refill_stock(memcg, nr_pages);
 	local_irq_restore(flags);
 }

@@ -3158,8 +3162,16 @@ static void drain_obj_stock(struct memcg_stock_pcp *stock)
 		unsigned int nr_pages = stock->nr_bytes >> PAGE_SHIFT;
 		unsigned int nr_bytes = stock->nr_bytes & (PAGE_SIZE - 1);

-		if (nr_pages)
-			obj_cgroup_uncharge_pages(old, nr_pages);
+		if (nr_pages) {
+			struct mem_cgroup *memcg;
+
+			memcg = get_mem_cgroup_from_objcg(old);
+
+			memcg_account_kmem(memcg, -nr_pages);
+			__refill_stock(memcg, nr_pages);
+
+			css_put(&memcg->css);
+		}

 		/*
 		 * The leftover is flushed to the centralized per-memcg value.
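Taken together, the split gives the refill path an explicit calling convention. A hedged summary in comment form (illustrative, not taken verbatim from the kernel sources):

/*
 * Calling convention after this patch (illustrative summary):
 *
 *   refill_stock()   - disables interrupts itself (later in the series:
 *                      takes the local_lock_t); callable from any
 *                      context allowed to take the lock.
 *   __refill_stock() - touches the per-CPU stock directly; the caller
 *                      must already have interrupts disabled.
 *
 * drain_obj_stock() runs with interrupts disabled by its callers, so it
 * open codes obj_cgroup_uncharge_pages() -- whose body ends in a call
 * to refill_stock() -- and uses __refill_stock() instead, avoiding a
 * nested irq-save now and a recursive lock once the local_lock_t lands.
 */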