
[v2] memcg: Remove the lockdep assert from __mod_objcg_mlstate().

Message ID: 20240528141341.rz_rytN_@linutronix.de
State: New
Series: [v2] memcg: Remove the lockdep assert from __mod_objcg_mlstate().

Commit Message

Sebastian Andrzej Siewior May 28, 2024, 2:13 p.m. UTC
The assert was introduced in the commit cited below as insurance that
the semantics remain the same after the local_irq_save() has been
removed and the function has been made static.

The original requirement to disable interrupts was due to the
modification of per-CPU counters, which require interrupts to be
disabled because the counter update operation is not atomic and some of
the counters are updated from interrupt context.

All callers of __mod_objcg_mlstate() acquire a lock
(memcg_stock.stock_lock) which disables interrupts on !PREEMPT_RT, so
the lockdep assert is satisfied. On PREEMPT_RT interrupts are not
disabled and the assert triggers.

The safety of the counter update is already checked by
VM_WARN_ON_IRQS_ENABLED(), which is part of __mod_memcg_lruvec_state(),
so yet another check is not required.

Remove the lockdep assert from __mod_objcg_mlstate().

Fixes: 91882c1617c15 ("memcg: simple cleanup of stats update functions")
Link: https://lore.kernel.org/r/20240528121928.i-Gu7Jvg@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
On 2024-05-28 15:44:51 [+0200], Vlastimil Babka (SUSE) wrote:
> I think just s/memcg_stats_lock()/__mod_memcg_lruvec_state()/ in your
> phrasing, since we are removing the lockdep assert from the path that calls
> __mod_memcg_lruvec_state() and not memcg_stats_lock()?
> Or am I missing something?

Yeah, makes sense.

 mm/memcontrol.c |    2 --
 1 file changed, 2 deletions(-)
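
For readers who want to see the call path the commit message refers to, below
is a minimal, illustrative sketch of how callers take memcg_stock.stock_lock
around __mod_objcg_mlstate(). The helper name example_objcg_caller, the exact
__mod_objcg_mlstate() signature and the elided stock handling are assumptions
for illustration, not quotes from mm/memcontrol.c; only the locking pattern
follows the description above.

/*
 * Illustrative sketch only: the function name and the elided stock
 * handling are placeholders.  The point is the local_lock usage around
 * __mod_objcg_mlstate() described in the commit message.
 */
static void example_objcg_caller(struct obj_cgroup *objcg,
				 struct pglist_data *pgdat,
				 enum node_stat_item idx, int nr)
{
	unsigned long flags;

	/*
	 * On !PREEMPT_RT local_lock_irqsave() disables interrupts, so the
	 * (now removed) lockdep_assert_irqs_disabled() always held here.
	 * On PREEMPT_RT the local_lock_t is a per-CPU spinlock and
	 * interrupts stay enabled, which is what made the assert trigger.
	 */
	local_lock_irqsave(&memcg_stock.stock_lock, flags);

	/* ... per-CPU stock handling elided ... */
	__mod_objcg_mlstate(objcg, pgdat, idx, nr);

	local_unlock_irqrestore(&memcg_stock.stock_lock, flags);
}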

Comments

Vlastimil Babka (SUSE) May 28, 2024, 2:19 p.m. UTC | #1
On 5/28/24 4:13 PM, Sebastian Andrzej Siewior wrote:
> The assert was introduced in the commit cited below as insurance that
> the semantics remain the same after the local_irq_save() has been
> removed and the function has been made static.
> 
> The original requirement to disable interrupts was due to the
> modification of per-CPU counters, which require interrupts to be
> disabled because the counter update operation is not atomic and some of
> the counters are updated from interrupt context.
> 
> All callers of __mod_objcg_mlstate() acquire a lock
> (memcg_stock.stock_lock) which disables interrupts on !PREEMPT_RT, so
> the lockdep assert is satisfied. On PREEMPT_RT interrupts are not
> disabled and the assert triggers.
> 
> The safety of the counter update is already checked by
> VM_WARN_ON_IRQS_ENABLED(), which is part of __mod_memcg_lruvec_state(),
> so yet another check is not required.
> 
> Remove the lockdep assert from __mod_objcg_mlstate().
> 
> Fixes: 91882c1617c15 ("memcg: simple cleanup of stats update functions")
> Link: https://lore.kernel.org/r/20240528121928.i-Gu7Jvg@linutronix.de
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

mm-hotfixes, as it's an rc1 regression

> ---
> On 2024-05-28 15:44:51 [+0200], Vlastimil Babka (SUSE) wrote:
>> I think just s/memcg_stats_lock()/__mod_memcg_lruvec_state()/ in your
>> phrasing, since we are removing the lockdep assert from the path that calls
>> __mod_memcg_lruvec_state() and not memcg_stats_lock()?
>> Or am I missing something?
> 
> Yeah, makes sense.
> 
>  mm/memcontrol.c |    2 --
>  1 file changed, 2 deletions(-)
> 
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3147,8 +3147,6 @@ static inline void __mod_objcg_mlstate(s
>  	struct mem_cgroup *memcg;
>  	struct lruvec *lruvec;
>  
> -	lockdep_assert_irqs_disabled();
> -
>  	rcu_read_lock();
>  	memcg = obj_cgroup_memcg(objcg);
>  	lruvec = mem_cgroup_lruvec(memcg, pgdat);
Shakeel Butt May 28, 2024, 2:59 p.m. UTC | #2
On Tue, May 28, 2024 at 04:13:41PM GMT, Sebastian Andrzej Siewior wrote:
> The assert was introduced in the commit cited below as insurance that
> the semantics remain the same after the local_irq_save() has been
> removed and the function has been made static.
> 
> The original requirement to disable interrupts was due to the
> modification of per-CPU counters, which require interrupts to be
> disabled because the counter update operation is not atomic and some of
> the counters are updated from interrupt context.
> 
> All callers of __mod_objcg_mlstate() acquire a lock
> (memcg_stock.stock_lock) which disables interrupts on !PREEMPT_RT, so
> the lockdep assert is satisfied. On PREEMPT_RT interrupts are not
> disabled and the assert triggers.
> 
> The safety of the counter update is already checked by
> VM_WARN_ON_IRQS_ENABLED(), which is part of __mod_memcg_lruvec_state(),
> so yet another check is not required.

One question on VM_WARN_ON_IRQS_ENABLED() in __mod_memcg_lruvec_state():
on a PREEMPT_RT kernel with CONFIG_DEBUG_VM, will that
VM_WARN_ON_IRQS_ENABLED() cause a splat, or is VM_WARN_ON_IRQS_ENABLED()
special on PREEMPT_RT kernels?

> 
> Remove the lockdep assert from __mod_objcg_mlstate().
> 
> Fixes: 91882c1617c15 ("memcg: simple cleanup of stats update functions")
> Link: https://lore.kernel.org/r/20240528121928.i-Gu7Jvg@linutronix.de
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
> On 2024-05-28 15:44:51 [+0200], Vlastimil Babka (SUSE) wrote:
> > I think just s/memcg_stats_lock()/__mod_memcg_lruvec_state()/ in your
> > phrasing, since we are removing the lockdep assert from the path that calls
> > __mod_memcg_lruvec_state() and not memcg_stats_lock()?
> > Or am I missing something?
> 
> Yeah, makes sense.
> 
>  mm/memcontrol.c |    2 --
>  1 file changed, 2 deletions(-)
> 
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3147,8 +3147,6 @@ static inline void __mod_objcg_mlstate(s
>  	struct mem_cgroup *memcg;
>  	struct lruvec *lruvec;
>  
> -	lockdep_assert_irqs_disabled();
> -
>  	rcu_read_lock();
>  	memcg = obj_cgroup_memcg(objcg);
>  	lruvec = mem_cgroup_lruvec(memcg, pgdat);
Vlastimil Babka (SUSE) May 28, 2024, 3:07 p.m. UTC | #3
On 5/28/24 4:59 PM, Shakeel Butt wrote:
> On Tue, May 28, 2024 at 04:13:41PM GMT, Sebastian Andrzej Siewior wrote:
>> The assert was introduced in the commit cited below as insurance that
>> the semantics remain the same after the local_irq_save() has been
>> removed and the function has been made static.
>> 
>> The original requirement to disable interrupts was due to the
>> modification of per-CPU counters, which require interrupts to be
>> disabled because the counter update operation is not atomic and some of
>> the counters are updated from interrupt context.
>> 
>> All callers of __mod_objcg_mlstate() acquire a lock
>> (memcg_stock.stock_lock) which disables interrupts on !PREEMPT_RT, so
>> the lockdep assert is satisfied. On PREEMPT_RT interrupts are not
>> disabled and the assert triggers.
>> 
>> The safety of the counter update is already checked by
>> VM_WARN_ON_IRQS_ENABLED(), which is part of __mod_memcg_lruvec_state(),
>> so yet another check is not required.
> 
> One question on VM_WARN_ON_IRQS_ENABLED() in __mod_memcg_lruvec_state():
> on a PREEMPT_RT kernel with CONFIG_DEBUG_VM, will that
> VM_WARN_ON_IRQS_ENABLED() cause a splat, or is VM_WARN_ON_IRQS_ENABLED()
> special on PREEMPT_RT kernels?

It only does something with CONFIG_DEBUG_VM_IRQSOFF and that's disabled by
dependencies on PREEMPT_RT :)

>> 
>> Remove the lockdep assert from __mod_objcg_mlstate().
>> 
>> Fixes: 91882c1617c15 ("memcg: simple cleanup of stats update functions")
>> Link: https://lore.kernel.org/r/20240528121928.i-Gu7Jvg@linutronix.de
>> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
>> ---
>> On 2024-05-28 15:44:51 [+0200], Vlastimil Babka (SUSE) wrote:
>> > I think just s/memcg_stats_lock()/__mod_memcg_lruvec_state()/ in your
>> > phrasing, since we are removing the lockdep assert from the path that calls
>> > __mod_memcg_lruvec_state() and not memcg_stats_lock()?
>> > Or am I missing something?
>> 
>> Yeah, makes sense.
>> 
>>  mm/memcontrol.c |    2 --
>>  1 file changed, 2 deletions(-)
>> 
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -3147,8 +3147,6 @@ static inline void __mod_objcg_mlstate(s
>>  	struct mem_cgroup *memcg;
>>  	struct lruvec *lruvec;
>>  
>> -	lockdep_assert_irqs_disabled();
>> -
>>  	rcu_read_lock();
>>  	memcg = obj_cgroup_memcg(objcg);
>>  	lruvec = mem_cgroup_lruvec(memcg, pgdat);
Sebastian Andrzej Siewior May 28, 2024, 3:08 p.m. UTC | #4
On 2024-05-28 07:59:57 [-0700], Shakeel Butt wrote:
> One question on VM_WARN_ON_IRQS_ENABLED() in __mod_memcg_lruvec_state():
> on a PREEMPT_RT kernel with CONFIG_DEBUG_VM, will that
> VM_WARN_ON_IRQS_ENABLED() cause a splat, or is VM_WARN_ON_IRQS_ENABLED()
> special on PREEMPT_RT kernels?

we have the following in the header file:

| #ifdef CONFIG_DEBUG_VM_IRQSOFF
| #define VM_WARN_ON_IRQS_ENABLED() WARN_ON_ONCE(!irqs_disabled())
| #else
| #define VM_WARN_ON_IRQS_ENABLED() do { } while (0)
| #endif

and this in Kconfig:
| config DEBUG_VM_IRQSOFF
|         def_bool DEBUG_VM && !PREEMPT_RT
|

which means on PREEMPT_RT we end up with "do {…"

Sebastian
Shakeel Butt May 28, 2024, 4:20 p.m. UTC | #5
On Tue, May 28, 2024 at 05:08:56PM GMT, Sebastian Andrzej Siewior wrote:
> On 2024-05-28 07:59:57 [-0700], Shakeel Butt wrote:
> > One question on VM_WARN_ON_IRQS_ENABLED() in __mod_memcg_lruvec_state():
> > on a PREEMPT_RT kernel with CONFIG_DEBUG_VM, will that
> > VM_WARN_ON_IRQS_ENABLED() cause a splat, or is VM_WARN_ON_IRQS_ENABLED()
> > special on PREEMPT_RT kernels?
> 
> we have the following in the header file:
> 
> | #ifdef CONFIG_DEBUG_VM_IRQSOFF
> | #define VM_WARN_ON_IRQS_ENABLED() WARN_ON_ONCE(!irqs_disabled())
> | #else
> | #define VM_WARN_ON_IRQS_ENABLED() do { } while (0)
> | #endif
> 
> and this in Kconfig:
> | config DEBUG_VM_IRQSOFF
> |         def_bool DEBUG_VM && !PREEMPT_RT
> |
> 
> which means on PREEMPT_RT we end up with "do {…"

Thanks for the explanation.
Shakeel Butt May 28, 2024, 4:21 p.m. UTC | #6
On Tue, May 28, 2024 at 05:07:06PM GMT, Vlastimil Babka (SUSE) wrote:
> On 5/28/24 4:59 PM, Shakeel Butt wrote:
> > On Tue, May 28, 2024 at 04:13:41PM GMT, Sebastian Andrzej Siewior wrote:
> >> The assert was introduced in the commit cited below as insurance that
> >> the semantics remain the same after the local_irq_save() has been
> >> removed and the function has been made static.
> >> 
> >> The original requirement to disable interrupts was due to the
> >> modification of per-CPU counters, which require interrupts to be
> >> disabled because the counter update operation is not atomic and some of
> >> the counters are updated from interrupt context.
> >> 
> >> All callers of __mod_objcg_mlstate() acquire a lock
> >> (memcg_stock.stock_lock) which disables interrupts on !PREEMPT_RT, so
> >> the lockdep assert is satisfied. On PREEMPT_RT interrupts are not
> >> disabled and the assert triggers.
> >> 
> >> The safety of the counter update is already checked by
> >> VM_WARN_ON_IRQS_ENABLED(), which is part of __mod_memcg_lruvec_state(),
> >> so yet another check is not required.
> > 
> > One question on VM_WARN_ON_IRQS_ENABLED() in __mod_memcg_lruvec_state():
> > on a PREEMPT_RT kernel with CONFIG_DEBUG_VM, will that
> > VM_WARN_ON_IRQS_ENABLED() cause a splat, or is VM_WARN_ON_IRQS_ENABLED()
> > special on PREEMPT_RT kernels?
> 
> It only does something with CONFIG_DEBUG_VM_IRQSOFF and that's disabled by
> dependencies on PREEMPT_RT :)

Thanks for the explanation.
Shakeel Butt May 28, 2024, 4:21 p.m. UTC | #7
On Tue, May 28, 2024 at 04:13:41PM GMT, Sebastian Andrzej Siewior wrote:
> The assert was introduced in the commit cited below as insurance that
> the semantics remain the same after the local_irq_save() has been
> removed and the function has been made static.
> 
> The original requirement to disable interrupts was due to the
> modification of per-CPU counters, which require interrupts to be
> disabled because the counter update operation is not atomic and some of
> the counters are updated from interrupt context.
> 
> All callers of __mod_objcg_mlstate() acquire a lock
> (memcg_stock.stock_lock) which disables interrupts on !PREEMPT_RT, so
> the lockdep assert is satisfied. On PREEMPT_RT interrupts are not
> disabled and the assert triggers.
> 
> The safety of the counter update is already checked by
> VM_WARN_ON_IRQS_ENABLED(), which is part of __mod_memcg_lruvec_state(),
> so yet another check is not required.
> 
> Remove the lockdep assert from __mod_objcg_mlstate().
> 
> Fixes: 91882c1617c15 ("memcg: simple cleanup of stats update functions")
> Link: https://lore.kernel.org/r/20240528121928.i-Gu7Jvg@linutronix.de
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
Michal Hocko May 29, 2024, 9:16 a.m. UTC | #8
On Tue 28-05-24 16:13:41, Sebastian Andrzej Siewior wrote:
> The assert was introduced in the commit cited below as insurance that
> the semantics remain the same after the local_irq_save() has been
> removed and the function has been made static.
> 
> The original requirement to disable interrupts was due to the
> modification of per-CPU counters, which require interrupts to be
> disabled because the counter update operation is not atomic and some of
> the counters are updated from interrupt context.
> 
> All callers of __mod_objcg_mlstate() acquire a lock
> (memcg_stock.stock_lock) which disables interrupts on !PREEMPT_RT, so
> the lockdep assert is satisfied. On PREEMPT_RT interrupts are not
> disabled and the assert triggers.
> 
> The safety of the counter update is already checked by
> VM_WARN_ON_IRQS_ENABLED(), which is part of __mod_memcg_lruvec_state(),
> so yet another check is not required.
> 
> Remove the lockdep assert from __mod_objcg_mlstate().
> 
> Fixes: 91882c1617c15 ("memcg: simple cleanup of stats update functions")
> Link: https://lore.kernel.org/r/20240528121928.i-Gu7Jvg@linutronix.de
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Acked-by: Michal Hocko <mhocko@suse.com>

Patch

--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3147,8 +3147,6 @@  static inline void __mod_objcg_mlstate(s
 	struct mem_cgroup *memcg;
 	struct lruvec *lruvec;
 
-	lockdep_assert_irqs_disabled();
-
 	rcu_read_lock();
 	memcg = obj_cgroup_memcg(objcg);
 	lruvec = mem_cgroup_lruvec(memcg, pgdat);
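
For context, __mod_objcg_mlstate() after this change reads roughly as below.
The hunk above only shows the head of the function, so the signature and the
tail (the __mod_memcg_lruvec_state() call and rcu_read_unlock()) are
reconstructed from the surrounding discussion rather than quoted from the
tree:

static inline void __mod_objcg_mlstate(struct obj_cgroup *objcg,
				       struct pglist_data *pgdat,
				       enum node_stat_item idx, int nr)
{
	struct mem_cgroup *memcg;
	struct lruvec *lruvec;

	rcu_read_lock();
	memcg = obj_cgroup_memcg(objcg);
	lruvec = mem_cgroup_lruvec(memcg, pgdat);
	/*
	 * __mod_memcg_lruvec_state() contains VM_WARN_ON_IRQS_ENABLED(),
	 * which keeps the debug coverage the removed assert provided and
	 * compiles to a no-op on PREEMPT_RT (see the thread above).
	 */
	__mod_memcg_lruvec_state(lruvec, idx, nr);
	rcu_read_unlock();
}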