
[v2] mm: workingset: replace IRQ-off check with a lockdep assert.

Message ID 20190211113829.sqf6bdi4c4cdd3rp@linutronix.de (mailing list archive)
State New, archived
Series [v2] mm: workingset: replace IRQ-off check with a lockdep assert.

Commit Message

Sebastian Andrzej Siewior Feb. 11, 2019, 11:38 a.m. UTC
Commit

  68d48e6a2df57 ("mm: workingset: add vmstat counter for shadow nodes")

introduced an IRQ-off check to ensure that a lock is held which also
disables interrupts. This does not work the same way on -RT because none
of the locks that are held disable interrupts.
Replace this check with a lockdep assert which ensures that the lock is
held.

Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
v1…v2: lockdep_is_held() => lockdep_assert_held()

 mm/workingset.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

Comments

Johannes Weiner Feb. 11, 2019, 6:53 p.m. UTC | #1
On Mon, Feb 11, 2019 at 12:38:29PM +0100, Sebastian Andrzej Siewior wrote:
> Commit
> 
>   68d48e6a2df57 ("mm: workingset: add vmstat counter for shadow nodes")
> 
> introduced an IRQ-off check to ensure that a lock is held which also
> disables interrupts. This does not work the same way on -RT because none
> of the locks that are held disable interrupts.
> Replace this check with a lockdep assert which ensures that the lock is
> held.
> 
> Cc: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

I'm not against checking for the lock, but if IRQs aren't disabled,
what ensures __mod_lruvec_state() is safe? I'm guessing it's because
preemption is disabled and irq handlers are punted to process context.

That said, it seems weird to me that

	spin_lock_irqsave();
	BUG_ON(!irqs_disabled());
	spin_unlock_irqrestore();

would trigger. Wouldn't it make sense to have a raw_irqs_disabled() or
something and keep the irqs_disabled() abstraction layer intact?
Sebastian Andrzej Siewior Feb. 11, 2019, 7:13 p.m. UTC | #2
On 2019-02-11 13:53:18 [-0500], Johannes Weiner wrote:
> On Mon, Feb 11, 2019 at 12:38:29PM +0100, Sebastian Andrzej Siewior wrote:
> > Commit
> > 
> >   68d48e6a2df57 ("mm: workingset: add vmstat counter for shadow nodes")
> > 
> > introduced an IRQ-off check to ensure that a lock is held which also
> > disables interrupts. This does not work the same way on -RT because none
> > of the locks that are held disable interrupts.
> > Replace this check with a lockdep assert which ensures that the lock is
> > held.
> > 
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> 
> I'm not against checking for the lock, but if IRQs aren't disabled,
> what ensures __mod_lruvec_state() is safe?

how do you define safe? I've been looking for dependencies of
__mod_lruvec_state() but found only that the lock is held during the RMW
operation with WORKINGSET_NODES idx.

>                                            I'm guessing it's because
> preemption is disabled and irq handlers are punted to process context.
preemption is enabled and IRQs are processed in forced-threaded mode.

> That said, it seems weird to me that
> 
> 	spin_lock_irqsave();
> 	BUG_ON(!irqs_disabled());
> 	spin_unlock_irqrestore();
> 
> would trigger. Wouldn't it make sense to have a raw_irqs_disabled() or
> something and keep the irqs_disabled() abstraction layer intact?

maybe, if I knew why interrupts should be disabled in the first place.
The ->i_pages lock is never acquired with disabled interrupts, so it
should be safe to proceed as-is. Should there be a spot in -RT where the
lock is acquired with disabled interrupts, then lockdep would scream, and
we would have to decide whether to move everything to raw_ locks (and
live with the consequences) or to avoid acquiring the lock with disabled
interrupts.

Sebastian
Matthew Wilcox Feb. 11, 2019, 7:17 p.m. UTC | #3
On Mon, Feb 11, 2019 at 08:13:45PM +0100, Sebastian Andrzej Siewior wrote:
> On 2019-02-11 13:53:18 [-0500], Johannes Weiner wrote:
> > I'm not against checking for the lock, but if IRQs aren't disabled,
> > what ensures __mod_lruvec_state() is safe?
> 
> how do you define safe? I've been looking for dependencies of
> __mod_lruvec_state() but found only that the lock is held during the RMW
> operation with WORKINGSET_NODES idx.
> 
> >                                            I'm guessing it's because
> > preemption is disabled and irq handlers are punted to process context.
> preemption is enabled and IRQs are processed in forced-threaded mode.
> 
> > That said, it seems weird to me that
> > 
> > 	spin_lock_irqsave();
> > 	BUG_ON(!irqs_disabled());
> > 	spin_unlock_irqrestore();
> > 
> > would trigger. Wouldn't it make sense to have a raw_irqs_disabled() or
> > something and keep the irqs_disabled() abstraction layer intact?
> 
> maybe, if I knew why interrupts should be disabled in the first place.
> The ->i_pages lock is never acquired with disabled interrupts, so it
> should be safe to proceed as-is. Should there be a spot in -RT where the
> lock is acquired with disabled interrupts, then lockdep would scream, and
> we would have to decide whether to move everything to raw_ locks (and
> live with the consequences) or to avoid acquiring the lock with disabled
> interrupts.

I think you mean "the i_pages lock is never acquired with interrupts
enabled".  Lockdep would scream if it were -- you'd be in a situation
where an interrupt handler which acquired the i_pages lock could deadlock
against you.
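
A minimal userspace model of that deadlock (an illustrative pthread
sketch, not kernel code; the "interrupt" stands in for a hardirq on
the holder's CPU):

	#include <pthread.h>

	static pthread_spinlock_t lock;	/* pthread_spin_init() omitted */

	static void fake_irq_handler(void)
	{
		pthread_spin_lock(&lock);	/* spins forever if it
						 * interrupted the holder */
		pthread_spin_unlock(&lock);
	}

	static void holder(void)
	{
		pthread_spin_lock(&lock);
		/* An interrupt arriving here whose handler runs
		 * fake_irq_handler() on this CPU never makes progress:
		 * the holder is suspended and cannot unlock. Disabling
		 * IRQs around the critical section rules this out. */
		pthread_spin_unlock(&lock);
	}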
Sebastian Andrzej Siewior Feb. 11, 2019, 7:41 p.m. UTC | #4
On 2019-02-11 11:17:45 [-0800], Matthew Wilcox wrote:
> On Mon, Feb 11, 2019 at 08:13:45PM +0100, Sebastian Andrzej Siewior wrote:
> > On 2019-02-11 13:53:18 [-0500], Johannes Weiner wrote:
> > > I'm not against checking for the lock, but if IRQs aren't disabled,
> > > what ensures __mod_lruvec_state() is safe?
> > 
> > how do you define safe? I've been looking for dependencies of
> > __mod_lruvec_state() but found only that the lock is held during the RMW
> > operation with WORKINGSET_NODES idx.
> > 
> > >                                            I'm guessing it's because
> > > preemption is disabled and irq handlers are punted to process context.
> > preemption is enabled and IRQs are processed in forced-threaded mode.
> > 
> > > That said, it seems weird to me that
> > > 
> > > 	spin_lock_irqsave();
> > > 	BUG_ON(!irqs_disabled());
> > > 	spin_unlock_irqrestore();
> > > 
> > > would trigger. Wouldn't it make sense to have a raw_irqs_disabled() or
> > > something and keep the irqs_disabled() abstraction layer intact?
> > 
> > maybe, if I knew why interrupts should be disabled in the first place.
> > The ->i_pages lock is never acquired with disabled interrupts, so it
> > should be safe to proceed as-is. Should there be a spot in -RT where the
> > lock is acquired with disabled interrupts, then lockdep would scream, and
> > we would have to decide whether to move everything to raw_ locks (and
> > live with the consequences) or to avoid acquiring the lock with disabled
> > interrupts.
> 
> I think you mean "the i_pages lock is never acquired with interrupts
> enabled".  Lockdep would scream if it were -- you'd be in a situation
> where an interrupt handler which acquired the i_pages lock could deadlock
> against you.
With RT enabled, the i_pages lock is always acquired with interrupts
enabled because spin_lock_irq() does not disable interrupts.
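
Roughly, the -RT semantics being described are (a simplified sketch
with illustrative names, not the actual implementation):

	/* On -RT, spinlock_t maps to a sleeping rtmutex-based lock, so
	 * the _irq/_irqsave variants leave the hardware interrupt
	 * state untouched: */
	void rt_spin_lock_irqsave(spinlock_t *lock, unsigned long *flags)
	{
		*flags = 0;		/* nothing saved, IRQs stay on */
		rt_spin_lock(lock);	/* may sleep instead of spinning */
	}

which is also why the BUG_ON(!irqs_disabled()) pattern from earlier in
the thread fires: irqs_disabled() is false inside the critical section.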

Sebastian
Johannes Weiner Feb. 11, 2019, 9:02 p.m. UTC | #5
On Mon, Feb 11, 2019 at 08:13:45PM +0100, Sebastian Andrzej Siewior wrote:
> On 2019-02-11 13:53:18 [-0500], Johannes Weiner wrote:
> > On Mon, Feb 11, 2019 at 12:38:29PM +0100, Sebastian Andrzej Siewior wrote:
> > > Commit
> > > 
> > >   68d48e6a2df57 ("mm: workingset: add vmstat counter for shadow nodes")
> > > 
> > > introduced an IRQ-off check to ensure that a lock is held which also
> > > disables interrupts. This does not work the same way on -RT because none
> > > of the locks that are held disable interrupts.
> > > Replace this check with a lockdep assert which ensures that the lock is
> > > held.
> > > 
> > > Cc: Peter Zijlstra <peterz@infradead.org>
> > > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > 
> > I'm not against checking for the lock, but if IRQs aren't disabled,
> > what ensures __mod_lruvec_state() is safe?
> 
> how do you define safe? I've been looking for dependencies of
> __mod_lruvec_state() but found only that the lock is held during the RMW
> operation with WORKINGSET_NODES idx.

These stat functions are not allowed to nest, and the executing thread
cannot migrate to another CPU during the operation, otherwise they
corrupt the state they're modifying.

They are called from interrupt handlers, such as when NR_WRITEBACK is
decreased. Thus workingset_update_node() must exclude preemption by
irq handlers on the local CPU.

They rely on IRQ-disabling to also disable CPU migration.
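
For illustration, the hazard is the classic lost update (a made-up
minimal sketch, not the actual vmstat code):

	static long stat_delta;	/* stands in for a per-CPU stat counter */

	static void mod_stat(long delta)
	{
		long v = stat_delta;	/* read */
		/* An irq handler calling mod_stat() at this point has
		 * its update overwritten by the write below. */
		stat_delta = v + delta;	/* modify + write back */
	}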

> >                                            I'm guessing it's because
> > preemption is disabled and irq handlers are punted to process context.
> preemption is enabled and IRQs are processed in forced-threaded mode.

That doesn't sound safe.
Sebastian Andrzej Siewior Feb. 13, 2019, 9:27 a.m. UTC | #6
On 2019-02-11 16:02:08 [-0500], Johannes Weiner wrote:
> > how do you define safe? I've been looking for dependencies of
> > __mod_lruvec_state() but found only that the lock is held during the RMW
> > operation with WORKINGSET_NODES idx.
> 
> These stat functions are not allowed to nest, and the executing thread
> cannot migrate to another CPU during the operation, otherwise they
> corrupt the state they're modifying.

If everyone is taking the same lock (like i_pages.xa_lock), then there
will not be two instances updating the same stat. The owner of the
(sleeping) spinlock will not be migrated to another CPU.

> They are called from interrupt handlers, such as when NR_WRITEBACK is
> decreased. Thus workingset_update_node() must exclude preemption by
> irq handlers on the local CPU.

Do you have an example of a code path that updates NR_WRITEBACK?
 
> They rely on IRQ-disabling to also disable CPU migration.
The spinlock disables CPU migration. 

> > >                                            I'm guessing it's because
> > > preemption is disabled and irq handlers are punted to process context.
> > preemption is enabled and IRQs are processed in forced-threaded mode.
> 
> That doesn't sound safe.

Do you have a test case or something I could throw at it to verify that
this still works? So far nothing complains…

Sebastian
Johannes Weiner Feb. 13, 2019, 2:56 p.m. UTC | #7
On Wed, Feb 13, 2019 at 10:27:54AM +0100, Sebastian Andrzej Siewior wrote:
> On 2019-02-11 16:02:08 [-0500], Johannes Weiner wrote:
> > > how do you define safe? I've been looking for dependencies of
> > > __mod_lruvec_state() but found only that the lock is held during the RMW
> > > operation with WORKINGSET_NODES idx.
> > 
> > These stat functions are not allowed to nest, and the executing thread
> > cannot migrate to another CPU during the operation, otherwise they
> > corrupt the state they're modifying.
> 
> If everyone is taking the same lock (like i_pages.xa_lock), then there
> will not be two instances updating the same stat. The owner of the
> (sleeping) spinlock will not be migrated to another CPU.

This might be true for this particular stat item, but they are general
VM statistics. They're assuredly not all taking the xa_lock.

> > They are called from interrupt handlers, such as when NR_WRITEBACK is
> > decreased. Thus workingset_update_node() must exclude preemption by
> > irq handlers on the local CPU.
> 
> Do you have an example of a code path that updates NR_WRITEBACK?

end_page_writeback()
 test_clear_page_writeback()
   dec_lruvec_state(lruvec, NR_WRITEBACK)

> > They rely on IRQ-disabling to also disable CPU migration.
> The spinlock disables CPU migration. 
> 
> > > >                                            I'm guessing it's because
> > > > preemption is disabled and irq handlers are punted to process context.
> > > preemption is enabled and IRQs are processed in forced-threaded mode.
> > 
> > That doesn't sound safe.
> 
> Do you have a test case or something I could throw at it to verify that
> this still works? So far nothing complains…

It's not easy to get the timing right on purpose, but we've seen in
production what happens when you don't protect these counter updates
from interrupts. See c3cc39118c36 ("mm: memcontrol: fix NR_WRITEBACK
leak in memcg and system stats").
Sebastian Andrzej Siewior Aug. 21, 2019, 11:21 a.m. UTC | #8
sorry, I somehow forgot about this…

On 2019-02-13 09:56:56 [-0500], Johannes Weiner wrote:
> On Wed, Feb 13, 2019 at 10:27:54AM +0100, Sebastian Andrzej Siewior wrote:
> > On 2019-02-11 16:02:08 [-0500], Johannes Weiner wrote:
> > > > how do you define safe? I've been looking for dependencies of
> > > > __mod_lruvec_state() but found only that the lock is held during the RMW
> > > > operation with WORKINGSET_NODES idx.
> > > 
> > > These stat functions are not allowed to nest, and the executing thread
> > > cannot migrate to another CPU during the operation, otherwise they
> > > corrupt the state they're modifying.
> > 
> > If everyone is taking the same lock (like i_pages.xa_lock), then there
> > will not be two instances updating the same stat. The owner of the
> > (sleeping) spinlock will not be migrated to another CPU.
> 
> This might be true for this particular stat item, but they are general
> VM statistics. They're assuredly not all taking the xa_lock.

This one in particular does, and my guess is that interrupts are
disabled here because of the xa_lock. So the question is: why should
interrupts be disabled? Is it due to the lock that should have been
acquired (and which, as such, disables interrupts), _or_ because of the
*_lruvec_slab_state() operation itself?

> > > They are called from interrupt handlers, such as when NR_WRITEBACK is
> > > decreased. Thus workingset_update_node() must exclude preemption by
> > > irq handlers on the local CPU.
> > 
> > Do you have an example of a code path that updates NR_WRITEBACK?
> 
> end_page_writeback()
>  test_clear_page_writeback()
>    dec_lruvec_state(lruvec, NR_WRITEBACK)

So with a warning in dec_lruvec_state() I found only a call path from
softirq (like scsi_io_completion() / bio_endio()). Having a lockdep
annotation instead of "just" preempt_disable() would have helped :)
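
Something like the following would have documented the requirement
(purely an illustrative placement, not a patch;
lockdep_assert_irqs_disabled() is the lockdep-aware way to express the
context requirement):

	void __mod_lruvec_state(struct lruvec *lruvec,
				enum node_stat_item idx, int val)
	{
		/* Documents and, with lockdep enabled, enforces that
		 * callers run with interrupts disabled. */
		lockdep_assert_irqs_disabled();
		/* ... the actual counter update ... */
	}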

> > > They rely on IRQ-disabling to also disable CPU migration.
> > The spinlock disables CPU migration. 
> > 
> > > > >                                            I'm guessing it's because
> > > > > preemption is disabled and irq handlers are punted to process context.
> > > > preemption is enabled and IRQs are processed in forced-threaded mode.
> > > 
> > > That doesn't sound safe.
> > 
> > Do you have a test case or something I could throw at it to verify that
> > this still works? So far nothing complains…
> 
> It's not easy to get the timing right on purpose, but we've seen in
> production what happens when you don't protect these counter updates
> from interrupts. See c3cc39118c36 ("mm: memcontrol: fix NR_WRITEBACK
> leak in memcg and system stats").

Based on the code I'm looking at, it looks fine. Should I just
resubmit the patch?

Sebastian
Johannes Weiner Aug. 21, 2019, 3:21 p.m. UTC | #9
On Wed, Aug 21, 2019 at 01:21:16PM +0200, Sebastian Andrzej Siewior wrote:
> sorry, I somehow forgot about this…
> 
> On 2019-02-13 09:56:56 [-0500], Johannes Weiner wrote:
> > On Wed, Feb 13, 2019 at 10:27:54AM +0100, Sebastian Andrzej Siewior wrote:
> > > On 2019-02-11 16:02:08 [-0500], Johannes Weiner wrote:
> > > > > how do you define safe? I've been looking for dependencies of
> > > > > __mod_lruvec_state() but found only that the lock is held during the RMW
> > > > > operation with WORKINGSET_NODES idx.
> > > > 
> > > > These stat functions are not allowed to nest, and the executing thread
> > > > cannot migrate to another CPU during the operation, otherwise they
> > > > corrupt the state they're modifying.
> > > 
> > > If everyone is taking the same lock (like i_pages.xa_lock), then there
> > > will not be two instances updating the same stat. The owner of the
> > > (sleeping) spinlock will not be migrated to another CPU.
> > 
> > This might be true for this particular stat item, but they are general
> > VM statistics. They're assuredly not all taking the xa_lock.
> 
> This one in particular does, and my guess is that interrupts are
> disabled here because of the xa_lock. So the question is: why should
> interrupts be disabled? Is it due to the lock that should have been
> acquired (and which, as such, disables interrupts), _or_ because of the
> *_lruvec_slab_state() operation itself?
> 
> > > > They are called from interrupt handlers, such as when NR_WRITEBACK is
> > > > decreased. Thus workingset_update_node() must exclude preemption by
> > > > irq handlers on the local CPU.
> > > 
> > > Do you have an example of a code path that updates NR_WRITEBACK?
> > 
> > end_page_writeback()
> >  test_clear_page_writeback()
> >    dec_lruvec_state(lruvec, NR_WRITEBACK)
> 
> So with a warning in dec_lruvec_state() I found only a call path from
> softirq (like scsi_io_completion() / bio_endio()). Having a lockdep
> annotation instead of "just" preempt_disable() would have helped :)
> 
> > > > They rely on IRQ-disabling to also disable CPU migration.
> > > The spinlock disables CPU migration. 
> > > 
> > > > > >                                            I'm guessing it's because
> > > > > > preemption is disabled and irq handlers are punted to process context.
> > > > > preemption is enabled and IRQs are processed in forced-threaded mode.
> > > > 
> > > > That doesn't sound safe.
> > > 
> > > Do you have a test case or something I could throw at it to verify that
> > > this still works? So far nothing complains…
> > 
> > It's not easy to get the timing right on purpose, but we've seen in
> > production what happens when you don't protect these counter updates
> > from interrupts. See c3cc39118c36 ("mm: memcontrol: fix NR_WRITEBACK
> > leak in memcg and system stats").
> 
> Based on the code I'm looking at, it looks fine. Should I just
> resubmit the patch?

No, NAK to this patch and others like it for the mm code.

The serialization scheme for the vmstats facility is that stats can be
modified from interrupt context, and so they rely on interrupts being
disabled. This check is correct.

If you want to comprehensively change the scheme, you're of course
welcome to propose that, and I won't be in your way. But that includes
review and update of *all* participants, from the mutation points that
disable irqs (mod_zone_page_state() and friends) to the execution
context of all callstacks, including the full block layer.

What we are NOT doing is eliminating checks that correctly verify the
current locking scheme. We've seen race conditions in this code that
took millions of machine hours to trigger when the rules were broken,
so we rely on explicit checks during code development. It's also not
surprising that they're the only thing that triggers in your testing.

Making this work correctly for RT needs a more thoughtful approach.
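
For reference, the convention looks roughly like this (paraphrased
from one variant of mod_zone_page_state() in mm/vmstat.c; a simplified
sketch): the plain helpers disable interrupts themselves, while the
__-prefixed helpers assume the caller already runs with IRQs off.

	void mod_zone_page_state(struct zone *zone,
				 enum zone_stat_item item, long delta)
	{
		unsigned long flags;

		local_irq_save(flags);
		__mod_zone_page_state(zone, item, delta); /* IRQs off */
		local_irq_restore(flags);
	}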

Patch

--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -368,6 +368,8 @@  static struct list_lru shadow_nodes;
 
 void workingset_update_node(struct xa_node *node)
 {
+	struct address_space *mapping;
+
 	/*
 	 * Track non-empty nodes that contain only shadow entries;
 	 * unlink those that contain pages or are being freed.
@@ -376,7 +378,8 @@  void workingset_update_node(struct xa_no
 	 * already where they should be. The list_empty() test is safe
 	 * as node->private_list is protected by the i_pages lock.
 	 */
-	VM_WARN_ON_ONCE(!irqs_disabled());  /* For __inc_lruvec_page_state */
+	mapping = container_of(node->array, struct address_space, i_pages);
+	lockdep_assert_held(&mapping->i_pages.xa_lock);
 
 	if (node->count && node->count == node->nr_values) {
 		if (list_empty(&node->private_list)) {