
[0/2] fsnotify: fix softlockups iterating over d_subdirs

Message ID 20221018041233.376977-1-stephen.s.brennan@oracle.com (mailing list archive)
Series fsnotify: fix softlockups iterating over d_subdirs

Message

Stephen Brennan Oct. 18, 2022, 4:12 a.m. UTC
Hi Jan, Amir, Al,

Here's my first shot at implementing what we discussed. I tested it using the
negative dentry creation tool I mentioned in my previous message, with a similar
workflow. Rather than having a bunch of threads accessing the directory to
create that "thundering herd" of CPUs in __fsnotify_update_child_dentry_flags, I
just started a lot of inotifywait tasks:

1. Create 100 million negative dentries in a dir
2. Use trace-cmd to watch __fsnotify_update_child_dentry_flags:
   trace-cmd start -p function_graph -l __fsnotify_update_child_dentry_flags
   sudo cat /sys/kernel/debug/tracing/trace_pipe
3. Run a lot of inotifywait tasks: for i in {1..10}; do inotifywait $dir & done

With step #3, I see only one execution of __fsnotify_update_child_dentry_flags.
Once that completes, all the inotifywait tasks say "Watches established".
Similarly, once an access occurs in the directory, a single
__fsnotify_update_child_dentry_flags execution occurs, and all the tasks exit.
In short: it works great!

However, while testing this, I've observed a dentry still in use warning during
unmount of rpc_pipefs on the "nfs" dentry during shutdown. NFS is of course in
use, and I assume that fsnotify must have been used to trigger this. The error
is not there on mainline without my patch so it's definitely caused by this
code. I'll continue debugging it but I wanted to share my first take on this so
you could take a look.

[ 1595.197339] BUG: Dentry 000000005f5e7197{i=67,n=nfs}  still in use (2) [unmount of rpc_pipefs rpc_pipefs]

Thanks!
Stephen

Stephen Brennan (2):
  fsnotify: Protect i_fsnotify_mask and child flags with inode rwsem
  fsnotify: allow sleepable child flag update

 fs/notify/fsnotify.c | 84 +++++++++++++++++++++++++++++++-------------
 fs/notify/mark.c     | 55 ++++++++++++++++++++++-------
 2 files changed, 103 insertions(+), 36 deletions(-)

Comments

Amir Goldstein Oct. 18, 2022, 8:07 a.m. UTC | #1
On Tue, Oct 18, 2022 at 7:12 AM Stephen Brennan
<stephen.s.brennan@oracle.com> wrote:
>
> Hi Jan, Amir, Al,
>
> Here's my first shot at implementing what we discussed. I tested it using the
> negative dentry creation tool I mentioned in my previous message, with a similar
> workflow. Rather than having a bunch of threads accessing the directory to
> create that "thundering herd" of CPUs in __fsnotify_update_child_dentry_flags, I
> just started a lot of inotifywait tasks:
>
> 1. Create 100 million negative dentries in a dir
> 2. Use trace-cmd to watch __fsnotify_update_child_dentry_flags:
>    trace-cmd start -p function_graph -l __fsnotify_update_child_dentry_flags
>    sudo cat /sys/kernel/debug/tracing/trace_pipe
> 3. Run a lot of inotifywait tasks: for i in {1..10}; do inotifywait $dir & done
>
> With step #3, I see only one execution of __fsnotify_update_child_dentry_flags.
> Once that completes, all the inotifywait tasks say "Watches established".
> Similarly, once an access occurs in the directory, a single
> __fsnotify_update_child_dentry_flags execution occurs, and all the tasks exit.
> In short: it works great!
>
> However, while testing this, I've observed a dentry still in use warning during
> unmount of rpc_pipefs on the "nfs" dentry during shutdown. NFS is of course in
> use, and I assume that fsnotify must have been used to trigger this. The error
> is not there on mainline without my patch so it's definitely caused by this
> code. I'll continue debugging it but I wanted to share my first take on this so
> you could take a look.
>
> [ 1595.197339] BUG: Dentry 000000005f5e7197{i=67,n=nfs}  still in use (2) [unmount of rpc_pipefs rpc_pipefs]
>

Hmm, the assumption we made about partial stability of d_subdirs
under dir inode lock looks incorrect for rpc_pipefs.
None of the functions that update the rpc_pipefs dcache take the parent
inode lock.

The assumption looks incorrect for other pseudo fs as well.

The other side of the coin is that we do not really need to worry
about walking a huge list of pseudo fs children.

The question is how to classify those pseudo fs and whether there
are other cases like this that we missed.

Perhaps having simple_dentry_operations is a good enough
clue, but perhaps it is not enough. I am not sure.

It covers all the cases of pseudo fs that I know about, so you
can certainly use this clue to avoid going to sleep in the
update loop as a first approximation.
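
Something along these lines, as an untested first approximation (the
helper name and placement are made up):

#include <linux/dcache.h>
#include <linux/fs.h>

/*
 * Untested illustration only, name made up: many pseudo filesystems end up
 * with simple_dentry_operations (e.g. via simple_lookup() or sb->s_d_op),
 * and their child lists are expected to be short, so skip the sleepable
 * path for them as a first approximation.
 */
static bool fsnotify_may_sleep_during_update(struct dentry *dentry)
{
        return dentry->d_op != &simple_dentry_operations;
}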

I can try to figure this out, but I prefer that Al will chime in to
provide reliable answers to those questions.

Thanks,
Amir.
Stephen Brennan Oct. 18, 2022, 11:52 p.m. UTC | #2
Amir Goldstein <amir73il@gmail.com> writes:
> On Tue, Oct 18, 2022 at 7:12 AM Stephen Brennan
> <stephen.s.brennan@oracle.com> wrote:
>>
>> Hi Jan, Amir, Al,
>>
>> Here's my first shot at implementing what we discussed. I tested it using the
>> negative dentry creation tool I mentioned in my previous message, with a similar
>> workflow. Rather than having a bunch of threads accessing the directory to
>> create that "thundering herd" of CPUs in __fsnotify_update_child_dentry_flags, I
>> just started a lot of inotifywait tasks:
>>
>> 1. Create 100 million negative dentries in a dir
>> 2. Use trace-cmd to watch __fsnotify_update_child_dentry_flags:
>>    trace-cmd start -p function_graph -l __fsnotify_update_child_dentry_flags
>>    sudo cat /sys/kernel/debug/tracing/trace_pipe
>> 3. Run a lot of inotifywait tasks: for i in {1..10}; do inotifywait $dir & done
>>
>> With step #3, I see only one execution of __fsnotify_update_child_dentry_flags.
>> Once that completes, all the inotifywait tasks say "Watches established".
>> Similarly, once an access occurs in the directory, a single
>> __fsnotify_update_child_dentry_flags execution occurs, and all the tasks exit.
>> In short: it works great!
>>
>> However, while testing this, I've observed a dentry still in use warning during
>> unmount of rpc_pipefs on the "nfs" dentry during shutdown. NFS is of course in
>> use, and I assume that fsnotify must have been used to trigger this. The error
>> is not there on mainline without my patch so it's definitely caused by this
>> code. I'll continue debugging it but I wanted to share my first take on this so
>> you could take a look.
>>
>> [ 1595.197339] BUG: Dentry 000000005f5e7197{i=67,n=nfs}  still in use (2) [unmount of rpc_pipefs rpc_pipefs]
>>
>
> Hmm, the assumption we made about partial stability of d_subdirs
> under dir inode lock looks incorrect for rpc_pipefs.
> None of the functions that update the rpc_pipefs dcache take the parent
> inode lock.

That may be, but I'm confused how that would trigger this issue. If I'm
understanding correctly, this warning indicates a reference counting
bug.

If __fsnotify_update_child_dentry_flags() had gone to sleep and the list
were edited, then it seems like there could be only two possibilities
that could cause bugs:

1. The dentry we slept holding a reference to was removed from the list,
and maybe moved to a different one, or just removed. If that were the
case, we're quite unlucky, because we'll start looping indefinitely as
we'll never get back to the beginning of the list, or worse.

2. A dentry adjacent to the one we held a reference to was removed. In
that case, our dentry's d_child pointers should get rearranged, and when
we wake, we should see those updates and continue.

In neither of those cases do I understand where we could have done a
dget() unpaired with a dput(), which is what seemingly would trigger
this issue.

I'm probably wrong, but without understanding the mechanism behind the
error, I'm not sure how to approach it.
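
For reference, the iteration pattern I have in mind is roughly the
following (simplified sketch, not the actual patch; the names and the
exact locking differ):

#include <linux/dcache.h>
#include <linux/sched.h>

/* Simplified sketch of the sleepable d_subdirs walk; not the actual patch. */
static void update_children_sleepable(struct dentry *parent, bool watched)
{
        struct dentry *child, *pinned = NULL;

        spin_lock(&parent->d_lock);
        list_for_each_entry(child, &parent->d_subdirs, d_child) {
                if (child->d_inode) {
                        spin_lock_nested(&child->d_lock, DENTRY_D_LOCK_NESTED);
                        if (watched)
                                child->d_flags |= DCACHE_FSNOTIFY_PARENT_WATCHED;
                        else
                                child->d_flags &= ~DCACHE_FSNOTIFY_PARENT_WATCHED;
                        spin_unlock(&child->d_lock);
                }
                if (!need_resched())
                        continue;
                /*
                 * Pin the current child so it cannot be freed while we sleep;
                 * cases 1 and 2 above are about what can happen to its
                 * d_child links in the meantime.
                 */
                dget(child);
                spin_unlock(&parent->d_lock);
                dput(pinned);           /* drop the previous pin, if any */
                pinned = child;
                cond_resched();
                spin_lock(&parent->d_lock);
        }
        spin_unlock(&parent->d_lock);
        dput(pinned);                   /* every dget() pairs with a dput() */
}

The point for this discussion is that the dget()/dput() here are
balanced, so pinning the current child across the sleep shouldn't by
itself explain a leaked reference.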

> The assumption looks incorrect for other pseudo fs as well.
>
> The other side of the coin is that we do not really need to worry
> about walking a huge list of pseudo fs children.
>
> The question is how to classify those pseudo fs and whether there
> are other cases like this that we missed.
>
> Perhaps having simple_dentry_operations is a good enough
> clue, but perhaps it is not enough. I am not sure.
>
> It covers all the cases of pseudo fs that I know about, so you
> can certainly use this clue to avoid going to sleep in the
> update loop as a first approximation.

I would worry that it would become an exercise of whack-a-mole.
Allow/deny-listing certain filesystems for certain behavior seems scary.

> I can try to figure this out, but I prefer that Al will chime in to
> provide reliable answers to those questions.

I have a core dump from the warning (with panic_on_warn=1) and will see
if I can trace or otherwise identify the exact mechanism myself.

> Thanks,
> Amir.
>

Thanks for your detailed review of both the patches. I didn't get much
time today to update the patches and test them. Your feedback looks very
helpful though, and I hope to send out an updated revision tomorrow.

In the absolute worst case (and I don't want to concede defeat just
yet), keeping patch 1 without patch 2 (sleepable iteration) would still
be a major win, since it resolves the thundering herd problem which is
what compounds the problem of the long lists.

Thanks!
Stephen
Amir Goldstein Oct. 19, 2022, 5:33 a.m. UTC | #3
On Wed, Oct 19, 2022 at 2:52 AM Stephen Brennan
<stephen.s.brennan@oracle.com> wrote:
>
> Amir Goldstein <amir73il@gmail.com> writes:
> > On Tue, Oct 18, 2022 at 7:12 AM Stephen Brennan
> > <stephen.s.brennan@oracle.com> wrote:
> >>
> >> Hi Jan, Amir, Al,
> >>
> >> Here's my first shot at implementing what we discussed. I tested it using the
> >> negative dentry creation tool I mentioned in my previous message, with a similar
> >> workflow. Rather than having a bunch of threads accessing the directory to
> >> create that "thundering herd" of CPUs in __fsnotify_update_child_dentry_flags, I
> >> just started a lot of inotifywait tasks:
> >>
> >> 1. Create 100 million negative dentries in a dir
> >> 2. Use trace-cmd to watch __fsnotify_update_child_dentry_flags:
> >>    trace-cmd start -p function_graph -l __fsnotify_update_child_dentry_flags
> >>    sudo cat /sys/kernel/debug/tracing/trace_pipe
> >> 3. Run a lot of inotifywait tasks: for i in {1..10}; do inotifywait $dir & done
> >>
> >> With step #3, I see only one execution of __fsnotify_update_child_dentry_flags.
> >> Once that completes, all the inotifywait tasks say "Watches established".
> >> Similarly, once an access occurs in the directory, a single
> >> __fsnotify_update_child_dentry_flags execution occurs, and all the tasks exit.
> >> In short: it works great!
> >>
> >> However, while testing this, I've observed a dentry still in use warning during
> >> unmount of rpc_pipefs on the "nfs" dentry during shutdown. NFS is of course in
> >> use, and I assume that fsnotify must have been used to trigger this. The error
> >> is not there on mainline without my patch so it's definitely caused by this
> >> code. I'll continue debugging it but I wanted to share my first take on this so
> >> you could take a look.
> >>
> >> [ 1595.197339] BUG: Dentry 000000005f5e7197{i=67,n=nfs}  still in use (2) [unmount of rpc_pipefs rpc_pipefs]
> >>
> >
> > Hmm, the assumption we made about partial stability of d_subdirs
> > under dir inode lock looks incorrect for rpc_pipefs.
> > None of the functions that update the rpc_pipefs dcache take the parent
> > inode lock.
>
> That may be, but I'm confused how that would trigger this issue. If I'm
> understanding correctly, this warning indicates a reference counting
> bug.

Yes.
On generic_shutdown_super() there should be no more
references to dentries.

>
> If __fsnotify_update_child_dentry_flags() had gone to sleep and the list
> were edited, then it seems like there could be only two possibilities
> that could cause bugs:
>
> 1. The dentry we slept holding a reference to was removed from the list,
> and maybe moved to a different one, or just removed. If that were the
> case, we're quite unlucky, because we'll start looping indefinitely as
> we'll never get back to the beginning of the list, or worse.
>
> 2. A dentry adjacent to the one we held a reference to was removed. In
> that case, our dentry's d_child pointers should get rearranged, and when
> we wake, we should see those updates and continue.
>
> In neither of those cases do I understand where we could have done a
> dget() unpaired with a dput(), which is what seemingly would trigger
> this issue.
>

I got the same impression.

> I'm probably wrong, but without understanding the mechanism behind the
> error, I'm not sure how to approach it.
>
> > The assumption looks incorrect for other pseudo fs as well.
> >
> > The other side of the coin is that we do not really need to worry
> > about walking a huge list of pseudo fs children.
> >
> > The question is how to classify those pseudo fs and whether there
> > are other cases like this that we missed.
> >
> > Perhaps having simple_dentry_operations is a good enough
> > clue, but perhaps it is not enough. I am not sure.
> >
> > It covers all the cases of pseudo fs that I know about, so you
> > can certainly use this clue to avoid going to sleep in the
> > update loop as a first approximation.
>
> I would worry that it would become an exercise of whack-a-mole.
> Allow/deny-listing certain filesystems for certain behavior seems scary.
>

Totally agree.

> > I can try to figure this out, but I prefer that Al will chime in to
> > provide reliable answers to those questions.
>
> I have a core dump from the warning (with panic_on_warn=1) and will see
> if I can trace or otherwise identify the exact mechanism myself.
>

Most likely the refcount was already leaked earlier, but
worth trying.

>
> Thanks for your detailed review of both the patches. I didn't get much
> time today to update the patches and test them. Your feedback looks very
> helpful though, and I hope to send out an updated revision tomorrow.
>
> In the absolute worst case (and I don't want to concede defeat just
> yet), keeping patch 1 without patch 2 (sleepable iteration) would still
> be a major win, since it resolves the thundering herd problem which is
> what compounds the problem of the long lists.
>

Makes sense.
Patch 1 logic is solid.

Hope my suggestions don't complicate things too much for you;
if they do, I am sure Jan will find a way to simplify ;)

Thanks,
Amir.
Stephen Brennan Oct. 27, 2022, 10:06 p.m. UTC | #4
Amir Goldstein <amir73il@gmail.com> writes:
> On Wed, Oct 19, 2022 at 2:52 AM Stephen Brennan
> <stephen.s.brennan@oracle.com> wrote:
>>
>> Amir Goldstein <amir73il@gmail.com> writes:
>> > On Tue, Oct 18, 2022 at 7:12 AM Stephen Brennan
>> > <stephen.s.brennan@oracle.com> wrote:
>> >>
>> >> Hi Jan, Amir, Al,
>> >>
>> >> Here's my first shot at implementing what we discussed. I tested it using the
>> >> negative dentry creation tool I mentioned in my previous message, with a similar
>> >> workflow. Rather than having a bunch of threads accessing the directory to
>> >> create that "thundering herd" of CPUs in __fsnotify_update_child_dentry_flags, I
>> >> just started a lot of inotifywait tasks:
>> >>
>> >> 1. Create 100 million negative dentries in a dir
>> >> 2. Use trace-cmd to watch __fsnotify_update_child_dentry_flags:
>> >>    trace-cmd start -p function_graph -l __fsnotify_update_child_dentry_flags
>> >>    sudo cat /sys/kernel/debug/tracing/trace_pipe
>> >> 3. Run a lot of inotifywait tasks: for i in {1..10}; do inotifywait $dir & done
>> >>
>> >> With step #3, I see only one execution of __fsnotify_update_child_dentry_flags.
>> >> Once that completes, all the inotifywait tasks say "Watches established".
>> >> Similarly, once an access occurs in the directory, a single
>> >> __fsnotify_update_child_dentry_flags execution occurs, and all the tasks exit.
>> >> In short: it works great!
>> >>
>> >> However, while testing this, I've observed a dentry still in use warning during
>> >> unmount of rpc_pipefs on the "nfs" dentry during shutdown. NFS is of course in
>> >> use, and I assume that fsnotify must have been used to trigger this. The error
>> >> is not there on mainline without my patch so it's definitely caused by this
>> >> code. I'll continue debugging it but I wanted to share my first take on this so
>> >> you could take a look.
>> >>
>> >> [ 1595.197339] BUG: Dentry 000000005f5e7197{i=67,n=nfs}  still in use (2) [unmount of rpc_pipefs rpc_pipefs]
>> >>
>> >
>> > Hmm, the assumption we made about partial stability of d_subdirs
>> > under dir inode lock looks incorrect for rpc_pipefs.
>> > None of the functions that update the rpc_pipefs dcache take the parent
>> > inode lock.
>>
>> That may be, but I'm confused how that would trigger this issue. If I'm
>> understanding correctly, this warning indicates a reference counting
>> bug.
>
> Yes.
> On generic_shutdown_super() there should be no more
> references to dentries.
>
>>
>> If __fsnotify_update_child_dentry_flags() had gone to sleep and the list
>> were edited, then it seems like there could be only two possibilities
>> that could cause bugs:
>>
>> 1. The dentry we slept holding a reference to was removed from the list,
>> and maybe moved to a different one, or just removed. If that were the
>> case, we're quite unlucky, because we'll start looping indefinitely as
>> we'll never get back to the beginning of the list, or worse.
>>
>> 2. A dentry adjacent to the one we held a reference to was removed. In
>> that case, our dentry's d_child pointers should get rearranged, and when
>> we wake, we should see those updates and continue.
>>
>> In neither of those cases do I understand where we could have done a
>> dget() unpaired with a dput(), which is what seemingly would trigger
>> this issue.
>>
>
> I got the same impression.

Well I feel stupid. The reason behind this seems to be... that
d_find_any_alias() returns a reference to the dentry, and I promptly
leaked that. I'll have it fixed in v3 which I'm going through testing
now.
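
In other words, the missing pairing was roughly this (simplified sketch
with a made-up function name, not the exact hunk going into v3):

#include <linux/dcache.h>
#include <linux/fs.h>

/* Simplified sketch, not the real code; only the dget/dput pairing matters. */
static void fsnotify_update_flags_on_alias(struct inode *inode)
{
        struct dentry *alias;

        /* d_find_any_alias() returns the alias with an elevated refcount */
        alias = d_find_any_alias(inode);
        if (!alias)
                return;

        /* ... update the child dentry flags under alias here ... */

        dput(alias);    /* the pairing that was missing, hence "still in use" */
}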

Stephen
Amir Goldstein Oct. 28, 2022, 8:58 a.m. UTC | #5
>
> Well I feel stupid. The reason behind this seems to be... that
> d_find_any_alias() returns a reference to the dentry, and I promptly
> leaked that. I'll have it fixed in v3 which I'm going through testing
> now.
>

I reckon if you ran the LTP fsnotify tests, you would have seen this
warning a lot more often, instead of just on one random pseudo filesystem
that some process is probably setting a watch on...

You should run the existing LTP tests to check for regressions.
The fanotify/inotify test cases in LTP are easy to run; for example,
run make in testcases/kernel/syscalls/fanotify and execute the
individual ./fanotify* executables.

If you point me to a branch, I can run the tests until you get
your LTP setup ready.

Thanks,
Amir.