Message ID | 20231109062056.3181775-15-viro@zeniv.linux.org.uk (mailing list archive)
---|---
State | New, archived
Series | [01/22] struct dentry: get rid of randomize_layout idiocy
On Thu, Nov 09, 2023 at 06:20:49AM +0000, Al Viro wrote:
> Instead of bumping it from 0 to 1, calling retain_dentry(), then
> decrementing it back to 0 (with ->d_lock held all the way through),
> just leave refcount at 0 through all of that.
>
> It will have a visible effect for ->d_delete() - now it can be
> called with refcount 0 instead of 1 and it can no longer play
> silly buggers with dropping/regaining ->d_lock. Not that any
> in-tree instances tried to (it's pretty hard to get right).
>
> Any out-of-tree ones will have to adjust (assuming they need any
> changes).
>
> Note that we do not need to extend rcu-critical area here - we have
> verified that refcount is non-negative after having grabbed ->d_lock,
> so nobody will be able to free dentry until they get into __dentry_kill(),
> which won't happen until they manage to grab ->d_lock.
>
> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
> ---

Looks good to me,
Reviewed-by: Christian Brauner <brauner@kernel.org>
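As an aside for porters, here is a minimal sketch of what a conforming ->d_delete() instance looks like under the new rule. The filesystem name and the specific policy are hypothetical (not taken from this series); the point is that the hook now runs with ->d_lock held and the refcount already at 0, so it must decide from in-memory state alone and never drop/regain the lock:

```c
#include <linux/dcache.h>

/*
 * Hypothetical ->d_delete() instance (illustration only, not from
 * this series).  After this patch it is called with ->d_lock held
 * and refcount == 0; it may only inspect the dentry and return a
 * verdict - no dropping/regaining ->d_lock, no sleeping.
 */
static int examplefs_d_delete(const struct dentry *dentry)
{
	/* unhash negative dentries on the final dput() */
	return d_really_is_negative(dentry);
}

static const struct dentry_operations examplefs_dentry_ops = {
	.d_delete = examplefs_d_delete,
};
```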
```diff
diff --git a/Documentation/filesystems/porting.rst b/Documentation/filesystems/porting.rst
index 58627f0baf3e..6b058362938c 100644
--- a/Documentation/filesystems/porting.rst
+++ b/Documentation/filesystems/porting.rst
@@ -1054,3 +1054,11 @@ The list of children anchored in parent dentry got turned into hlist now.
 Field names got changed (->d_children/->d_sib instead of
 ->d_subdirs/->d_child for anchor/entries resp.), so any affected places
 will be immediately caught by compiler.
+
+---
+
+**mandatory**
+
+	->d_delete() instances are now called for dentries with ->d_lock held
+and refcount equal to 0. They are not permitted to drop/regain ->d_lock.
+None of in-tree instances did anything of that sort. Make sure yours do not...
diff --git a/fs/dcache.c b/fs/dcache.c
index 916b978bfd98..3179156e0ad9 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -888,15 +888,14 @@ void dput(struct dentry *dentry)
 		}
 
 		/* Slow case: now with the dentry lock held */
-		dentry->d_lockref.count = 1;
 		rcu_read_unlock();
 
 		if (likely(retain_dentry(dentry))) {
-			dentry->d_lockref.count--;
 			spin_unlock(&dentry->d_lock);
 			return;
 		}
 
+		dentry->d_lockref.count = 1;
 		dentry = dentry_kill(dentry);
 	}
 }
@@ -921,13 +920,8 @@ void dput_to_list(struct dentry *dentry, struct list_head *list)
 		return;
 	}
 	rcu_read_unlock();
-	dentry->d_lockref.count = 1;
-	if (!retain_dentry(dentry)) {
-		--dentry->d_lockref.count;
+	if (!retain_dentry(dentry))
 		to_shrink_list(dentry, list);
-	} else {
-		--dentry->d_lockref.count;
-	}
 	spin_unlock(&dentry->d_lock);
 }
```
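Put together, the dput() slow path after the first fs/dcache.c hunk reads as below (reconstructed from the diff above; the surrounding function body is elided). Note that retain_dentry() - and through it ->d_delete() - now runs with the refcount still at 0, and the count is set to 1 only on the path that goes on to kill the dentry, where dentry_kill() expects it:

```c
		/* Slow case: now with the dentry lock held */
		rcu_read_unlock();

		if (likely(retain_dentry(dentry))) {
			/* refcount was never raised; just unlock and go */
			spin_unlock(&dentry->d_lock);
			return;
		}

		/* kill path: dentry_kill() expects a held reference */
		dentry->d_lockref.count = 1;
		dentry = dentry_kill(dentry);
```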
Instead of bumping it from 0 to 1, calling retain_dentry(), then
decrementing it back to 0 (with ->d_lock held all the way through),
just leave refcount at 0 through all of that.

It will have a visible effect for ->d_delete() - now it can be
called with refcount 0 instead of 1 and it can no longer play
silly buggers with dropping/regaining ->d_lock. Not that any
in-tree instances tried to (it's pretty hard to get right).

Any out-of-tree ones will have to adjust (assuming they need any
changes).

Note that we do not need to extend rcu-critical area here - we have
verified that refcount is non-negative after having grabbed ->d_lock,
so nobody will be able to free dentry until they get into __dentry_kill(),
which won't happen until they manage to grab ->d_lock.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
---
 Documentation/filesystems/porting.rst |  8 ++++++++
 fs/dcache.c                           | 10 ++--------
 2 files changed, 10 insertions(+), 8 deletions(-)
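Likewise, the tail of dput_to_list() as it ends up after the second fs/dcache.c hunk (again reconstructed from the diff above) - the transient 0->1->0 refcount dance is gone entirely:

```c
	rcu_read_unlock();
	/* refcount stays at 0 across retain_dentry() now */
	if (!retain_dentry(dentry))
		to_shrink_list(dentry, list);
	spin_unlock(&dentry->d_lock);
}
```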