Message ID | 20231109062056.3181775-10-viro@zeniv.linux.org.uk (mailing list archive)
---|---
State | New, archived
Series | [01/22] struct dentry: get rid of randomize_layout idiocy
On Thu, Nov 09, 2023 at 06:20:44AM +0000, Al Viro wrote:
> Currently the "need caller to do more work" path in fast_dput()
> has refcount decremented, then, with ->d_lock held and
> refcount verified to have reached 0 fast_dput() forcibly resets
> the refcount to 1.
>
> Move that resetting refcount to 1 into the callers; later in
> the series it will be massaged out of existence.
>
> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
> ---

Ok, this is safe to do because of

[PATCH 09/22] fast_dput(): handle underflows gracefully
https://lore.kernel.org/linux-fsdevel/20231109062056.3181775-9-viro@zeniv.linux.org.uk

as return false from fast_dput() now always means refcount is zero.

Reviewed-by: Christian Brauner <brauner@kernel.org>

>  fs/dcache.c | 9 ++-------
>  1 file changed, 2 insertions(+), 7 deletions(-)
>
> diff --git a/fs/dcache.c b/fs/dcache.c
> index e02b3c81bc02..9a3eeee02500 100644
> --- a/fs/dcache.c
> +++ b/fs/dcache.c
> @@ -847,13 +847,6 @@ static inline bool fast_dput(struct dentry *dentry)
>  		spin_unlock(&dentry->d_lock);
>  		return true;
>  	}
> -
> -	/*
> -	 * Re-get the reference we optimistically dropped. We hold the
> -	 * lock, and we just tested that it was zero, so we can just
> -	 * set it to 1.
> -	 */
> -	dentry->d_lockref.count = 1;
>  	return false;
>  }
>
> @@ -896,6 +889,7 @@ void dput(struct dentry *dentry)
>  	}
>
>  	/* Slow case: now with the dentry lock held */
> +	dentry->d_lockref.count = 1;
>  	rcu_read_unlock();
>
>  	if (likely(retain_dentry(dentry))) {
> @@ -930,6 +924,7 @@ void dput_to_list(struct dentry *dentry, struct list_head *list)
>  		return;
>  	}
>  	rcu_read_unlock();
> +	dentry->d_lockref.count = 1;
>  	if (!retain_dentry(dentry))
>  		__dput_to_list(dentry, list);
>  	spin_unlock(&dentry->d_lock);
> --
> 2.39.2
>
On Thu, Nov 09, 2023 at 03:54:34PM +0100, Christian Brauner wrote:
> On Thu, Nov 09, 2023 at 06:20:44AM +0000, Al Viro wrote:
> > Currently the "need caller to do more work" path in fast_dput()
> > has refcount decremented, then, with ->d_lock held and
> > refcount verified to have reached 0 fast_dput() forcibly resets
> > the refcount to 1.
> >
> > Move that resetting refcount to 1 into the callers; later in
> > the series it will be massaged out of existence.
> >
> > Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
> > ---
>
> Ok, this is safe to do because of
>
> [PATCH 09/22] fast_dput(): handle underflows gracefully
> https://lore.kernel.org/linux-fsdevel/20231109062056.3181775-9-viro@zeniv.linux.org.uk
>
> as return false from fast_dput() now always means refcount is zero.

Not sure how to put it in commit message cleanly.  Perhaps something like
the following variant?

	By now there is only one place in entire fast_dput() where we return
	false; that happens after refcount had been decremented and found
	(while holding ->d_lock) to be zero.  In that case, just prior to
	returning false to caller, fast_dput() forcibly changes the refcount
	from 0 to 1.

	Lift that resetting refcount to 1 into the callers; later in the
	series it will be massaged out of existence.
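For readers outside the thread, the contract change being discussed can be sketched as a tiny userspace model. This is hypothetical illustration only: `toy_dentry`, `toy_fast_dput()` and `toy_dput()` are made-up names, and the real code in fs/dcache.c involves struct lockref, ->d_lock and RCU, none of which is modeled here. The point shown is just the one the patch makes: after this change, a false return from fast_dput() always means the refcount is exactly zero, and it is the caller that re-takes the optimistically dropped reference.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for struct dentry; "count" models dentry->d_lockref.count. */
struct toy_dentry {
	int count;
};

/*
 * Post-patch shape of fast_dput(): decrement the count, and if it did not
 * reach zero we are done.  Returning false no longer resets the count to 1;
 * it simply reports "count is zero, caller must do more work".
 */
static bool toy_fast_dput(struct toy_dentry *dentry)
{
	dentry->count--;		/* optimistic decrement */
	if (dentry->count > 0)
		return true;		/* fast path: nothing more to do */
	/* in the kernel, ->d_lock is taken and the count re-verified here */
	return false;
}

/*
 * Caller side, mirroring the change to dput(): the forcible reset of the
 * refcount from 0 to 1 now lives here, in the slow path.
 */
static void toy_dput(struct toy_dentry *dentry)
{
	if (toy_fast_dput(dentry))
		return;
	/* Slow case: re-take the reference we optimistically dropped. */
	dentry->count = 1;
	/* ... retain_dentry() / shrink-list handling would follow ... */
}
```

Under this model, a drop from 2 leaves the count at 1 via the fast path, while a drop from 1 hits the slow path and ends with the caller having reset the count to 1, which is exactly the state the old in-fast_dput() reset produced.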
diff --git a/fs/dcache.c b/fs/dcache.c
index e02b3c81bc02..9a3eeee02500 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -847,13 +847,6 @@ static inline bool fast_dput(struct dentry *dentry)
 		spin_unlock(&dentry->d_lock);
 		return true;
 	}
-
-	/*
-	 * Re-get the reference we optimistically dropped. We hold the
-	 * lock, and we just tested that it was zero, so we can just
-	 * set it to 1.
-	 */
-	dentry->d_lockref.count = 1;
 	return false;
 }

@@ -896,6 +889,7 @@ void dput(struct dentry *dentry)
 	}

 	/* Slow case: now with the dentry lock held */
+	dentry->d_lockref.count = 1;
 	rcu_read_unlock();

 	if (likely(retain_dentry(dentry))) {
@@ -930,6 +924,7 @@ void dput_to_list(struct dentry *dentry, struct list_head *list)
 		return;
 	}
 	rcu_read_unlock();
+	dentry->d_lockref.count = 1;
 	if (!retain_dentry(dentry))
 		__dput_to_list(dentry, list);
 	spin_unlock(&dentry->d_lock);
Currently the "need caller to do more work" path in fast_dput()
has refcount decremented, then, with ->d_lock held and
refcount verified to have reached 0 fast_dput() forcibly resets
the refcount to 1.

Move that resetting refcount to 1 into the callers; later in
the series it will be massaged out of existence.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
---
 fs/dcache.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)