Message ID: 20240204021739.1157830-6-viro@zeniv.linux.org.uk (mailing list archive)
State: New, archived
Series: [01/13] fs/super.c: don't drop ->s_user_ns until we free struct super_block itself
On Sun, Feb 04, 2024 at 02:17:32AM +0000, Al Viro wrote:
> In __afs_break_callback() we might check ->cb_nr_mmap and if it's non-zero
> do queue_work(&vnode->cb_work).  In afs_drop_open_mmap() we decrement
> ->cb_nr_mmap and do flush_work(&vnode->cb_work) if it reaches zero.
>
> The trouble is, there's nothing to prevent __afs_break_callback() from
> seeing ->cb_nr_mmap before the decrement and do queue_work() after both
> the decrement and flush_work().  If that happens, we might be in trouble -
> vnode might get freed before the queued work runs.
>
> __afs_break_callback() is always done under ->cb_lock, so let's make
> sure that ->cb_nr_mmap can change from non-zero to zero while holding
> ->cb_lock (the spinlock component of it - it's a seqlock and we don't
> need to mess with the counter).
>
> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
> ---

Acked-by: Christian Brauner <brauner@kernel.org>
diff --git a/fs/afs/file.c b/fs/afs/file.c
index 3d33b221d9ca..ef2cc8f565d2 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -417,13 +417,17 @@ static void afs_add_open_mmap(struct afs_vnode *vnode)
 
 static void afs_drop_open_mmap(struct afs_vnode *vnode)
 {
-	if (!atomic_dec_and_test(&vnode->cb_nr_mmap))
+	if (atomic_add_unless(&vnode->cb_nr_mmap, -1, 1))
 		return;
 
 	down_write(&vnode->volume->open_mmaps_lock);
 
-	if (atomic_read(&vnode->cb_nr_mmap) == 0)
+	read_seqlock_excl(&vnode->cb_lock);
+	// the only place where ->cb_nr_mmap may hit 0
+	// see __afs_break_callback() for the other side...
+	if (atomic_dec_and_test(&vnode->cb_nr_mmap))
 		list_del_init(&vnode->cb_mmap_link);
+	read_sequnlock_excl(&vnode->cb_lock);
 
 	up_write(&vnode->volume->open_mmaps_lock);
 	flush_work(&vnode->cb_work);
In __afs_break_callback() we might check ->cb_nr_mmap and if it's non-zero
do queue_work(&vnode->cb_work).  In afs_drop_open_mmap() we decrement
->cb_nr_mmap and do flush_work(&vnode->cb_work) if it reaches zero.

The trouble is, there's nothing to prevent __afs_break_callback() from
seeing ->cb_nr_mmap before the decrement and do queue_work() after both
the decrement and flush_work().  If that happens, we might be in trouble -
vnode might get freed before the queued work runs.

__afs_break_callback() is always done under ->cb_lock, so let's make sure
that ->cb_nr_mmap can change from non-zero to zero while holding ->cb_lock
(the spinlock component of it - it's a seqlock and we don't need to mess
with the counter).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
---
 fs/afs/file.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)