Message ID | 20190726074705.27513-1-naohiro.aota@wdc.com
---|---
State | New, archived
Series | btrfs: fix extent_state leak in btrfs_lock_and_flush_ordered_range
On 26.07.19 г. 10:47 ч., Naohiro Aota wrote:
> btrfs_lock_and_flush_ordered_range() loads the given "*cached_state" into
> cachedp, which, in general, is NULL. Then, lock_extent_bits() updates
> "cachedp", but the update never goes back to the caller. Thus the caller
> still sees its "cached_state" as NULL and never frees the state allocated
> under btrfs_lock_and_flush_ordered_range(). As a result, we see a massive
> state leak with e.g. fstests btrfs/005. Fix this bug by properly handling
> the pointers.
>
> Fixes: bd80d94efb83 ("btrfs: Always use a cached extent_state in btrfs_lock_and_flush_ordered_range")
> Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>

Reviewed-by: Nikolay Borisov <nborisov@suse.com>

> ---
>  fs/btrfs/ordered-data.c | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
> index df02ed25b7db..ab31b1a1b624 100644
> --- a/fs/btrfs/ordered-data.c
> +++ b/fs/btrfs/ordered-data.c
> @@ -982,13 +982,14 @@ void btrfs_lock_and_flush_ordered_range(struct extent_io_tree *tree,
>  					struct extent_state **cached_state)
>  {
>  	struct btrfs_ordered_extent *ordered;
> -	struct extent_state *cachedp = NULL;
> +	struct extent_state *cache = NULL;
> +	struct extent_state **cachedp = &cache;
>
>  	if (cached_state)
> -		cachedp = *cached_state;
> +		cachedp = cached_state;
>
>  	while (1) {
> -		lock_extent_bits(tree, start, end, &cachedp);
> +		lock_extent_bits(tree, start, end, cachedp);
>  		ordered = btrfs_lookup_ordered_range(inode, start,
>  						     end - start + 1);
>  		if (!ordered) {
> @@ -998,10 +999,10 @@ void btrfs_lock_and_flush_ordered_range(struct extent_io_tree *tree,
>  			 * aren't exposing it outside of this function
>  			 */
>  			if (!cached_state)
> -				refcount_dec(&cachedp->refs);
> +				refcount_dec(&cache->refs);
>  			break;
>  		}
> -		unlock_extent_cached(tree, start, end, &cachedp);
> +		unlock_extent_cached(tree, start, end, cachedp);
>  		btrfs_start_ordered_extent(&inode->vfs_inode, ordered, 1);
>  		btrfs_put_ordered_extent(ordered);
>  	}
On Fri, Jul 26, 2019 at 04:47:05PM +0900, Naohiro Aota wrote:
> btrfs_lock_and_flush_ordered_range() loads the given "*cached_state" into
> cachedp, which, in general, is NULL. Then, lock_extent_bits() updates
> "cachedp", but the update never goes back to the caller. Thus the caller
> still sees its "cached_state" as NULL and never frees the state allocated
> under btrfs_lock_and_flush_ordered_range(). As a result, we see a massive
> state leak with e.g. fstests btrfs/005. Fix this bug by properly handling
> the pointers.
>
> Fixes: bd80d94efb83 ("btrfs: Always use a cached extent_state in btrfs_lock_and_flush_ordered_range")
> Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>

Queued for 5.3, thanks.
```diff
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index df02ed25b7db..ab31b1a1b624 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -982,13 +982,14 @@ void btrfs_lock_and_flush_ordered_range(struct extent_io_tree *tree,
 					struct extent_state **cached_state)
 {
 	struct btrfs_ordered_extent *ordered;
-	struct extent_state *cachedp = NULL;
+	struct extent_state *cache = NULL;
+	struct extent_state **cachedp = &cache;
 
 	if (cached_state)
-		cachedp = *cached_state;
+		cachedp = cached_state;
 
 	while (1) {
-		lock_extent_bits(tree, start, end, &cachedp);
+		lock_extent_bits(tree, start, end, cachedp);
 		ordered = btrfs_lookup_ordered_range(inode, start,
 						     end - start + 1);
 		if (!ordered) {
@@ -998,10 +999,10 @@ void btrfs_lock_and_flush_ordered_range(struct extent_io_tree *tree,
 			 * aren't exposing it outside of this function
 			 */
 			if (!cached_state)
-				refcount_dec(&cachedp->refs);
+				refcount_dec(&cache->refs);
 			break;
 		}
-		unlock_extent_cached(tree, start, end, &cachedp);
+		unlock_extent_cached(tree, start, end, cachedp);
 		btrfs_start_ordered_extent(&inode->vfs_inode, ordered, 1);
 		btrfs_put_ordered_extent(ordered);
 	}
```
btrfs_lock_and_flush_ordered_range() loads the given "*cached_state" into
cachedp, which, in general, is NULL. Then, lock_extent_bits() updates
"cachedp", but the update never goes back to the caller. Thus the caller
still sees its "cached_state" as NULL and never frees the state allocated
under btrfs_lock_and_flush_ordered_range(). As a result, we see a massive
state leak with e.g. fstests btrfs/005. Fix this bug by properly handling
the pointers.

Fixes: bd80d94efb83 ("btrfs: Always use a cached extent_state in btrfs_lock_and_flush_ordered_range")
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
 fs/btrfs/ordered-data.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)
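For readers less familiar with the pointer indirection involved, here is a minimal, self-contained C sketch of the bug and the fix. It is not kernel code: struct state, lock_range(), flush_range_buggy() and flush_range_fixed() are made-up stand-ins for extent_state, lock_extent_bits() and the pre/post-fix shapes of btrfs_lock_and_flush_ordered_range(); they only illustrate why copying "*cached_state" by value loses the allocation, while aliasing the caller's slot through a pointer-to-pointer does not.

```c
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for struct extent_state: just something that gets allocated. */
struct state {
	int refs;
};

/*
 * Models lock_extent_bits(): if the caller's slot is empty, allocate a
 * state and publish it through the caller-supplied double pointer.
 */
static void lock_range(struct state **cached)
{
	if (!*cached)
		*cached = calloc(1, sizeof(**cached));
	(*cached)->refs++;
}

/*
 * Buggy shape (pre-fix): *cached_state is copied by value into a local
 * pointer, so the state allocated inside lock_range() is visible only
 * through the local copy and leaks when this function returns.
 */
static void flush_range_buggy(struct state **cached_state)
{
	struct state *cachedp = NULL;

	if (cached_state)
		cachedp = *cached_state;	/* copies NULL, not the slot */
	lock_range(&cachedp);			/* updates the local copy only */
	/* caller's *cached_state is still NULL here -> leak */
}

/*
 * Fixed shape (as in the patch): cachedp aliases either the caller's slot
 * or a private local one, so whatever lock_range() stores always lands
 * somewhere it can later be released.
 */
static void flush_range_fixed(struct state **cached_state)
{
	struct state *cache = NULL;
	struct state **cachedp = &cache;

	if (cached_state)
		cachedp = cached_state;
	lock_range(cachedp);
	if (!cached_state)
		free(cache);	/* private state, drop it before returning */
}

int main(void)
{
	struct state *s = NULL;

	flush_range_buggy(&s);
	printf("buggy: caller sees %p (allocation leaked)\n", (void *)s);

	s = NULL;
	flush_range_fixed(&s);
	printf("fixed: caller sees %p\n", (void *)s);
	free(s);
	return 0;
}
```

The design point of the fix is the same as in this sketch: cachedp always points at exactly one slot, either the caller's cached_state or the private local cache, so the locking helper has a single place to publish the state, and the function knows whether that place outlives it and therefore whether it must drop the reference itself.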