
[13/16] btrfs: __set_extent_bit, try preallocation out of locked section with lighter gfp flags

Message ID 06663d10e04cd3016ec41987c2e7275ca21ce80c.1461920675.git.dsterba@suse.com (mailing list archive)
State Superseded

Commit Message

David Sterba April 29, 2016, 9:20 a.m. UTC
In __set_extent_bit we allocate with GFP_ATOMIC while the tree lock is
held, which takes away the allocator's opportunities to satisfy the
allocation. In some cases we leave the locked section, where we could
repeat the preallocation with less strict flags. This could lead to an
unnecessary allocation, but we won't fail until we really need it.

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/extent_io.c | 7 +++++++
 1 file changed, 7 insertions(+)
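
The pattern the patch is after, as a minimal standalone sketch (struct
my_state, need_new_state() and insert_state() are made-up illustrative
names, not btrfs API): preallocate with blocking GFP_NOFS in the window
where the spinlock is dropped, so that GFP_ATOMIC under the lock becomes
a rarely-needed fallback.

	struct my_state *prealloc = NULL;

again:
	/* Lock not held here: a blocking GFP_NOFS allocation is allowed. */
	if (!prealloc)
		prealloc = kmalloc(sizeof(*prealloc), GFP_NOFS);

	spin_lock(&tree->lock);
	if (need_new_state(tree)) {
		if (!prealloc) {
			/* GFP_NOFS failed; retry outside the lock. */
			spin_unlock(&tree->lock);
			goto again;
		}
		insert_state(tree, prealloc);
		prealloc = NULL;
	}
	spin_unlock(&tree->lock);
	kfree(prealloc);	/* free an unused preallocation, if any */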

Comments

Filipe Manana April 29, 2016, 10 a.m. UTC | #1
On Fri, Apr 29, 2016 at 10:20 AM, David Sterba <dsterba@suse.com> wrote:
> In __set_extent_bit we allocate with GFP_ATOMIC while the tree lock is
> held, which takes away the allocator's opportunities to satisfy the
> allocation. In some cases we leave the locked section, where we could
> repeat the preallocation with less strict flags. This could lead to an
> unnecessary allocation, but we won't fail until we really need it.
>
> Signed-off-by: David Sterba <dsterba@suse.com>
> ---
>  fs/btrfs/extent_io.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index 8707bcc615ff..06ad442f6c03 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -1049,6 +1049,13 @@ __set_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
>         spin_unlock(&tree->lock);
>         if (gfpflags_allow_blocking(mask))
>                 cond_resched();
> +       /*
> +        * If we used the preallocated state, try again here out of the
> +        * locked section so we can avoid GFP_ATOMIC. No error checking
> +        * as we might not need it in the end.
> +        */
> +       if (!prealloc)
> +               prealloc = alloc_extent_state(mask);

Under the again label we already try to allocate in non-atomic mode
(gfpflags_allow_blocking() returns true for GFP_NOFS), and another
patch in the series removes the BUG_ON() there.
So this extra allocation attempt seems unnecessary to me. Or did I
miss something?
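
For context, the preallocation under the again label in 4.5-era
fs/btrfs/extent_io.c looks approximately like this (paraphrased, check
the tree for the exact form):

again:
	if (!prealloc && gfpflags_allow_blocking(mask)) {
		prealloc = alloc_extent_state(mask);
		BUG_ON(!prealloc);
	}

	spin_lock(&tree->lock);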

At least I'm seeing the following in 4.5's gfp.h:

#define __GFP_DIRECT_RECLAIM ((__force gfp_t)___GFP_DIRECT_RECLAIM) /* Caller can reclaim */
#define __GFP_RECLAIM ((__force gfp_t)(___GFP_DIRECT_RECLAIM|___GFP_KSWAPD_RECLAIM))
#define GFP_NOFS (__GFP_RECLAIM | __GFP_IO)

static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
{
     return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
}

So:

GFP_NOFS == ___GFP_DIRECT_RECLAIM | ___GFP_KSWAPD_RECLAIM | __GFP_IO
__GFP_DIRECT_RECLAIM == ___GFP_DIRECT_RECLAIM

So, as we always use GFP_NOFS for the gfp flags value...
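
A standalone userspace demo of that arithmetic (the bit values are
stand-ins chosen to match the 4.5 layout from memory; the conclusion
only depends on GFP_NOFS including __GFP_DIRECT_RECLAIM):

#include <stdbool.h>
#include <stdio.h>

typedef unsigned int gfp_t;

/* Illustrative bit values, not authoritative kernel constants. */
#define ___GFP_IO             0x40u
#define ___GFP_DIRECT_RECLAIM 0x400000u
#define ___GFP_KSWAPD_RECLAIM 0x2000000u

#define __GFP_DIRECT_RECLAIM ((gfp_t)___GFP_DIRECT_RECLAIM)
#define __GFP_IO             ((gfp_t)___GFP_IO)
#define __GFP_RECLAIM        ((gfp_t)(___GFP_DIRECT_RECLAIM | ___GFP_KSWAPD_RECLAIM))
#define GFP_NOFS             (__GFP_RECLAIM | __GFP_IO)

static bool gfpflags_allow_blocking(gfp_t gfp_flags)
{
	return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
}

int main(void)
{
	/* Prints 1: the non-atomic path is always available for GFP_NOFS. */
	printf("%d\n", gfpflags_allow_blocking(GFP_NOFS));
	return 0;
}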

Same comment goes for the other similar patches.



>         goto again;
>
>  out:
> --
> 2.7.1
>
David Sterba April 29, 2016, 11:46 a.m. UTC | #2
On Fri, Apr 29, 2016 at 11:00:34AM +0100, Filipe Manana wrote:
> On Fri, Apr 29, 2016 at 10:20 AM, David Sterba <dsterba@suse.com> wrote:
> > In __set_extent_bit we allocate with GFP_ATOMIC while the tree lock is
> > held, which takes away the allocator's opportunities to satisfy the
> > allocation. In some cases we leave the locked section, where we could
> > repeat the preallocation with less strict flags. This could lead to an
> > unnecessary allocation, but we won't fail until we really need it.
> >
> > Signed-off-by: David Sterba <dsterba@suse.com>
> > ---
> >  fs/btrfs/extent_io.c | 7 +++++++
> >  1 file changed, 7 insertions(+)
> >
> > diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> > index 8707bcc615ff..06ad442f6c03 100644
> > --- a/fs/btrfs/extent_io.c
> > +++ b/fs/btrfs/extent_io.c
> > @@ -1049,6 +1049,13 @@ __set_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
> >         spin_unlock(&tree->lock);
> >         if (gfpflags_allow_blocking(mask))
> >                 cond_resched();
> > +       /*
> > +        * If we used the preallocated state, try again here out of the
> > +        * locked section so we can avoid GFP_ATOMIC. No error checking
> > +        * as we might not need it in the end.
> > +        */
> > +       if (!prealloc)
> > +               prealloc = alloc_extent_state(mask);
> 
> Under the again label we already try to allocate in non-atomic mode
> (gfpflags_allow_blocking() returns true for GFP_NOFS), and another
> patch in the series removes the BUG_ON() there.
> So this extra allocation attempt seems unnecessary to me. Or did I
> miss something?

No, you're right, I've duplicated the preallocation; the
gfpflags_allow_blocking check gets dropped in the last patch.
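
Presumably the again label then becomes an unconditional,
failure-tolerant preallocation, roughly:

again:
	if (!prealloc) {
		/* May fail; only checked at the point of actual use. */
		prealloc = alloc_extent_state(mask);
	}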

> At least I'm seeing the following in 4.5's gfp.h:
> 
> #define __GFP_DIRECT_RECLAIM ((__force gfp_t)___GFP_DIRECT_RECLAIM) /* Caller can reclaim */
> #define __GFP_RECLAIM ((__force gfp_t)(___GFP_DIRECT_RECLAIM|___GFP_KSWAPD_RECLAIM))
> #define GFP_NOFS (__GFP_RECLAIM | __GFP_IO)
> 
> static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
> {
>      return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
> }
> 
> So:
> 
> GFP_NOFS == ___GFP_DIRECT_RECLAIM | ___GFP_KSWAPD_RECLAIM | __GFP_IO
> __GFP_DIRECT_RECLAIM == ___GFP_DIRECT_RECLAIM
> 
> So, as we always use GFP_NOFS for the gfp flags value...
> 
> Same comment goes for the other similar patches.

I'll drop the patches, thanks for spotting it.

Patch

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 8707bcc615ff..06ad442f6c03 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1049,6 +1049,13 @@  __set_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
 	spin_unlock(&tree->lock);
 	if (gfpflags_allow_blocking(mask))
 		cond_resched();
+	/*
+	 * If we used the preallocated state, try again here out of the
+	 * locked section so we can avoid GFP_ATOMIC. No error checking
+	 * as we might not need it in the end.
+	 */
+	if (!prealloc)
+		prealloc = alloc_extent_state(mask);
 	goto again;
 
 out: