Message ID | ef83bcab5822e599620da402f6164df072853997.1461920675.git.dsterba@suse.com (mailing list archive)
---|---
State | Superseded
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 994b4a757ed1..092f697470d8 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1288,6 +1288,13 @@ int convert_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
 	spin_unlock(&tree->lock);
 	if (gfpflags_allow_blocking(mask))
 		cond_resched();
+	/*
+	 * If we used the preallocated state, try again here out of the
+	 * locked section so we can avoid GFP_ATOMIC. No error checking
+	 * as we might not need it in the end.
+	 */
+	if (!prealloc)
+		prealloc = alloc_extent_state(mask);
 	first_iteration = false;
 	goto again;
In convert_extent_bit we allocate with GFP_ATOMIC while the tree lock is
held, which takes away the allocator's opportunities to satisfy the
allocation. In some cases we leave the locked section, and there we can
repeat the preallocation with less strict flags. This could lead to an
unnecessary allocation, but we won't fail until we really need the memory.

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/extent_io.c | 7 +++++++
 1 file changed, 7 insertions(+)
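
For context, below is a minimal, self-contained sketch of the preallocation
pattern the patch extends: allocate speculatively with the caller's
(possibly blocking) mask outside the spinlock, and fall back to GFP_ATOMIC
only when memory turns out to be needed while the lock is held. This is not
the btrfs code; the struct my_state type, the my_lock spinlock and the
helpers need_new_state(), insert_state() and more_work_pending() are
hypothetical stand-ins for the real extent tree manipulation.

/*
 * Illustrative sketch only (hypothetical names, not the btrfs code):
 * opportunistic preallocation outside a spinlock, GFP_ATOMIC as the
 * last-resort fallback under the lock.
 */
struct my_state { u64 start, end; };

static DEFINE_SPINLOCK(my_lock);

static int process_range(gfp_t mask)
{
	struct my_state *prealloc = NULL;
	int ret = 0;

again:
	/* Outside the lock we may block, so the caller's mask is fine. */
	if (!prealloc)
		prealloc = kmalloc(sizeof(*prealloc), mask);

	spin_lock(&my_lock);
	if (need_new_state()) {
		if (!prealloc) {
			/* Last resort: we cannot sleep under the spinlock. */
			prealloc = kmalloc(sizeof(*prealloc), GFP_ATOMIC);
			if (!prealloc) {
				ret = -ENOMEM;
				goto out;
			}
		}
		insert_state(prealloc);
		prealloc = NULL;
	}

	if (more_work_pending()) {
		spin_unlock(&my_lock);
		if (gfpflags_allow_blocking(mask))
			cond_resched();
		/*
		 * Refill the preallocation out of the locked section; no
		 * error checking, the next iteration may not need it.
		 */
		if (!prealloc)
			prealloc = kmalloc(sizeof(*prealloc), mask);
		goto again;
	}
out:
	spin_unlock(&my_lock);
	kfree(prealloc);	/* free an unused leftover, if any */
	return ret;
}

The patch applies the same idea to convert_extent_bit: right before jumping
back to the locked section it refills the prealloc pointer with the less
strict mask and ignores failures, so the GFP_ATOMIC path inside the lock is
only hit when that optimistic refill did not succeed.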