Message ID | 20241024023142.25127-1-robbieko@synology.com (mailing list archive) |
---|---|
State | New |
Series | btrfs: reduce extent tree lock contention when searching for inline backref |
On Thu, Oct 24, 2024 at 3:32 AM robbieko <robbieko@synology.com> wrote:
>
> From: Robbie Ko <robbieko@synology.com>
>
> When inserting an extent backref, in order to check whether refs other
> than inline refs are used, we always keep the path locks held
> (keep_locks=1) during the tree search, which increases lock contention
> on the extent tree.
>
> We do not need the parent node every time to determine whether normal
> refs are used; it is only needed when the extent item is the last item
> in a leaf.
>
> Therefore, change to search with keep_locks=0 first. If the extent
> item happens to be the last item in the leaf, search again with
> keep_locks=1 to reduce lock contention.
>
> Signed-off-by: Robbie Ko <robbieko@synology.com>

Reviewed-by: Filipe Manana <fdmanana@suse.com>

Looks good now, thanks.

> ---
>  fs/btrfs/extent-tree.c | 26 +++++++++++++++++++++++---
>  1 file changed, 23 insertions(+), 3 deletions(-)
>
> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> index a5966324607d..54d149a41506 100644
> --- a/fs/btrfs/extent-tree.c
> +++ b/fs/btrfs/extent-tree.c
> @@ -795,7 +795,6 @@ int lookup_inline_extent_backref(struct btrfs_trans_handle *trans,
>          if (insert) {
>                  extra_size = btrfs_extent_inline_ref_size(want);
>                  path->search_for_extension = 1;
> -                path->keep_locks = 1;
>          } else
>                  extra_size = -1;
>
> @@ -946,6 +945,25 @@ int lookup_inline_extent_backref(struct btrfs_trans_handle *trans,
>                  ret = -EAGAIN;
>                  goto out;
>          }
> +
> +        if (path->slots[0] + 1 < btrfs_header_nritems(path->nodes[0])) {
> +                struct btrfs_key tmp_key;
> +
> +                btrfs_item_key_to_cpu(path->nodes[0], &tmp_key, path->slots[0] + 1);
> +                if (tmp_key.objectid == bytenr &&
> +                    tmp_key.type < BTRFS_BLOCK_GROUP_ITEM_KEY) {
> +                        ret = -EAGAIN;
> +                        goto out;
> +                }
> +                goto enoent;
> +        }
> +
> +        if (!path->keep_locks) {
> +                btrfs_release_path(path);
> +                path->keep_locks = 1;
> +                goto again;
> +        }
> +
>          /*
>           * To add new inline back ref, we have to make sure
>           * there is no corresponding back ref item.
> @@ -959,13 +977,15 @@ int lookup_inline_extent_backref(struct btrfs_trans_handle *trans,
>                          goto out;
>                  }
>          }
> +enoent:
>          *ref_ret = (struct btrfs_extent_inline_ref *)ptr;
>  out:
> -        if (insert) {
> +        if (path->keep_locks) {
>                  path->keep_locks = 0;
> -                path->search_for_extension = 0;
>                  btrfs_unlock_up_safe(path, 1);
>          }
> +        if (insert)
> +                path->search_for_extension = 0;
>          return ret;
>  }
>
> --
> 2.17.1
```diff
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index a5966324607d..54d149a41506 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -795,7 +795,6 @@ int lookup_inline_extent_backref(struct btrfs_trans_handle *trans,
         if (insert) {
                 extra_size = btrfs_extent_inline_ref_size(want);
                 path->search_for_extension = 1;
-                path->keep_locks = 1;
         } else
                 extra_size = -1;

@@ -946,6 +945,25 @@ int lookup_inline_extent_backref(struct btrfs_trans_handle *trans,
                 ret = -EAGAIN;
                 goto out;
         }
+
+        if (path->slots[0] + 1 < btrfs_header_nritems(path->nodes[0])) {
+                struct btrfs_key tmp_key;
+
+                btrfs_item_key_to_cpu(path->nodes[0], &tmp_key, path->slots[0] + 1);
+                if (tmp_key.objectid == bytenr &&
+                    tmp_key.type < BTRFS_BLOCK_GROUP_ITEM_KEY) {
+                        ret = -EAGAIN;
+                        goto out;
+                }
+                goto enoent;
+        }
+
+        if (!path->keep_locks) {
+                btrfs_release_path(path);
+                path->keep_locks = 1;
+                goto again;
+        }
+
         /*
          * To add new inline back ref, we have to make sure
          * there is no corresponding back ref item.
@@ -959,13 +977,15 @@ int lookup_inline_extent_backref(struct btrfs_trans_handle *trans,
                         goto out;
                 }
         }
+enoent:
         *ref_ret = (struct btrfs_extent_inline_ref *)ptr;
 out:
-        if (insert) {
+        if (path->keep_locks) {
                 path->keep_locks = 0;
-                path->search_for_extension = 0;
                 btrfs_unlock_up_safe(path, 1);
         }
+        if (insert)
+                path->search_for_extension = 0;
         return ret;
 }
```
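The change boils down to a two-pass search: search with keep_locks=0 first and check the key in the next slot of the same leaf; only when the extent item is the last item in its leaf (so the next key can only be reached through the locked parent) is the search redone with keep_locks=1. The following standalone sketch mirrors that control flow; toy_path, toy_search() and toy_release() are simplified, hypothetical stand-ins for btrfs_path, btrfs_search_slot() and btrfs_release_path(), not the real kernel API:

```c
/* Hypothetical model of the patch's locking pattern; the toy_* names
 * are stand-ins, not btrfs functions. */
#include <stdbool.h>
#include <stdio.h>

struct toy_path {
	int slot;           /* slot of the found item in its leaf */
	int nritems;        /* number of items in that leaf */
	bool keep_locks;    /* keep parent nodes locked after the search */
	bool parent_locked; /* the contended state the patch makes rare */
};

/* Pretend B-tree search: when keep_locks is set, the parent stays
 * locked after the search (the expensive case). */
static void toy_search(struct toy_path *path, int slot, int nritems)
{
	path->slot = slot;
	path->nritems = nritems;
	path->parent_locked = path->keep_locks;
}

static void toy_release(struct toy_path *path)
{
	path->parent_locked = false;
}

/* Mirrors the patch: search with keep_locks=0 first; only when the
 * item is the last one in its leaf, redo the search with keep_locks=1. */
static void lookup(struct toy_path *path, int slot, int nritems)
{
again:
	toy_search(path, slot, nritems);

	if (path->slot + 1 < path->nritems) {
		/* Next key is in the same leaf: no parent lock needed. */
		printf("fast path, parent_locked=%d\n", path->parent_locked);
		return;
	}

	if (!path->keep_locks) {
		/* Item is last in the leaf: the next key lives in another
		 * leaf, reachable only via the parent. Retry with locks. */
		toy_release(path);
		path->keep_locks = true;
		goto again;
	}

	printf("slow path, parent_locked=%d\n", path->parent_locked);
}

int main(void)
{
	struct toy_path p = {0};

	lookup(&p, 3, 10); /* common case: item is not last in its leaf */

	p.keep_locks = false;
	lookup(&p, 9, 10); /* last item in leaf: triggers the retry */
	return 0;
}
```

In the common case the fast path never takes parent locks, which is where the contention reduction comes from; the retry pays the cost of a second search only when the item lands at the end of a leaf.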