
btrfs: reduce lock contention with not always use keep locks when insert extent backref

Message ID 20241023063818.11438-1-robbieko@synology.com (mailing list archive)
State New, archived
Series btrfs: reduce lock contention with not always use keep locks when insert extent backref

Commit Message

robbieko Oct. 23, 2024, 6:38 a.m. UTC
From: Robbie Ko <robbieko@synology.com>

When inserting extent backref, in order to check whether
refs other than inline refs are used, we always use keep
locks for tree search, which will increase the lock
contention of extent-tree.

We do not need the parent node every time to determine
whether normal refs are used.
It is only needed when the extent-item is the last item in leaf.

Therefore, we change to first use keep_locks=0 for search.
If the extent-item happens to be the last item in leaf,
we then change to keep_locks=1 for the second search to
slow down lock contention.

Signed-off-by: Robbie Ko <robbieko@synology.com>
---
 fs/btrfs/extent-tree.c | 27 ++++++++++++++++++++++++---
 1 file changed, 24 insertions(+), 3 deletions(-)
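
Condensed, the change replaces the unconditional keep_locks search with a two-pass pattern: search optimistically without keep_locks, and only when the extent item turns out to be the last item in its leaf release the path and repeat the search with keep_locks set. The sketch below, based on the hunks further down, illustrates that control flow; the surrounding lookup logic is reduced to three hypothetical helpers (search_extent_item(), check_next_item_for_keyed_ref() and check_next_leaf_for_keyed_ref()), so treat it as an illustration rather than the literal patch.

/*
 * Hypothetical helpers standing in for the real lookup code in
 * lookup_inline_extent_backref(); they are not existing btrfs functions.
 */
int search_extent_item(struct btrfs_trans_handle *trans,
		       struct btrfs_path *path, u64 bytenr);
int check_next_item_for_keyed_ref(struct btrfs_path *path, u64 bytenr);
int check_next_leaf_for_keyed_ref(struct btrfs_trans_handle *trans,
				  struct btrfs_path *path, u64 bytenr);

static int lookup_sketch(struct btrfs_trans_handle *trans,
			 struct btrfs_path *path, u64 bytenr)
{
	int ret;

again:
	/* First pass: keep_locks is not set, so parents get unlocked as usual. */
	ret = search_extent_item(trans, path, bytenr);
	if (ret < 0)
		goto out;

	if (path->slots[0] + 1 < btrfs_header_nritems(path->nodes[0])) {
		/*
		 * The extent item is not the last item in its leaf: the next
		 * item in the same leaf already tells us whether a keyed
		 * (non-inline) backref exists, no locked parents required.
		 */
		ret = check_next_item_for_keyed_ref(path, bytenr);
		goto out;
	}

	if (!path->keep_locks) {
		/*
		 * Rare case: the extent item is the last item in its leaf.
		 * Only now pay for keep_locks and redo the search, so that
		 * the parents stay locked and the next leaf can be checked.
		 */
		btrfs_release_path(path);
		path->keep_locks = 1;
		goto again;
	}

	ret = check_next_leaf_for_keyed_ref(trans, path, bytenr);
out:
	/* Undo keep_locks only if the second pass actually ran. */
	if (path->keep_locks) {
		path->keep_locks = 0;
		btrfs_unlock_up_safe(path, 1);
	}
	return ret;
}

The win comes from the fact that the expensive second pass only happens in the rare case where the extent item is the last item of its leaf; every other insertion proceeds without keeping locks on the upper levels of the extent tree.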

Comments

Filipe Manana Oct. 23, 2024, 10:34 a.m. UTC | #1
On Wed, Oct 23, 2024 at 7:39 AM robbieko <robbieko@synology.com> wrote:
>
> From: Robbie Ko <robbieko@synology.com>
>
> When inserting extent backref, in order to check whether
> refs other than inline refs are used, we always use keep
> locks for tree search, which will increase the lock
> contention of extent-tree.

The line length limit is 75, you're using about 50 characters per line.
You can make the lines use more characters which makes things a bit
easier to read.

On the other hand the subject is kind of too long and phrased a bit oddly:

btrfs: reduce lock contention with not always use keep locks when
insert extent backref

I would suggest something like:

btrfs: reduce extent tree lock contention when searching for inline backref

>
> We do not need the parent node every time to determine
> whether normal refs are used.
> It is only needed when the extent-item is the last item in leaf.

in leaf -> in a leaf

>
> Therefore, we change to first use keep_locks=0 for search.
> If the extent-item happens to be the last item in leaf,

in leaf -> in the leaf

> we then change to keep_locks=1 for the second search to
> slow down lock contention.

slow down -> reduce

>
> Signed-off-by: Robbie Ko <robbieko@synology.com>
> ---
>  fs/btrfs/extent-tree.c | 27 ++++++++++++++++++++++++---
>  1 file changed, 24 insertions(+), 3 deletions(-)
>
> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> index a5966324607d..7d5033b20a40 100644
> --- a/fs/btrfs/extent-tree.c
> +++ b/fs/btrfs/extent-tree.c
> @@ -786,6 +786,8 @@ int lookup_inline_extent_backref(struct btrfs_trans_handle *trans,
>         int ret;
>         bool skinny_metadata = btrfs_fs_incompat(fs_info, SKINNY_METADATA);
>         int needed;
> +       bool keep_locks = false;

There's no need to use this variable, we can use path->keep_locks directly.

> +       struct btrfs_key tmp_key;

Please move the declaration of tmp_key to the block where it's used, see below.

>
>         key.objectid = bytenr;
>         key.type = BTRFS_EXTENT_ITEM_KEY;
> @@ -795,7 +797,6 @@ int lookup_inline_extent_backref(struct btrfs_trans_handle *trans,
>         if (insert) {
>                 extra_size = btrfs_extent_inline_ref_size(want);
>                 path->search_for_extension = 1;
> -               path->keep_locks = 1;
>         } else
>                 extra_size = -1;
>
> @@ -946,6 +947,24 @@ int lookup_inline_extent_backref(struct btrfs_trans_handle *trans,
>                         ret = -EAGAIN;
>                         goto out;
>                 }
> +
> +               if (path->slots[0] + 1 < btrfs_header_nritems(path->nodes[0])) {

Here place the declaration of tmp_key, since it's not used outside this block.

Everything else looks good, thanks.

> +                       btrfs_item_key_to_cpu(path->nodes[0], &tmp_key, path->slots[0] + 1);
> +                       if (tmp_key.objectid == bytenr &&
> +                               tmp_key.type < BTRFS_BLOCK_GROUP_ITEM_KEY) {
> +                               ret = -EAGAIN;
> +                               goto out;
> +                       }
> +                       goto enoent;
> +               }
> +
> +               if (!keep_locks) {
> +                       btrfs_release_path(path);
> +                       path->keep_locks = 1;
> +                       keep_locks = true;
> +                       goto again;
> +               }
> +
>                 /*
>                  * To add new inline back ref, we have to make sure
>                  * there is no corresponding back ref item.
> @@ -959,13 +978,15 @@ int lookup_inline_extent_backref(struct btrfs_trans_handle *trans,
>                         goto out;
>                 }
>         }
> +enoent:
>         *ref_ret = (struct btrfs_extent_inline_ref *)ptr;
>  out:
> -       if (insert) {
> +       if (keep_locks) {
>                 path->keep_locks = 0;
> -               path->search_for_extension = 0;
>                 btrfs_unlock_up_safe(path, 1);
>         }
> +       if (insert)
> +               path->search_for_extension = 0;
>         return ret;
>  }
>
> --
> 2.17.1
>
>
robbieko Oct. 24, 2024, 2:20 a.m. UTC | #2
On 2024/10/23 at 6:34 PM, Filipe Manana wrote:
> On Wed, Oct 23, 2024 at 7:39 AM robbieko <robbieko@synology.com> wrote:
>> From: Robbie Ko <robbieko@synology.com>
>>
>> When inserting extent backref, in order to check whether
>> refs other than inline refs are used, we always use keep
>> locks for tree search, which will increase the lock
>> contention of extent-tree.
> The line length limit is 75, you're using about 50 characters per line.
> You can make the lines use more characters which makes things a bit
> easier to read.
>
> On the other hand the subject is kind of too long and phrased a bit oddly:
>
> btrfs: reduce lock contention with not always use keep locks when
> insert extent backref
>
> I would suggest something like:
>
> btrfs: reduce extent tree lock contention when searching for inline backref

Ok, I will modify it and send a new patch.

Thanks.


Patch

diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index a5966324607d..7d5033b20a40 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -786,6 +786,8 @@  int lookup_inline_extent_backref(struct btrfs_trans_handle *trans,
 	int ret;
 	bool skinny_metadata = btrfs_fs_incompat(fs_info, SKINNY_METADATA);
 	int needed;
+	bool keep_locks = false;
+	struct btrfs_key tmp_key;
 
 	key.objectid = bytenr;
 	key.type = BTRFS_EXTENT_ITEM_KEY;
@@ -795,7 +797,6 @@  int lookup_inline_extent_backref(struct btrfs_trans_handle *trans,
 	if (insert) {
 		extra_size = btrfs_extent_inline_ref_size(want);
 		path->search_for_extension = 1;
-		path->keep_locks = 1;
 	} else
 		extra_size = -1;
 
@@ -946,6 +947,24 @@  int lookup_inline_extent_backref(struct btrfs_trans_handle *trans,
 			ret = -EAGAIN;
 			goto out;
 		}
+
+		if (path->slots[0] + 1 < btrfs_header_nritems(path->nodes[0])) {
+			btrfs_item_key_to_cpu(path->nodes[0], &tmp_key, path->slots[0] + 1);
+			if (tmp_key.objectid == bytenr &&
+				tmp_key.type < BTRFS_BLOCK_GROUP_ITEM_KEY) {
+				ret = -EAGAIN;
+				goto out;
+			}
+			goto enoent;
+		}
+
+		if (!keep_locks) {
+			btrfs_release_path(path);
+			path->keep_locks = 1;
+			keep_locks = true;
+			goto again;
+		}
+
 		/*
 		 * To add new inline back ref, we have to make sure
 		 * there is no corresponding back ref item.
@@ -959,13 +978,15 @@  int lookup_inline_extent_backref(struct btrfs_trans_handle *trans,
 			goto out;
 		}
 	}
+enoent:
 	*ref_ret = (struct btrfs_extent_inline_ref *)ptr;
 out:
-	if (insert) {
+	if (keep_locks) {
 		path->keep_locks = 0;
-		path->search_for_extension = 0;
 		btrfs_unlock_up_safe(path, 1);
 	}
+	if (insert)
+		path->search_for_extension = 0;
 	return ret;
 }
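
Folding both review suggestions into the hunk above, that is testing path->keep_locks directly instead of carrying a local keep_locks flag, and declaring tmp_key inside the block that uses it, would leave the new code looking roughly like the fragment below. This is only a sketch of what a v2 could contain, not the patch that was actually posted:

		if (path->slots[0] + 1 < btrfs_header_nritems(path->nodes[0])) {
			/* Declared here, in the only block that uses it. */
			struct btrfs_key tmp_key;

			btrfs_item_key_to_cpu(path->nodes[0], &tmp_key, path->slots[0] + 1);
			if (tmp_key.objectid == bytenr &&
			    tmp_key.type < BTRFS_BLOCK_GROUP_ITEM_KEY) {
				ret = -EAGAIN;
				goto out;
			}
			goto enoent;
		}

		/* No local bool: path->keep_locks itself records the retry state. */
		if (!path->keep_locks) {
			btrfs_release_path(path);
			path->keep_locks = 1;
			goto again;
		}
	...
enoent:
	*ref_ret = (struct btrfs_extent_inline_ref *)ptr;
out:
	if (path->keep_locks) {
		path->keep_locks = 0;
		btrfs_unlock_up_safe(path, 1);
	}
	if (insert)
		path->search_for_extension = 0;
	return ret;

Using path->keep_locks as the retry marker works because the first pass never sets it, so a set flag can only mean "this is already the second, keep_locks search", and the cleanup at out then only has to undo it when that second pass actually ran.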