
[v3] btrfs: Batch up release of reserved metadata for delayed items used for deletion

Message ID 20220617125334.1067259-1-nborisov@suse.com
State New, archived
Series [v3] btrfs: Batch up release of reserved metadata for delayed items used for deletion

Commit Message

Nikolay Borisov June 17, 2022, 12:53 p.m. UTC
With Filipe's recent rework of the delayed inode code, one aspect that is
still not batched is the release of the reserved metadata of a delayed
inode's deletion items. With this patch applied on top of Filipe's rework,
and running the same test described in the patch titled
"btrfs: improve batch deletion of delayed dir index items", I observe
the following change in the number of calls to btrfs_block_rsv_release:

Before this change:
@block_rsv_release: 1004
@btrfs_delete_delayed_items_total_time: 14602
@delete_batches: 505

After:
@block_rsv_release: 510
@btrfs_delete_delayed_items_total_time: 13643
@delete_batches: 507
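
For illustration, a minimal sketch of what the patch changes, condensed from
the diff below (this is not the kernel code itself): previously every delayed
item released its metadata reservation individually in the cleanup loop, while
now the reserved bytes are summed as the batch is built and released with a
single call.

	/*
	 * Illustrative sketch only, condensed from the patch below.
	 * Old behaviour: one block reserve release per delayed item.
	 */
	list_for_each_entry_safe(curr, next, &batch_list, tree_list) {
		btrfs_delayed_item_release_metadata(root, curr);
		list_del(&curr->tree_list);
		btrfs_release_delayed_item(curr);
	}

	/*
	 * New behaviour: accumulate the reserved bytes while building the
	 * batch, then release them once for the whole batch.
	 */
	u64 total_reserved_size = item->bytes_reserved;
	/* ... for each additional item added to the batch ... */
	total_reserved_size += curr->bytes_reserved;
	/* ... after btrfs_del_items() has succeeded ... */
	btrfs_block_rsv_release(fs_info, &fs_info->delayed_block_rsv,
				total_reserved_size, NULL);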

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
---
V3:
 * Print the inode number in the tracepoint (this time for real)

 * Ensure the batched release happens only if total_reserved_size is a
 positive number, which implies that space is only released outside of the
 log recovery context (see the sketch below the changelog).

V2:
 * Improved subject wording to make it more clear (Filipe)

 * Print the inode number in the tracepoint (Filipe)

 * Reference the test case from Filipe's initial patch more explicitly, to
 make it clearer how the numbers above are derived.
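
As a companion to the V3 note above, here is the guard in simplified form
(taken from the patch below): during log recovery the delayed items carry no
reservation, so the accumulated total stays zero and both the tracepoint and
the release are skipped.

	/* Sketch of the V3 guard, condensed from the patch below. */
	if (total_reserved_size > 0) {
		/* Items queued during log recovery have no reserved space. */
		trace_btrfs_space_reservation(fs_info, "delayed_item",
					      item->key.objectid,
					      total_reserved_size, 0);
		btrfs_block_rsv_release(fs_info, &fs_info->delayed_block_rsv,
					total_reserved_size, NULL);
	}
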
 fs/btrfs/delayed-inode.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

--
2.25.1

Comments

Filipe Manana June 17, 2022, 2:41 p.m. UTC | #1
On Fri, Jun 17, 2022 at 03:53:34PM +0300, Nikolay Borisov wrote:
> With Filipe's recent rework of the delayed inode code, one aspect that is
> still not batched is the release of the reserved metadata of a delayed
> inode's deletion items. With this patch applied on top of Filipe's rework,
> and running the same test described in the patch titled
> "btrfs: improve batch deletion of delayed dir index items", I observe
> the following change in the number of calls to btrfs_block_rsv_release:
> 
> Before this change:
> @block_rsv_release: 1004
> @btrfs_delete_delayed_items_total_time: 14602
> @delete_batches: 505
> 
> After:
> @block_rsv_release: 510
> @btrfs_delete_delayed_items_total_time: 13643
> @delete_batches: 507
> 
> Signed-off-by: Nikolay Borisov <nborisov@suse.com>

Looks good now, thanks.

Reviewed-by: Filipe Manana <fdmanana@suse.com>

> ---
> V3:
>  * Print the inode number in the tracepoint (this time for real)
> 
>  * Ensure the batched release happens only if total_reserved_size is a
>  positive number, which implies that space is only released outside of the
>  log recovery context.
> 
> V2:
>  * Improved subject wording to make it more clear (Filipe)
> 
>  * Print the inode number in the tracepoint (Filipe)
> 
>  * Reference the test case from Filipe's initial patch more explicitly, to
>  make it clearer how the numbers above are derived.
>  fs/btrfs/delayed-inode.c | 17 ++++++++++++++++-
>  1 file changed, 16 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
> index e1e856436ad5..9d618226005b 100644
> --- a/fs/btrfs/delayed-inode.c
> +++ b/fs/btrfs/delayed-inode.c
> @@ -800,11 +800,13 @@ static int btrfs_batch_delete_items(struct btrfs_trans_handle *trans,
>  				    struct btrfs_path *path,
>  				    struct btrfs_delayed_item *item)
>  {
> +	struct btrfs_fs_info *fs_info = root->fs_info;
>  	struct btrfs_delayed_item *curr, *next;
>  	struct extent_buffer *leaf = path->nodes[0];
>  	LIST_HEAD(batch_list);
>  	int nitems, slot, last_slot;
>  	int ret;
> +	u64 total_reserved_size = item->bytes_reserved;
> 
>  	ASSERT(leaf != NULL);
> 
> @@ -841,14 +843,27 @@ static int btrfs_batch_delete_items(struct btrfs_trans_handle *trans,
>  		nitems++;
>  		curr = next;
>  		list_add_tail(&curr->tree_list, &batch_list);
> +		total_reserved_size += curr->bytes_reserved;
>  	}
> 
>  	ret = btrfs_del_items(trans, root, path, path->slots[0], nitems);
>  	if (ret)
>  		return ret;
> 
> +	/* In case of BTRFS_FS_LOG_RECOVERING items won't have reserved space */
> +	if (total_reserved_size > 0) {
> +		/*
> +		 * Check btrfs_delayed_item_reserve_metadata() to see why we don't need
> +		 * to release/reserve qgroup space.
> +		 */
> +		trace_btrfs_space_reservation(fs_info, "delayed_item",
> +					      item->key.objectid, total_reserved_size,
> +					      0);
> +		btrfs_block_rsv_release(fs_info, &fs_info->delayed_block_rsv,
> +					total_reserved_size, NULL);
> +	}
> +
>  	list_for_each_entry_safe(curr, next, &batch_list, tree_list) {
> -		btrfs_delayed_item_release_metadata(root, curr);
>  		list_del(&curr->tree_list);
>  		btrfs_release_delayed_item(curr);
>  	}
> --
> 2.25.1
>
David Sterba June 17, 2022, 3:11 p.m. UTC | #2
On Fri, Jun 17, 2022 at 03:53:34PM +0300, Nikolay Borisov wrote:
> With Filipe's recent rework of the delayed inode code, one aspect that is
> still not batched is the release of the reserved metadata of a delayed
> inode's deletion items. With this patch applied on top of Filipe's rework,
> and running the same test described in the patch titled
> "btrfs: improve batch deletion of delayed dir index items", I observe
> the following change in the number of calls to btrfs_block_rsv_release:
> 
> Before this change:
> @block_rsv_release: 1004
> @btrfs_delete_delayed_items_total_time: 14602
> @delete_batches: 505
> 
> After:
> @block_rsv_release: 510
> @btrfs_delete_delayed_items_total_time: 13643
> @delete_batches: 507
> 
> Signed-off-by: Nikolay Borisov <nborisov@suse.com>

Added to misc-next, thanks.

Patch

diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
index e1e856436ad5..9d618226005b 100644
--- a/fs/btrfs/delayed-inode.c
+++ b/fs/btrfs/delayed-inode.c
@@ -800,11 +800,13 @@ static int btrfs_batch_delete_items(struct btrfs_trans_handle *trans,
 				    struct btrfs_path *path,
 				    struct btrfs_delayed_item *item)
 {
+	struct btrfs_fs_info *fs_info = root->fs_info;
 	struct btrfs_delayed_item *curr, *next;
 	struct extent_buffer *leaf = path->nodes[0];
 	LIST_HEAD(batch_list);
 	int nitems, slot, last_slot;
 	int ret;
+	u64 total_reserved_size = item->bytes_reserved;

 	ASSERT(leaf != NULL);

@@ -841,14 +843,27 @@ static int btrfs_batch_delete_items(struct btrfs_trans_handle *trans,
 		nitems++;
 		curr = next;
 		list_add_tail(&curr->tree_list, &batch_list);
+		total_reserved_size += curr->bytes_reserved;
 	}

 	ret = btrfs_del_items(trans, root, path, path->slots[0], nitems);
 	if (ret)
 		return ret;

+	/* In case of BTRFS_FS_LOG_RECOVERING items won't have reserved space */
+	if (total_reserved_size > 0) {
+		/*
+		 * Check btrfs_delayed_item_reserve_metadata() to see why we don't need
+		 * to release/reserve qgroup space.
+		 */
+		trace_btrfs_space_reservation(fs_info, "delayed_item",
+					      item->key.objectid, total_reserved_size,
+					      0);
+		btrfs_block_rsv_release(fs_info, &fs_info->delayed_block_rsv,
+					total_reserved_size, NULL);
+	}
+
 	list_for_each_entry_safe(curr, next, &batch_list, tree_list) {
-		btrfs_delayed_item_release_metadata(root, curr);
 		list_del(&curr->tree_list);
 		btrfs_release_delayed_item(curr);
 	}