From patchwork Wed Apr 16 21:50:23 2025
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 14054492
From: Josef Bacik
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v2 1/3] btrfs: convert the buffer_radix to an xarray
Date: Wed, 16 Apr 2025 17:50:23 -0400
Message-ID: <1f3982ec56c49f642ef35698c39e16f56f4bc4f5.1744840038.git.josef@toxicpanda.com>

In order to fully utilize xarray tagging to improve writeback, we need to convert the buffer_radix to a proper xarray. The conversion is relatively straightforward since the radix tree code already uses an xarray underneath; using the xarray directly lets us drop a fair amount of code.

Signed-off-by: Josef Bacik --- fs/btrfs/disk-io.c | 15 ++- fs/btrfs/extent_io.c | 196 +++++++++++++++-------------------- fs/btrfs/fs.h | 4 +- fs/btrfs/tests/btrfs-tests.c | 27 ++--- fs/btrfs/zoned.c | 16 +-- 5 files changed, 113 insertions(+), 145 deletions(-) diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index 59da809b7d57..5593873f5c0f 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -2762,10 +2762,22 @@ static int __cold init_tree_roots(struct btrfs_fs_info *fs_info) return ret; } +/* + * lockdep gets confused between our buffer_xarray which requires IRQ locking + * because we modify marks in the IRQ context, and our delayed inode xarray + * which doesn't have these requirements. Use a class key so lockdep doesn't get + * them mixed up. + */ +static struct lock_class_key buffer_xa_class; + void btrfs_init_fs_info(struct btrfs_fs_info *fs_info) { INIT_RADIX_TREE(&fs_info->fs_roots_radix, GFP_ATOMIC); - INIT_RADIX_TREE(&fs_info->buffer_radix, GFP_ATOMIC); + + /* Use the same flags as mapping->i_pages.
*/ + xa_init_flags(&fs_info->buffer_xarray, XA_FLAGS_LOCK_IRQ | XA_FLAGS_ACCOUNT); + lockdep_set_class(&fs_info->buffer_xarray.xa_lock, &buffer_xa_class); + INIT_LIST_HEAD(&fs_info->trans_list); INIT_LIST_HEAD(&fs_info->dead_roots); INIT_LIST_HEAD(&fs_info->delayed_iputs); @@ -2777,7 +2789,6 @@ void btrfs_init_fs_info(struct btrfs_fs_info *fs_info) spin_lock_init(&fs_info->delayed_iput_lock); spin_lock_init(&fs_info->defrag_inodes_lock); spin_lock_init(&fs_info->super_lock); - spin_lock_init(&fs_info->buffer_lock); spin_lock_init(&fs_info->unused_bgs_lock); spin_lock_init(&fs_info->treelog_bg_lock); spin_lock_init(&fs_info->zone_active_bgs_lock); diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 6cfd286b8bbc..309c86d1a696 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -1893,19 +1893,20 @@ static void set_btree_ioerr(struct extent_buffer *eb) * context. */ static struct extent_buffer *find_extent_buffer_nolock( - const struct btrfs_fs_info *fs_info, u64 start) + struct btrfs_fs_info *fs_info, u64 start) { + XA_STATE(xas, &fs_info->buffer_xarray, start >> fs_info->sectorsize_bits); struct extent_buffer *eb; rcu_read_lock(); - eb = radix_tree_lookup(&fs_info->buffer_radix, - start >> fs_info->sectorsize_bits); - if (eb && atomic_inc_not_zero(&eb->refs)) { - rcu_read_unlock(); - return eb; - } + do { + eb = xas_load(&xas); + } while (xas_retry(&xas, eb)); + + if (eb && !atomic_inc_not_zero(&eb->refs)) + eb = NULL; rcu_read_unlock(); - return NULL; + return eb; } static void end_bbio_meta_write(struct btrfs_bio *bbio) @@ -2769,11 +2770,10 @@ static void detach_extent_buffer_folio(const struct extent_buffer *eb, struct fo if (!btrfs_meta_is_subpage(fs_info)) { /* - * We do this since we'll remove the pages after we've - * removed the eb from the radix tree, so we could race - * and have this page now attached to the new eb. So - * only clear folio if it's still connected to - * this eb. + * We do this since we'll remove the pages after we've removed + * the eb from the xarray, so we could race and have this page + * now attached to the new eb. So only clear folio if it's + * still connected to this eb. */ if (folio_test_private(folio) && folio_get_private(folio) == eb) { BUG_ON(test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags)); @@ -2938,9 +2938,9 @@ static void check_buffer_tree_ref(struct extent_buffer *eb) { int refs; /* - * The TREE_REF bit is first set when the extent_buffer is added - * to the radix tree. It is also reset, if unset, when a new reference - * is created by find_extent_buffer. + * The TREE_REF bit is first set when the extent_buffer is added to the + * xarray. It is also reset, if unset, when a new reference is created + * by find_extent_buffer. * * It is only cleared in two cases: freeing the last non-tree * reference to the extent_buffer when its STALE bit is set or @@ -2952,13 +2952,12 @@ static void check_buffer_tree_ref(struct extent_buffer *eb) * conditions between the calls to check_buffer_tree_ref in those * codepaths and clearing TREE_REF in try_release_extent_buffer. * - * The actual lifetime of the extent_buffer in the radix tree is - * adequately protected by the refcount, but the TREE_REF bit and - * its corresponding reference are not. To protect against this - * class of races, we call check_buffer_tree_ref from the codepaths - * which trigger io. Note that once io is initiated, TREE_REF can no - * longer be cleared, so that is the moment at which any such race is - * best fixed. 
+ * The actual lifetime of the extent_buffer in the xarray is adequately + * protected by the refcount, but the TREE_REF bit and its corresponding + * reference are not. To protect against this class of races, we call + * check_buffer_tree_ref from the codepaths which trigger io. Note that + * once io is initiated, TREE_REF can no longer be cleared, so that is + * the moment at which any such race is best fixed. */ refs = atomic_read(&eb->refs); if (refs >= 2 && test_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags)) @@ -3022,23 +3021,26 @@ struct extent_buffer *alloc_test_extent_buffer(struct btrfs_fs_info *fs_info, return ERR_PTR(-ENOMEM); eb->fs_info = fs_info; again: - ret = radix_tree_preload(GFP_NOFS); - if (ret) { - exists = ERR_PTR(ret); + xa_lock_irq(&fs_info->buffer_xarray); + exists = __xa_cmpxchg(&fs_info->buffer_xarray, + start >> fs_info->sectorsize_bits, NULL, eb, + GFP_NOFS); + if (xa_is_err(exists)) { + ret = xa_err(exists); + xa_unlock_irq(&fs_info->buffer_xarray); + btrfs_release_extent_buffer(eb); + return ERR_PTR(ret); + } + if (exists) { + if (!atomic_inc_not_zero(&exists->refs)) { + /* The extent buffer is being freed, retry. */ + xa_unlock_irq(&fs_info->buffer_xarray); + goto again; + } + xa_unlock_irq(&fs_info->buffer_xarray); goto free_eb; } - spin_lock(&fs_info->buffer_lock); - ret = radix_tree_insert(&fs_info->buffer_radix, - start >> fs_info->sectorsize_bits, eb); - spin_unlock(&fs_info->buffer_lock); - radix_tree_preload_end(); - if (ret == -EEXIST) { - exists = find_extent_buffer(fs_info, start); - if (exists) - goto free_eb; - else - goto again; - } + xa_unlock_irq(&fs_info->buffer_xarray); check_buffer_tree_ref(eb); return eb; @@ -3059,9 +3061,9 @@ static struct extent_buffer *grab_extent_buffer(struct btrfs_fs_info *fs_info, lockdep_assert_held(&folio->mapping->i_private_lock); /* - * For subpage case, we completely rely on radix tree to ensure we - * don't try to insert two ebs for the same bytenr. So here we always - * return NULL and just continue. + * For subpage case, we completely rely on xarray to ensure we don't try + * to insert two ebs for the same bytenr. So here we always return NULL + * and just continue. */ if (btrfs_meta_is_subpage(fs_info)) return NULL; @@ -3194,7 +3196,7 @@ static int attach_eb_folio_to_filemap(struct extent_buffer *eb, int i, /* * To inform we have an extra eb under allocation, so that * detach_extent_buffer_page() won't release the folio private when the - * eb hasn't been inserted into radix tree yet. + * eb hasn't been inserted into the xarray yet. * * The ref will be decreased when the eb releases the page, in * detach_extent_buffer_page(). Thus needs no special handling in the @@ -3328,10 +3330,10 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info, /* * We can't unlock the pages just yet since the extent buffer - * hasn't been properly inserted in the radix tree, this - * opens a race with btree_release_folio which can free a page - * while we are still filling in all pages for the buffer and - * we could crash. + * hasn't been properly inserted in the xarray, this opens a + * race with btree_release_folio which can free a page while we + * are still filling in all pages for the buffer and we could + * crash. 
*/ } if (uptodate) @@ -3340,23 +3342,25 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info, if (page_contig) eb->addr = folio_address(eb->folios[0]) + offset_in_page(eb->start); again: - ret = radix_tree_preload(GFP_NOFS); - if (ret) + xa_lock_irq(&fs_info->buffer_xarray); + existing_eb = __xa_cmpxchg(&fs_info->buffer_xarray, + start >> fs_info->sectorsize_bits, NULL, eb, + GFP_NOFS); + if (xa_is_err(existing_eb)) { + ret = xa_err(existing_eb); + xa_unlock_irq(&fs_info->buffer_xarray); goto out; - - spin_lock(&fs_info->buffer_lock); - ret = radix_tree_insert(&fs_info->buffer_radix, - start >> fs_info->sectorsize_bits, eb); - spin_unlock(&fs_info->buffer_lock); - radix_tree_preload_end(); - if (ret == -EEXIST) { - ret = 0; - existing_eb = find_extent_buffer(fs_info, start); - if (existing_eb) - goto out; - else - goto again; } + if (existing_eb) { + if (!atomic_inc_not_zero(&existing_eb->refs)) { + xa_unlock_irq(&fs_info->buffer_xarray); + goto again; + } + xa_unlock_irq(&fs_info->buffer_xarray); + goto out; + } + xa_unlock_irq(&fs_info->buffer_xarray); + /* add one reference for the tree */ check_buffer_tree_ref(eb); @@ -3426,10 +3430,13 @@ static int release_extent_buffer(struct extent_buffer *eb) spin_unlock(&eb->refs_lock); - spin_lock(&fs_info->buffer_lock); - radix_tree_delete_item(&fs_info->buffer_radix, - eb->start >> fs_info->sectorsize_bits, eb); - spin_unlock(&fs_info->buffer_lock); + /* + * We're erasing, theoretically there will be no allocations, so + * just use GFP_ATOMIC. + */ + xa_cmpxchg_irq(&fs_info->buffer_xarray, + eb->start >> fs_info->sectorsize_bits, eb, NULL, + GFP_ATOMIC); btrfs_leak_debug_del_eb(eb); /* Should be safe to release folios at this point. */ @@ -4260,44 +4267,6 @@ void memmove_extent_buffer(const struct extent_buffer *dst, } } -#define GANG_LOOKUP_SIZE 16 -static struct extent_buffer *get_next_extent_buffer( - const struct btrfs_fs_info *fs_info, struct folio *folio, u64 bytenr) -{ - struct extent_buffer *gang[GANG_LOOKUP_SIZE]; - struct extent_buffer *found = NULL; - u64 folio_start = folio_pos(folio); - u64 cur = folio_start; - - ASSERT(in_range(bytenr, folio_start, PAGE_SIZE)); - lockdep_assert_held(&fs_info->buffer_lock); - - while (cur < folio_start + PAGE_SIZE) { - int ret; - int i; - - ret = radix_tree_gang_lookup(&fs_info->buffer_radix, - (void **)gang, cur >> fs_info->sectorsize_bits, - min_t(unsigned int, GANG_LOOKUP_SIZE, - PAGE_SIZE / fs_info->nodesize)); - if (ret == 0) - goto out; - for (i = 0; i < ret; i++) { - /* Already beyond page end */ - if (gang[i]->start >= folio_start + PAGE_SIZE) - goto out; - /* Found one */ - if (gang[i]->start >= bytenr) { - found = gang[i]; - goto out; - } - } - cur = gang[ret - 1]->start + gang[ret - 1]->len; - } -out: - return found; -} - static int try_release_subpage_extent_buffer(struct folio *folio) { struct btrfs_fs_info *fs_info = folio_to_fs_info(folio); @@ -4306,21 +4275,26 @@ static int try_release_subpage_extent_buffer(struct folio *folio) int ret; while (cur < end) { + XA_STATE(xas, &fs_info->buffer_xarray, + cur >> fs_info->sectorsize_bits); struct extent_buffer *eb = NULL; /* * Unlike try_release_extent_buffer() which uses folio private - * to grab buffer, for subpage case we rely on radix tree, thus - * we need to ensure radix tree consistency. + * to grab buffer, for subpage case we rely on xarray, thus we + * need to ensure xarray tree consistency. 
* - * We also want an atomic snapshot of the radix tree, thus go + * We also want an atomic snapshot of the xarray tree, thus go * with spinlock rather than RCU. */ - spin_lock(&fs_info->buffer_lock); - eb = get_next_extent_buffer(fs_info, folio, cur); + xa_lock_irq(&fs_info->buffer_xarray); + do { + eb = xas_find(&xas, end >> fs_info->sectorsize_bits); + } while (xas_retry(&xas, eb)); + if (!eb) { /* No more eb in the page range after or at cur */ - spin_unlock(&fs_info->buffer_lock); + xa_unlock(&fs_info->buffer_xarray); break; } cur = eb->start + eb->len; @@ -4332,10 +4306,10 @@ static int try_release_subpage_extent_buffer(struct folio *folio) spin_lock(&eb->refs_lock); if (atomic_read(&eb->refs) != 1 || extent_buffer_under_io(eb)) { spin_unlock(&eb->refs_lock); - spin_unlock(&fs_info->buffer_lock); + xa_unlock(&fs_info->buffer_xarray); break; } - spin_unlock(&fs_info->buffer_lock); + xa_unlock_irq(&fs_info->buffer_xarray); /* * If tree ref isn't set then we know the ref on this eb is a diff --git a/fs/btrfs/fs.h b/fs/btrfs/fs.h index bcca43046064..d5d94977860c 100644 --- a/fs/btrfs/fs.h +++ b/fs/btrfs/fs.h @@ -776,10 +776,8 @@ struct btrfs_fs_info { struct btrfs_delayed_root *delayed_root; - /* Extent buffer radix tree */ - spinlock_t buffer_lock; /* Entries are eb->start / sectorsize */ - struct radix_tree_root buffer_radix; + struct xarray buffer_xarray; /* Next backup root to be overwritten */ int backup_root_index; diff --git a/fs/btrfs/tests/btrfs-tests.c b/fs/btrfs/tests/btrfs-tests.c index 02a915eb51fb..741117ce7d3f 100644 --- a/fs/btrfs/tests/btrfs-tests.c +++ b/fs/btrfs/tests/btrfs-tests.c @@ -157,9 +157,9 @@ struct btrfs_fs_info *btrfs_alloc_dummy_fs_info(u32 nodesize, u32 sectorsize) void btrfs_free_dummy_fs_info(struct btrfs_fs_info *fs_info) { - struct radix_tree_iter iter; - void **slot; + XA_STATE(xas, &fs_info->buffer_xarray, 0); struct btrfs_device *dev, *tmp; + struct extent_buffer *eb; if (!fs_info) return; @@ -169,25 +169,16 @@ void btrfs_free_dummy_fs_info(struct btrfs_fs_info *fs_info) test_mnt->mnt_sb->s_fs_info = NULL; - spin_lock(&fs_info->buffer_lock); - radix_tree_for_each_slot(slot, &fs_info->buffer_radix, &iter, 0) { - struct extent_buffer *eb; - - eb = radix_tree_deref_slot_protected(slot, &fs_info->buffer_lock); - if (!eb) + xa_lock_irq(&fs_info->buffer_xarray); + xas_for_each(&xas, eb, ULONG_MAX) { + if (xas_retry(&xas, eb)) continue; - /* Shouldn't happen but that kind of thinking creates CVE's */ - if (radix_tree_exception(eb)) { - if (radix_tree_deref_retry(eb)) - slot = radix_tree_iter_retry(&iter); - continue; - } - slot = radix_tree_iter_resume(slot, &iter); - spin_unlock(&fs_info->buffer_lock); + xas_pause(&xas); + xa_unlock_irq(&fs_info->buffer_xarray); free_extent_buffer_stale(eb); - spin_lock(&fs_info->buffer_lock); + xa_lock_irq(&fs_info->buffer_xarray); } - spin_unlock(&fs_info->buffer_lock); + xa_unlock_irq(&fs_info->buffer_xarray); btrfs_mapping_tree_free(fs_info); list_for_each_entry_safe(dev, tmp, &fs_info->fs_devices->devices, diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c index 7b30700ec930..7ed19ca399f3 100644 --- a/fs/btrfs/zoned.c +++ b/fs/btrfs/zoned.c @@ -2170,28 +2170,22 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group) static void wait_eb_writebacks(struct btrfs_block_group *block_group) { struct btrfs_fs_info *fs_info = block_group->fs_info; + XA_STATE(xas, &fs_info->buffer_xarray, + block_group->start >> fs_info->sectorsize_bits); const u64 end = block_group->start + block_group->length; - struct 
radix_tree_iter iter; struct extent_buffer *eb; - void __rcu **slot; rcu_read_lock(); - radix_tree_for_each_slot(slot, &fs_info->buffer_radix, &iter, - block_group->start >> fs_info->sectorsize_bits) { - eb = radix_tree_deref_slot(slot); - if (!eb) + xas_for_each(&xas, eb, end >> fs_info->sectorsize_bits) { + if (xas_retry(&xas, eb)) continue; - if (radix_tree_deref_retry(eb)) { - slot = radix_tree_iter_retry(&iter); - continue; - } if (eb->start < block_group->start) continue; if (eb->start >= end) break; - slot = radix_tree_iter_resume(slot, &iter); + xas_pause(&xas); rcu_read_unlock(); wait_on_extent_buffer_writeback(eb); rcu_read_lock();

From patchwork Wed Apr 16 21:50:24 2025
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 14054493
From: Josef Bacik
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v2 2/3] btrfs: set DIRTY and WRITEBACK tags on the buffer_xarray
Date: Wed, 16 Apr 2025 17:50:24 -0400
Message-ID: <0eb287136c9a3ca45fceba7ecaa688a7c4d2c303.1744840038.git.josef@toxicpanda.com>

In preparation for changing how we do writeout of extent buffers, start tagging the extent buffer xarray with DIRTY and WRITEBACK to make it easier to find extent buffers that are in either state.
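As background on what these tags enable: an xarray mark behaves like the page cache tags, so a later pass can walk only the marked entries instead of scanning every index. A minimal sketch of the calls involved (illustrative only, not part of this patch — the demo_* names are hypothetical, while xa_store(), xa_set_mark(), xa_for_each_marked() and the PAGECACHE_TAG_* marks are the stock kernel xarray/page-cache API):

	#include <linux/xarray.h>
	#include <linux/fs.h>		/* PAGECACHE_TAG_DIRTY, PAGECACHE_TAG_WRITEBACK */
	#include <linux/printk.h>

	static DEFINE_XARRAY(demo_xarray);

	/*
	 * Hypothetical helper: store an entry and mark it dirty. A mark only
	 * sticks to an index that currently holds a non-NULL entry.
	 */
	static void demo_tag_dirty(unsigned long index, void *item)
	{
		xa_store(&demo_xarray, index, item, GFP_KERNEL);
		xa_set_mark(&demo_xarray, index, PAGECACHE_TAG_DIRTY);
	}

	/*
	 * Hypothetical helper: visit only the dirty entries, skipping
	 * everything that is not marked.
	 */
	static void demo_walk_dirty(void)
	{
		unsigned long index;
		void *entry;

		xa_for_each_marked(&demo_xarray, index, entry, PAGECACHE_TAG_DIRTY)
			pr_info("dirty entry at %lu: %p\n", index, entry);
	}

Patch 3 relies on exactly this property when it looks up DIRTY- and WRITEBACK-tagged extent buffers directly instead of iterating folios.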
Signed-off-by: Josef Bacik --- fs/btrfs/extent_io.c | 41 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 41 insertions(+) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 309c86d1a696..dfed1157ebe1 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -1801,8 +1801,19 @@ static noinline_for_stack bool lock_extent_buffer_for_io(struct extent_buffer *e */ spin_lock(&eb->refs_lock); if (test_and_clear_bit(EXTENT_BUFFER_DIRTY, &eb->bflags)) { + XA_STATE(xas, &fs_info->buffer_xarray, + eb->start >> fs_info->sectorsize_bits); + unsigned long flags; + set_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags); spin_unlock(&eb->refs_lock); + + xas_lock_irqsave(&xas, flags); + xas_load(&xas); + xas_set_mark(&xas, PAGECACHE_TAG_WRITEBACK); + xas_clear_mark(&xas, PAGECACHE_TAG_DIRTY); + xas_unlock_irqrestore(&xas, flags); + btrfs_set_header_flag(eb, BTRFS_HEADER_FLAG_WRITTEN); percpu_counter_add_batch(&fs_info->dirty_metadata_bytes, -eb->len, @@ -1888,6 +1899,33 @@ static void set_btree_ioerr(struct extent_buffer *eb) } } +static void buffer_xarray_set_mark(const struct extent_buffer *eb, xa_mark_t mark) +{ + struct btrfs_fs_info *fs_info = eb->fs_info; + XA_STATE(xas, &fs_info->buffer_xarray, + eb->start >> fs_info->sectorsize_bits); + unsigned long flags; + + xas_lock_irqsave(&xas, flags); + xas_load(&xas); + xas_set_mark(&xas, mark); + xas_unlock_irqrestore(&xas, flags); +} + +static void buffer_xarray_clear_mark(const struct extent_buffer *eb, + xa_mark_t mark) +{ + struct btrfs_fs_info *fs_info = eb->fs_info; + XA_STATE(xas, &fs_info->buffer_xarray, + eb->start >> fs_info->sectorsize_bits); + unsigned long flags; + + xas_lock_irqsave(&xas, flags); + xas_load(&xas); + xas_clear_mark(&xas, mark); + xas_unlock_irqrestore(&xas, flags); +} + /* * The endio specific version which won't touch any unsafe spinlock in endio * context. 
@@ -1921,6 +1959,7 @@ static void end_bbio_meta_write(struct btrfs_bio *bbio) btrfs_meta_folio_clear_writeback(fi.folio, eb); } + buffer_xarray_clear_mark(eb, PAGECACHE_TAG_WRITEBACK); clear_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags); smp_mb__after_atomic(); wake_up_bit(&eb->bflags, EXTENT_BUFFER_WRITEBACK); @@ -3537,6 +3576,7 @@ void btrfs_clear_buffer_dirty(struct btrfs_trans_handle *trans, if (!test_and_clear_bit(EXTENT_BUFFER_DIRTY, &eb->bflags)) return; + buffer_xarray_clear_mark(eb, PAGECACHE_TAG_DIRTY); percpu_counter_add_batch(&fs_info->dirty_metadata_bytes, -eb->len, fs_info->dirty_metadata_batch); @@ -3585,6 +3625,7 @@ void set_extent_buffer_dirty(struct extent_buffer *eb) folio_lock(eb->folios[0]); for (int i = 0; i < num_extent_folios(eb); i++) btrfs_meta_folio_set_dirty(eb->folios[i], eb); + buffer_xarray_set_mark(eb, PAGECACHE_TAG_DIRTY); if (subpage) folio_unlock(eb->folios[0]); percpu_counter_add_batch(&eb->fs_info->dirty_metadata_bytes,

From patchwork Wed Apr 16 21:50:25 2025
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 14054494
From: Josef Bacik
To: linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v2 3/3] btrfs: use buffer radix for extent buffer writeback operations
Date: Wed, 16 Apr 2025 17:50:25 -0400
Message-ID: <306c71cc0374fe1a08e97e25f83a99c360494dd8.1744840038.git.josef@toxicpanda.com>

Currently we have an ugly back and forth in btree writeback where we find the folio, find the eb associated with that folio, and then attempt writeback. This results in two different paths for subpage ebs and >= pagesize ebs.

Clean this up by adding our own infrastructure around looking up tagged ebs and writing them out directly. This allows us to unify the subpage and >= pagesize IO paths, resulting in a much cleaner writeback path for extent buffers.

I ran this through fsperf on a VM with 8 CPUs and 16GiB of RAM. I used smallfiles100k, but reduced the file count to 1k to make it run faster. fsperf was run with -n 10 for both runs, so both the baseline and the test numbers are averages of 10 runs. The results are as follows, with statistically significant improvements marked with *; there were no regressions.
smallfiles100k results metric baseline current stdev diff ================================================================================ avg_commit_ms 68.58 58.44 3.35 -14.79% * commits 270.60 254.70 16.24 -5.88% dev_read_iops 48 48 0 0.00% dev_read_kbytes 1044 1044 0 0.00% dev_write_iops 866117.90 850028.10 14292.20 -1.86% dev_write_kbytes 10939976.40 10605701.20 351330.32 -3.06% elapsed 49.30 33 1.64 -33.06% * end_state_mount_ns 41251498.80 35773220.70 2531205.32 -13.28% * end_state_umount_ns 1.90e+09 1.50e+09 14186226.85 -21.38% * max_commit_ms 139 111.60 9.72 -19.71% * sys_cpu 4.90 3.86 0.88 -21.29% write_bw_bytes 42935768.20 64318451.10 1609415.05 49.80% * write_clat_ns_mean 366431.69 243202.60 14161.98 -33.63% * write_clat_ns_p50 49203.20 20992 264.40 -57.34% * write_clat_ns_p99 827392 653721.60 65904.74 -20.99% * write_io_kbytes 2035940 2035940 0 0.00% write_iops 10482.37 15702.75 392.92 49.80% * write_lat_ns_max 1.01e+08 90516129 3910102.06 -10.29% * write_lat_ns_mean 366556.19 243308.48 14154.51 -33.62% * As you can see we get about a 33% decrease runtime, with a 50% throughput increase, which is pretty significant. Signed-off-by: Josef Bacik --- fs/btrfs/extent_io.c | 344 ++++++++++++++++++++--------------------- fs/btrfs/extent_io.h | 1 + fs/btrfs/transaction.c | 5 +- 3 files changed, 173 insertions(+), 177 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index dfed1157ebe1..2503fc1f704b 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -1926,6 +1926,117 @@ static void buffer_xarray_clear_mark(const struct extent_buffer *eb, xas_unlock_irqrestore(&xas, flags); } +static void buffer_xarray_tag_for_writeback(struct btrfs_fs_info *fs_info, + unsigned long start, unsigned long end) +{ + XA_STATE(xas, &fs_info->buffer_xarray, start); + unsigned int tagged = 0; + void *eb; + + xas_lock_irq(&xas); + xas_for_each_marked(&xas, eb, end, PAGECACHE_TAG_DIRTY) { + xas_set_mark(&xas, PAGECACHE_TAG_TOWRITE); + if (++tagged % XA_CHECK_SCHED) + continue; + xas_pause(&xas); + xas_unlock_irq(&xas); + cond_resched(); + xas_lock_irq(&xas); + } + xas_unlock_irq(&xas); +} + +struct eb_batch { + unsigned int nr; + unsigned int cur; + struct extent_buffer *ebs[PAGEVEC_SIZE]; +}; + +static inline bool eb_batch_add(struct eb_batch *batch, + struct extent_buffer *eb) +{ + batch->ebs[batch->nr++] = eb; + return (batch->nr < PAGEVEC_SIZE); +} + +static inline void eb_batch_init(struct eb_batch *batch) +{ + batch->nr = 0; + batch->cur = 0; +} + +static inline unsigned int eb_batch_count(struct eb_batch *batch) +{ + return batch->nr; +} + +static inline struct extent_buffer *eb_batch_next(struct eb_batch *batch) +{ + if (batch->cur >= batch->nr) + return NULL; + return batch->ebs[batch->cur++]; +} + +static inline void eb_batch_release(struct eb_batch *batch) +{ + for (unsigned int i = 0; i < batch->nr; i++) + free_extent_buffer(batch->ebs[i]); + eb_batch_init(batch); +} + +static inline struct extent_buffer *find_get_eb(struct xa_state *xas, unsigned long max, + xa_mark_t mark) +{ + struct extent_buffer *eb; + +retry: + eb = xas_find_marked(xas, max, mark); + + if (xas_retry(xas, eb)) + goto retry; + + if (!eb) + return NULL; + + if (!atomic_inc_not_zero(&eb->refs)) + goto reset; + + if (unlikely(eb != xas_reload(xas))) { + free_extent_buffer(eb); + goto reset; + } + + return eb; +reset: + xas_reset(xas); + goto retry; +} + +static unsigned int buffer_xarray_get_ebs_tag(struct btrfs_fs_info *fs_info, + unsigned long *start, + unsigned long end, xa_mark_t tag, + struct 
eb_batch *batch) +{ + XA_STATE(xas, &fs_info->buffer_xarray, *start); + struct extent_buffer *eb; + + rcu_read_lock(); + while ((eb = find_get_eb(&xas, end, tag)) != NULL) { + if (!eb_batch_add(batch, eb)) { + *start = (eb->start + eb->len) >> fs_info->sectorsize_bits; + goto out; + } + } + if (end == (unsigned long)-1) + *start = (unsigned long)-1; + else + *start = end + 1; +out: + rcu_read_unlock(); + + return eb_batch_count(batch); +} + /* * The endio specific version which won't touch any unsafe spinlock in endio * context. @@ -2031,163 +2142,37 @@ static noinline_for_stack void write_one_eb(struct extent_buffer *eb, } /* - * Submit one subpage btree page. + * Wait for all eb writeback in the given range to finish. * - * The main difference to submit_eb_page() is: - * - Page locking - * For subpage, we don't rely on page locking at all. - * - * - Flush write bio - * We only flush bio if we may be unable to fit current extent buffers into - * current bio. - * - * Return >=0 for the number of submitted extent buffers. - * Return <0 for fatal error. + * @fs_info: the fs_info for this file system + * @start: the offset of the range to start waiting on writeback + * @end: the end of the range, inclusive. This is meant to be used in + * conjuction with wait_marked_extents, so this will usually be + * the_next_eb->start - 1. */ -static int submit_eb_subpage(struct folio *folio, struct writeback_control *wbc) +void btree_wait_writeback_range(struct btrfs_fs_info *fs_info, u64 start, u64 end) { - struct btrfs_fs_info *fs_info = folio_to_fs_info(folio); - int submitted = 0; - u64 folio_start = folio_pos(folio); - int bit_start = 0; - int sectors_per_node = fs_info->nodesize >> fs_info->sectorsize_bits; - const unsigned int blocks_per_folio = btrfs_blocks_per_folio(fs_info, folio); + struct eb_batch batch; + unsigned long start_index = start >> fs_info->sectorsize_bits; + unsigned long end_index = end >> fs_info->sectorsize_bits; - /* Lock and write each dirty extent buffers in the range */ - while (bit_start < blocks_per_folio) { - struct btrfs_subpage *subpage = folio_get_private(folio); + eb_batch_init(&batch); + while (start_index <= end_index) { struct extent_buffer *eb; - unsigned long flags; - u64 start; + unsigned int nr_ebs; - /* - * Take private lock to ensure the subpage won't be detached - * in the meantime. - */ - spin_lock(&folio->mapping->i_private_lock); - if (!folio_test_private(folio)) { - spin_unlock(&folio->mapping->i_private_lock); + nr_ebs = buffer_xarray_get_ebs_tag(fs_info, &start_index, + end_index, + PAGECACHE_TAG_WRITEBACK, + &batch); + if (!nr_ebs) break; - } - spin_lock_irqsave(&subpage->lock, flags); - if (!test_bit(bit_start + btrfs_bitmap_nr_dirty * blocks_per_folio, - subpage->bitmaps)) { - spin_unlock_irqrestore(&subpage->lock, flags); - spin_unlock(&folio->mapping->i_private_lock); - bit_start += sectors_per_node; - continue; - } - start = folio_start + bit_start * fs_info->sectorsize; - bit_start += sectors_per_node; - - /* - * Here we just want to grab the eb without touching extra - * spin locks, so call find_extent_buffer_nolock(). - */ - eb = find_extent_buffer_nolock(fs_info, start); - spin_unlock_irqrestore(&subpage->lock, flags); - spin_unlock(&folio->mapping->i_private_lock); - - /* - * The eb has already reached 0 refs thus find_extent_buffer() - * doesn't return it. We don't need to write back such eb - * anyway. 
- */ - if (!eb) - continue; - - if (lock_extent_buffer_for_io(eb, wbc)) { - write_one_eb(eb, wbc); - submitted++; - } - free_extent_buffer(eb); + while ((eb = eb_batch_next(&batch)) != NULL) + wait_on_extent_buffer_writeback(eb); + eb_batch_release(&batch); + cond_resched(); } - return submitted; -} - -/* - * Submit all page(s) of one extent buffer. - * - * @page: the page of one extent buffer - * @eb_context: to determine if we need to submit this page, if current page - * belongs to this eb, we don't need to submit - * - * The caller should pass each page in their bytenr order, and here we use - * @eb_context to determine if we have submitted pages of one extent buffer. - * - * If we have, we just skip until we hit a new page that doesn't belong to - * current @eb_context. - * - * If not, we submit all the page(s) of the extent buffer. - * - * Return >0 if we have submitted the extent buffer successfully. - * Return 0 if we don't need to submit the page, as it's already submitted by - * previous call. - * Return <0 for fatal error. - */ -static int submit_eb_page(struct folio *folio, struct btrfs_eb_write_context *ctx) -{ - struct writeback_control *wbc = ctx->wbc; - struct address_space *mapping = folio->mapping; - struct extent_buffer *eb; - int ret; - - if (!folio_test_private(folio)) - return 0; - - if (btrfs_meta_is_subpage(folio_to_fs_info(folio))) - return submit_eb_subpage(folio, wbc); - - spin_lock(&mapping->i_private_lock); - if (!folio_test_private(folio)) { - spin_unlock(&mapping->i_private_lock); - return 0; - } - - eb = folio_get_private(folio); - - /* - * Shouldn't happen and normally this would be a BUG_ON but no point - * crashing the machine for something we can survive anyway. - */ - if (WARN_ON(!eb)) { - spin_unlock(&mapping->i_private_lock); - return 0; - } - - if (eb == ctx->eb) { - spin_unlock(&mapping->i_private_lock); - return 0; - } - ret = atomic_inc_not_zero(&eb->refs); - spin_unlock(&mapping->i_private_lock); - if (!ret) - return 0; - - ctx->eb = eb; - - ret = btrfs_check_meta_write_pointer(eb->fs_info, ctx); - if (ret) { - if (ret == -EBUSY) - ret = 0; - free_extent_buffer(eb); - return ret; - } - - if (!lock_extent_buffer_for_io(eb, wbc)) { - free_extent_buffer(eb); - return 0; - } - /* Implies write in zoned mode. */ - if (ctx->zoned_bg) { - /* Mark the last eb in the block group. */ - btrfs_schedule_zone_finish_bg(ctx->zoned_bg, eb); - ctx->zoned_bg->meta_write_pointer += eb->len; - } - write_one_eb(eb, wbc); - free_extent_buffer(eb); - return 1; } int btree_write_cache_pages(struct address_space *mapping, @@ -2198,25 +2183,27 @@ int btree_write_cache_pages(struct address_space *mapping, int ret = 0; int done = 0; int nr_to_write_done = 0; - struct folio_batch fbatch; - unsigned int nr_folios; - pgoff_t index; - pgoff_t end; /* Inclusive */ + struct eb_batch batch; + unsigned int nr_ebs; + unsigned long index; + unsigned long end; int scanned = 0; xa_mark_t tag; - folio_batch_init(&fbatch); + eb_batch_init(&batch); if (wbc->range_cyclic) { - index = mapping->writeback_index; /* Start from prev offset */ + index = (mapping->writeback_index << PAGE_SHIFT) >> fs_info->sectorsize_bits; end = -1; + /* * Start from the beginning does not need to cycle over the * range, mark it as scanned. 
*/ scanned = (index == 0); } else { - index = wbc->range_start >> PAGE_SHIFT; - end = wbc->range_end >> PAGE_SHIFT; + index = wbc->range_start >> fs_info->sectorsize_bits; + end = wbc->range_end >> fs_info->sectorsize_bits; + scanned = 1; } if (wbc->sync_mode == WB_SYNC_ALL) @@ -2226,31 +2213,40 @@ int btree_write_cache_pages(struct address_space *mapping, btrfs_zoned_meta_io_lock(fs_info); retry: if (wbc->sync_mode == WB_SYNC_ALL) - tag_pages_for_writeback(mapping, index, end); + buffer_xarray_tag_for_writeback(fs_info, index, end); while (!done && !nr_to_write_done && (index <= end) && - (nr_folios = filemap_get_folios_tag(mapping, &index, end, - tag, &fbatch))) { - unsigned i; + (nr_ebs = buffer_xarray_get_ebs_tag(fs_info, &index, end, tag, + &batch))) { + struct extent_buffer *eb; - for (i = 0; i < nr_folios; i++) { - struct folio *folio = fbatch.folios[i]; + while ((eb = eb_batch_next(&batch)) != NULL) { + ctx.eb = eb; - ret = submit_eb_page(folio, &ctx); - if (ret == 0) + ret = btrfs_check_meta_write_pointer(eb->fs_info, &ctx); + if (ret) { + if (ret == -EBUSY) + ret = 0; + if (ret) { + done = 1; + break; + } + free_extent_buffer(eb); continue; - if (ret < 0) { - done = 1; - break; } - /* - * the filesystem may choose to bump up nr_to_write. - * We have to make sure to honor the new nr_to_write - * at any time - */ - nr_to_write_done = wbc->nr_to_write <= 0; + if (!lock_extent_buffer_for_io(eb, wbc)) + continue; + + /* Implies write in zoned mode. */ + if (ctx.zoned_bg) { + /* Mark the last eb in the block group. */ + btrfs_schedule_zone_finish_bg(ctx.zoned_bg, eb); + ctx.zoned_bg->meta_write_pointer += eb->len; + } + write_one_eb(eb, wbc); } - folio_batch_release(&fbatch); + nr_to_write_done = wbc->nr_to_write <= 0; + eb_batch_release(&batch); cond_resched(); } if (!scanned && !done) { diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h index b344162f790c..4f0cf5b0d38f 100644 --- a/fs/btrfs/extent_io.h +++ b/fs/btrfs/extent_io.h @@ -240,6 +240,7 @@ void extent_write_locked_range(struct inode *inode, const struct folio *locked_f int btrfs_writepages(struct address_space *mapping, struct writeback_control *wbc); int btree_write_cache_pages(struct address_space *mapping, struct writeback_control *wbc); +void btree_wait_writeback_range(struct btrfs_fs_info *fs_info, u64 start, u64 end); void btrfs_readahead(struct readahead_control *rac); int set_folio_extent_mapped(struct folio *folio); void clear_folio_extent_mapped(struct folio *folio); diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c index 39e48bf610a1..b72ac8b70e0e 100644 --- a/fs/btrfs/transaction.c +++ b/fs/btrfs/transaction.c @@ -1155,7 +1155,7 @@ int btrfs_write_marked_extents(struct btrfs_fs_info *fs_info, if (!ret) ret = filemap_fdatawrite_range(mapping, start, end); if (!ret && wait_writeback) - ret = filemap_fdatawait_range(mapping, start, end); + btree_wait_writeback_range(fs_info, start, end); btrfs_free_extent_state(cached_state); if (ret) break; @@ -1175,7 +1175,6 @@ int btrfs_write_marked_extents(struct btrfs_fs_info *fs_info, static int __btrfs_wait_marked_extents(struct btrfs_fs_info *fs_info, struct extent_io_tree *dirty_pages) { - struct address_space *mapping = fs_info->btree_inode->i_mapping; struct extent_state *cached_state = NULL; u64 start = 0; u64 end; @@ -1196,7 +1195,7 @@ static int __btrfs_wait_marked_extents(struct btrfs_fs_info *fs_info, if (ret == -ENOMEM) ret = 0; if (!ret) - ret = filemap_fdatawait_range(mapping, start, end); + btree_wait_writeback_range(fs_info, start, end); 
btrfs_free_extent_state(cached_state); if (ret) break;