From patchwork Wed Dec 18 09:41:17 2024
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 13913300
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 01/18] btrfs: rename btrfs_fs_info::sectorsize to blocksize for disk-io.c
Date: Wed, 18 Dec 2024 20:11:17 +1030
Message-ID: <7d28eec4349d9b4ec5d7097e5194418a9cb16883.1734514696.git.wqu@suse.com>

All other file systems use the terminology "block size" for the minimum
data block size, but for historical reasons btrfs has used "sector size"
since day 1. Furthermore the kernel has its own sector size, fixed at 512
bytes as the minimal supported block IO size. This can cause confusion
when talking with other MM/FS people.

So here we rename btrfs_fs_info::sectorsize to blocksize. But there are
already over 800 such usages across btrfs, so to make the transition
smoother, for now @sectorsize and @blocksize are placed inside an
anonymous union, so that both names can be used until we finish the full
transition.
Signed-off-by: Qu Wenruo
---
 fs/btrfs/disk-io.c | 82 +++++++++++++++++++++++-----------------------
 fs/btrfs/fs.h      | 25 +++++++++-----
 2 files changed, 58 insertions(+), 49 deletions(-)

diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index eff0dd1ae62f..d3d2c9e2356a 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -555,7 +555,7 @@ static bool btree_dirty_folio(struct address_space *mapping,
 	int cur_bit = 0;
 	u64 page_start = folio_pos(folio);
 
-	if (fs_info->sectorsize == PAGE_SIZE) {
+	if (fs_info->blocksize == PAGE_SIZE) {
 		eb = folio_get_private(folio);
 		BUG_ON(!eb);
 		BUG_ON(!test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags));
@@ -579,7 +579,7 @@ static bool btree_dirty_folio(struct address_space *mapping,
 			continue;
 		}
 		spin_unlock_irqrestore(&subpage->lock, flags);
-		cur = page_start + cur_bit * fs_info->sectorsize;
+		cur = page_start + cur_bit * fs_info->blocksize;
 
 		eb = find_extent_buffer(fs_info, cur);
 		ASSERT(eb);
@@ -588,7 +588,7 @@ static bool btree_dirty_folio(struct address_space *mapping,
 		btrfs_assert_tree_write_locked(eb);
 		free_extent_buffer(eb);
-		cur_bit += (fs_info->nodesize >> fs_info->sectorsize_bits) - 1;
+		cur_bit += (fs_info->nodesize >> fs_info->blocksize_bits) - 1;
 	}
 	return filemap_dirty_folio(mapping, folio);
 }
@@ -738,7 +738,7 @@ struct btrfs_root *btrfs_alloc_dummy_root(struct btrfs_fs_info *fs_info)
 	if (!root)
 		return ERR_PTR(-ENOMEM);
 
-	/* We don't use the stripesize in selftest, set it as sectorsize */
+	/* We don't use the stripesize in selftest, set it as blocksize */
 	root->alloc_bytenr = 0;
 
 	return root;
@@ -2341,7 +2341,7 @@ int btrfs_validate_super(const struct btrfs_fs_info *fs_info,
 			 const struct btrfs_super_block *sb, int mirror_num)
 {
 	u64 nodesize = btrfs_super_nodesize(sb);
-	u64 sectorsize = btrfs_super_sectorsize(sb);
+	u64 blocksize = btrfs_super_sectorsize(sb);
 	int ret = 0;
 	const bool ignore_flags = btrfs_test_opt(fs_info, IGNORESUPERFLAGS);
@@ -2378,31 +2378,31 @@ int btrfs_validate_super(const struct btrfs_fs_info *fs_info,
 	}
 
 	/*
-	 * Check sectorsize and nodesize first, other check will need it.
-	 * Check all possible sectorsize(4K, 8K, 16K, 32K, 64K) here.
+	 * Check blocksize and nodesize first, other check will need it.
+	 * Check all possible blocksize(4K, 8K, 16K, 32K, 64K) here.
 	 */
-	if (!is_power_of_2(sectorsize) || sectorsize < 4096 ||
-	    sectorsize > BTRFS_MAX_METADATA_BLOCKSIZE) {
-		btrfs_err(fs_info, "invalid sectorsize %llu", sectorsize);
+	if (!is_power_of_2(blocksize) || blocksize < 4096 ||
+	    blocksize > BTRFS_MAX_METADATA_BLOCKSIZE) {
+		btrfs_err(fs_info, "invalid blocksize %llu", blocksize);
 		ret = -EINVAL;
 	}
 
 	/*
-	 * We only support at most two sectorsizes: 4K and PAGE_SIZE.
+	 * We only support at most two blocksizes: 4K and PAGE_SIZE.
 	 *
-	 * We can support 16K sectorsize with 64K page size without problem,
-	 * but such sectorsize/pagesize combination doesn't make much sense.
+	 * We can support 16K blocksize with 64K page size without problem,
+	 * but such blocksize/pagesize combination doesn't make much sense.
 	 * 4K will be our future standard, PAGE_SIZE is supported from the very
 	 * beginning.
 	 */
-	if (sectorsize > PAGE_SIZE || (sectorsize != SZ_4K && sectorsize != PAGE_SIZE)) {
+	if (blocksize > PAGE_SIZE || (blocksize != SZ_4K && blocksize != PAGE_SIZE)) {
 		btrfs_err(fs_info,
-			  "sectorsize %llu not yet supported for page size %lu",
-			  sectorsize, PAGE_SIZE);
+			  "blocksize %llu not yet supported for page size %lu",
+			  blocksize, PAGE_SIZE);
 		ret = -EINVAL;
 	}
 
-	if (!is_power_of_2(nodesize) || nodesize < sectorsize ||
+	if (!is_power_of_2(nodesize) || nodesize < blocksize ||
 	    nodesize > BTRFS_MAX_METADATA_BLOCKSIZE) {
 		btrfs_err(fs_info, "invalid nodesize %llu", nodesize);
 		ret = -EINVAL;
@@ -2414,17 +2414,17 @@ int btrfs_validate_super(const struct btrfs_fs_info *fs_info,
 	}
 
 	/* Root alignment check */
-	if (!IS_ALIGNED(btrfs_super_root(sb), sectorsize)) {
+	if (!IS_ALIGNED(btrfs_super_root(sb), blocksize)) {
 		btrfs_warn(fs_info, "tree_root block unaligned: %llu",
 			   btrfs_super_root(sb));
 		ret = -EINVAL;
 	}
-	if (!IS_ALIGNED(btrfs_super_chunk_root(sb), sectorsize)) {
+	if (!IS_ALIGNED(btrfs_super_chunk_root(sb), blocksize)) {
 		btrfs_warn(fs_info, "chunk_root block unaligned: %llu",
 			   btrfs_super_chunk_root(sb));
 		ret = -EINVAL;
 	}
-	if (!IS_ALIGNED(btrfs_super_log_root(sb), sectorsize)) {
+	if (!IS_ALIGNED(btrfs_super_log_root(sb), blocksize)) {
 		btrfs_warn(fs_info, "log_root block unaligned: %llu",
 			   btrfs_super_log_root(sb));
 		ret = -EINVAL;
@@ -2819,8 +2819,8 @@ void btrfs_init_fs_info(struct btrfs_fs_info *fs_info)
 
 	/* Usable values until the real ones are cached from the superblock */
 	fs_info->nodesize = 4096;
-	fs_info->sectorsize = 4096;
-	fs_info->sectorsize_bits = ilog2(4096);
+	fs_info->blocksize = 4096;
+	fs_info->blocksize_bits = ilog2(4096);
 	fs_info->stripesize = 4096;
 
 	/* Default compress algorithm when user does -o compress */
@@ -3123,10 +3123,10 @@ int btrfs_check_features(struct btrfs_fs_info *fs_info, bool is_rw_mount)
 	/*
 	 * Runtime limitation for mixed block groups.
 	 */
 	if ((incompat & BTRFS_FEATURE_INCOMPAT_MIXED_GROUPS) &&
-	    (fs_info->sectorsize != fs_info->nodesize)) {
+	    (fs_info->blocksize != fs_info->nodesize)) {
 		btrfs_err(fs_info,
-"unequal nodesize/sectorsize (%u != %u) are not allowed for mixed block groups",
-			  fs_info->nodesize, fs_info->sectorsize);
+"unequal nodesize/blocksize (%u != %u) are not allowed for mixed block groups",
+			  fs_info->nodesize, fs_info->blocksize);
 		return -EINVAL;
 	}
 
@@ -3185,10 +3185,10 @@ int btrfs_check_features(struct btrfs_fs_info *fs_info, bool is_rw_mount)
 	 * we're already defaulting to v2 cache, no need to bother v1 as it's
 	 * going to be deprecated anyway.
 	 */
-	if (fs_info->sectorsize < PAGE_SIZE && btrfs_test_opt(fs_info, SPACE_CACHE)) {
+	if (fs_info->blocksize < PAGE_SIZE && btrfs_test_opt(fs_info, SPACE_CACHE)) {
 		btrfs_warn(fs_info,
-	"v1 space cache is not supported for page size %lu with sectorsize %u",
-			   PAGE_SIZE, fs_info->sectorsize);
+	"v1 space cache is not supported for page size %lu with blocksize %u",
+			   PAGE_SIZE, fs_info->blocksize);
 		return -EINVAL;
 	}
 
@@ -3202,7 +3202,7 @@ int btrfs_check_features(struct btrfs_fs_info *fs_info, bool is_rw_mount)
 int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_devices)
 {
-	u32 sectorsize;
+	u32 blocksize;
 	u32 nodesize;
 	u32 stripesize;
 	u64 generation;
@@ -3310,15 +3310,15 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
 
 	/* Set up fs_info before parsing mount options */
 	nodesize = btrfs_super_nodesize(disk_super);
-	sectorsize = btrfs_super_sectorsize(disk_super);
-	stripesize = sectorsize;
+	blocksize = btrfs_super_sectorsize(disk_super);
+	stripesize = blocksize;
 
 	fs_info->dirty_metadata_batch = nodesize * (1 + ilog2(nr_cpu_ids));
-	fs_info->delalloc_batch = sectorsize * 512 * (1 + ilog2(nr_cpu_ids));
+	fs_info->delalloc_batch = blocksize * 512 * (1 + ilog2(nr_cpu_ids));
 
 	fs_info->nodesize = nodesize;
-	fs_info->sectorsize = sectorsize;
-	fs_info->sectorsize_bits = ilog2(sectorsize);
-	fs_info->sectors_per_page = (PAGE_SIZE >> fs_info->sectorsize_bits);
+	fs_info->blocksize = blocksize;
+	fs_info->blocksize_bits = ilog2(blocksize);
+	fs_info->blocks_per_page = (PAGE_SIZE >> fs_info->blocksize_bits);
 	fs_info->csums_per_leaf = BTRFS_MAX_ITEM_SIZE(fs_info) / fs_info->csum_size;
 	fs_info->stripesize = stripesize;
 
@@ -3339,14 +3339,14 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
 	/*
 	 * At this point our mount options are validated, if we set ->max_inline
-	 * to something non-standard make sure we truncate it to sectorsize.
+	 * to something non-standard make sure we truncate it to blocksize.
 	 */
-	fs_info->max_inline = min_t(u64, fs_info->max_inline, fs_info->sectorsize);
+	fs_info->max_inline = min_t(u64, fs_info->max_inline, fs_info->blocksize);
 
-	if (sectorsize < PAGE_SIZE)
+	if (blocksize < PAGE_SIZE)
 		btrfs_warn(fs_info,
 		"read-write for sector size %u with page size %lu is experimental",
-			   sectorsize, PAGE_SIZE);
+			   blocksize, PAGE_SIZE);
 
 	ret = btrfs_init_workqueues(fs_info);
 	if (ret)
@@ -3356,8 +3356,8 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
 	sb->s_bdi->ra_pages = max(sb->s_bdi->ra_pages, SZ_4M / PAGE_SIZE);
 
 	/* Update the values for the current filesystem. */
-	sb->s_blocksize = sectorsize;
-	sb->s_blocksize_bits = blksize_bits(sectorsize);
+	sb->s_blocksize = blocksize;
+	sb->s_blocksize_bits = blksize_bits(blocksize);
 	memcpy(&sb->s_uuid, fs_info->fs_devices->fsid, BTRFS_FSID_SIZE);
 
 	mutex_lock(&fs_info->chunk_mutex);
diff --git a/fs/btrfs/fs.h b/fs/btrfs/fs.h
index 58e6b4b953f1..9f8324ae3800 100644
--- a/fs/btrfs/fs.h
+++ b/fs/btrfs/fs.h
@@ -179,7 +179,7 @@ enum {
 	/*
 	 * Indicate that we have found a tree block which is only aligned to
-	 * sectorsize, but not to nodesize. This should be rare nowadays.
+	 * blocksize, but not to nodesize. This should be rare nowadays.
 	 */
 	BTRFS_FS_UNALIGNED_TREE_BLOCK,
 
@@ -707,7 +707,10 @@ struct btrfs_fs_info {
 	 * running.
 	 */
 	refcount_t scrub_workers_refcnt;
-	u32 sectors_per_page;
+	union {
+		u32 sectors_per_page;
+		u32 blocks_per_page;
+	};
 	struct workqueue_struct *scrub_workers;
 	struct btrfs_discard_ctl discard_ctl;
 
@@ -762,7 +765,7 @@ struct btrfs_fs_info {
 	/* Extent buffer radix tree */
 	spinlock_t buffer_lock;
-	/* Entries are eb->start / sectorsize */
+	/* Entries are eb->start / blocksize */
 	struct radix_tree_root buffer_radix;
 
 	/* Next backup root to be overwritten */
@@ -794,9 +797,15 @@ struct btrfs_fs_info {
 	/* Cached block sizes */
 	u32 nodesize;
-	u32 sectorsize;
-	/* ilog2 of sectorsize, use to avoid 64bit division */
-	u32 sectorsize_bits;
+	union {
+		u32 sectorsize;
+		u32 blocksize;
+	};
+	/* ilog2 of blocksize, use to avoid 64bit division */
+	union {
+		u32 sectorsize_bits;
+		u32 blocksize_bits;
+	};
 	u32 csum_size;
 	u32 csums_per_leaf;
 	u32 stripesize;
@@ -931,7 +940,7 @@ static inline u64 btrfs_get_last_root_drop_gen(const struct btrfs_fs_info *fs_in
 static inline u64 btrfs_csum_bytes_to_leaves(
 		const struct btrfs_fs_info *fs_info, u64 csum_bytes)
 {
-	const u64 num_csums = csum_bytes >> fs_info->sectorsize_bits;
+	const u64 num_csums = csum_bytes >> fs_info->blocksize_bits;
 
 	return DIV_ROUND_UP_ULL(num_csums, fs_info->csums_per_leaf);
 }
@@ -959,7 +968,7 @@ static inline u64 btrfs_calc_metadata_size(const struct btrfs_fs_info *fs_info,
 #define BTRFS_MAX_EXTENT_ITEM_SIZE(r) ((BTRFS_LEAF_DATA_SIZE(r->fs_info) >> 4) - \
 					sizeof(struct btrfs_item))
 
-#define BTRFS_BYTES_TO_BLKS(fs_info, bytes) ((bytes) >> (fs_info)->sectorsize_bits)
+#define BTRFS_BYTES_TO_BLKS(fs_info, bytes) ((bytes) >> (fs_info)->blocksize_bits)
 
 static inline bool btrfs_is_zoned(const struct btrfs_fs_info *fs_info)
 {

From patchwork Wed Dec 18 09:41:18 2024
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 13913301
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 02/18] btrfs: migrate subpage.[ch] to use block size terminology
Date: Wed, 18 Dec 2024 20:11:18 +1030
List-Unsubscribe: MIME-Version: 1.0 X-Spam-Level: X-Spamd-Result: default: False [-2.80 / 50.00]; BAYES_HAM(-3.00)[100.00%]; MID_CONTAINS_FROM(1.00)[]; NEURAL_HAM_LONG(-1.00)[-1.000]; R_MISSING_CHARSET(0.50)[]; NEURAL_HAM_SHORT(-0.20)[-1.000]; MIME_GOOD(-0.10)[text/plain]; FUZZY_BLOCKED(0.00)[rspamd.com]; RCVD_VIA_SMTP_AUTH(0.00)[]; RCPT_COUNT_ONE(0.00)[1]; ARC_NA(0.00)[]; DKIM_SIGNED(0.00)[suse.com:s=susede1]; DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo,suse.com:email,suse.com:mid]; FROM_EQ_ENVFROM(0.00)[]; FROM_HAS_DN(0.00)[]; MIME_TRACE(0.00)[0:+]; RCVD_COUNT_TWO(0.00)[2]; TO_MATCH_ENVRCPT_ALL(0.00)[]; TO_DN_NONE(0.00)[]; PREVIOUSLY_DELIVERED(0.00)[linux-btrfs@vger.kernel.org]; RCVD_TLS_ALL(0.00)[] X-Spam-Score: -2.80 X-Spam-Flag: NO Straightforward rename from "sector" to "block". Signed-off-by: Qu Wenruo --- fs/btrfs/subpage.c | 92 +++++++++++++++++++++++----------------------- fs/btrfs/subpage.h | 8 ++-- 2 files changed, 50 insertions(+), 50 deletions(-) diff --git a/fs/btrfs/subpage.c b/fs/btrfs/subpage.c index 8c68059ac1b0..c37e24c11e21 100644 --- a/fs/btrfs/subpage.c +++ b/fs/btrfs/subpage.c @@ -7,7 +7,7 @@ #include "btrfs_inode.h" /* - * Subpage (sectorsize < PAGE_SIZE) support overview: + * Subpage (blocksize < PAGE_SIZE) support overview: * * Limitations: * @@ -51,7 +51,7 @@ * * - Common * Both metadata and data will use a new structure, btrfs_subpage, to - * record the status of each sector inside a page. This provides the extra + * record the status of each block inside a page. This provides the extra * granularity needed. * * - Metadata @@ -67,13 +67,13 @@ #if PAGE_SIZE > SZ_4K bool btrfs_is_subpage(const struct btrfs_fs_info *fs_info, struct address_space *mapping) { - if (fs_info->sectorsize >= PAGE_SIZE) + if (fs_info->blocksize >= PAGE_SIZE) return false; /* * Only data pages (either through DIO or compression) can have no * mapping. And if page->mapping->host is data inode, it's subpage. 
- * As we have ruled our sectorsize >= PAGE_SIZE case already. + * As we have ruled our blocksize >= PAGE_SIZE case already. */ if (!mapping || !mapping->host || is_data_inode(BTRFS_I(mapping->host))) return true; @@ -131,10 +131,10 @@ struct btrfs_subpage *btrfs_alloc_subpage(const struct btrfs_fs_info *fs_info, struct btrfs_subpage *ret; unsigned int real_size; - ASSERT(fs_info->sectorsize < PAGE_SIZE); + ASSERT(fs_info->blocksize < PAGE_SIZE); real_size = struct_size(ret, bitmaps, - BITS_TO_LONGS(btrfs_bitmap_nr_max * fs_info->sectors_per_page)); + BITS_TO_LONGS(btrfs_bitmap_nr_max * fs_info->blocks_per_page)); ret = kzalloc(real_size, GFP_NOFS); if (!ret) return ERR_PTR(-ENOMEM); @@ -198,8 +198,8 @@ static void btrfs_subpage_assert(const struct btrfs_fs_info *fs_info, /* Basic checks */ ASSERT(folio_test_private(folio) && folio_get_private(folio)); - ASSERT(IS_ALIGNED(start, fs_info->sectorsize) && - IS_ALIGNED(len, fs_info->sectorsize)); + ASSERT(IS_ALIGNED(start, fs_info->blocksize) && + IS_ALIGNED(len, fs_info->blocksize)); /* * The range check only works for mapped page, we can still have * unmapped page like dummy extent buffer pages. 
@@ -214,8 +214,8 @@ static void btrfs_subpage_assert(const struct btrfs_fs_info *fs_info, unsigned int __start_bit; \ \ btrfs_subpage_assert(fs_info, folio, start, len); \ - __start_bit = offset_in_page(start) >> fs_info->sectorsize_bits; \ - __start_bit += fs_info->sectors_per_page * btrfs_bitmap_nr_##name; \ + __start_bit = offset_in_page(start) >> fs_info->blocksize_bits; \ + __start_bit += fs_info->blocks_per_page * btrfs_bitmap_nr_##name; \ __start_bit; \ }) @@ -242,7 +242,7 @@ static bool btrfs_subpage_end_and_test_lock(const struct btrfs_fs_info *fs_info, { struct btrfs_subpage *subpage = folio_get_private(folio); const int start_bit = subpage_calc_start_bit(fs_info, folio, locked, start, len); - const int nbits = (len >> fs_info->sectorsize_bits); + const int nbits = (len >> fs_info->blocksize_bits); unsigned long flags; unsigned int cleared = 0; int bit = start_bit; @@ -285,7 +285,7 @@ static bool btrfs_subpage_end_and_test_lock(const struct btrfs_fs_info *fs_info, * We can simple unlock it. * * - folio locked with subpage range locked. - * We go through the locked sectors inside the range and clear their locked + * We go through the locked blocks inside the range and clear their locked * bitmap, reduce the writer lock number, and unlock the page if that's * the last locked range. 
*/ @@ -323,7 +323,7 @@ void btrfs_folio_end_lock_bitmap(const struct btrfs_fs_info *fs_info, struct folio *folio, unsigned long bitmap) { struct btrfs_subpage *subpage = folio_get_private(folio); - const int start_bit = fs_info->sectors_per_page * btrfs_bitmap_nr_locked; + const int start_bit = fs_info->blocks_per_page * btrfs_bitmap_nr_locked; unsigned long flags; bool last = false; int cleared = 0; @@ -341,7 +341,7 @@ void btrfs_folio_end_lock_bitmap(const struct btrfs_fs_info *fs_info, } spin_lock_irqsave(&subpage->lock, flags); - for_each_set_bit(bit, &bitmap, fs_info->sectors_per_page) { + for_each_set_bit(bit, &bitmap, fs_info->blocks_per_page) { if (test_and_clear_bit(bit + start_bit, subpage->bitmaps)) cleared++; } @@ -354,13 +354,13 @@ void btrfs_folio_end_lock_bitmap(const struct btrfs_fs_info *fs_info, #define subpage_test_bitmap_all_set(fs_info, subpage, name) \ bitmap_test_range_all_set(subpage->bitmaps, \ - fs_info->sectors_per_page * btrfs_bitmap_nr_##name, \ - fs_info->sectors_per_page) + fs_info->blocks_per_page * btrfs_bitmap_nr_##name, \ + fs_info->blocks_per_page) #define subpage_test_bitmap_all_zero(fs_info, subpage, name) \ bitmap_test_range_all_zero(subpage->bitmaps, \ - fs_info->sectors_per_page * btrfs_bitmap_nr_##name, \ - fs_info->sectors_per_page) + fs_info->blocks_per_page * btrfs_bitmap_nr_##name, \ + fs_info->blocks_per_page) void btrfs_subpage_set_uptodate(const struct btrfs_fs_info *fs_info, struct folio *folio, u64 start, u32 len) @@ -371,7 +371,7 @@ void btrfs_subpage_set_uptodate(const struct btrfs_fs_info *fs_info, unsigned long flags; spin_lock_irqsave(&subpage->lock, flags); - bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); + bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->blocksize_bits); if (subpage_test_bitmap_all_set(fs_info, subpage, uptodate)) folio_mark_uptodate(folio); spin_unlock_irqrestore(&subpage->lock, flags); @@ -386,7 +386,7 @@ void btrfs_subpage_clear_uptodate(const struct 
btrfs_fs_info *fs_info, unsigned long flags; spin_lock_irqsave(&subpage->lock, flags); - bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); + bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->blocksize_bits); folio_clear_uptodate(folio); spin_unlock_irqrestore(&subpage->lock, flags); } @@ -400,7 +400,7 @@ void btrfs_subpage_set_dirty(const struct btrfs_fs_info *fs_info, unsigned long flags; spin_lock_irqsave(&subpage->lock, flags); - bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); + bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->blocksize_bits); spin_unlock_irqrestore(&subpage->lock, flags); folio_mark_dirty(folio); } @@ -425,7 +425,7 @@ bool btrfs_subpage_clear_and_test_dirty(const struct btrfs_fs_info *fs_info, bool last = false; spin_lock_irqsave(&subpage->lock, flags); - bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); + bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->blocksize_bits); if (subpage_test_bitmap_all_zero(fs_info, subpage, dirty)) last = true; spin_unlock_irqrestore(&subpage->lock, flags); @@ -451,7 +451,7 @@ void btrfs_subpage_set_writeback(const struct btrfs_fs_info *fs_info, unsigned long flags; spin_lock_irqsave(&subpage->lock, flags); - bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); + bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->blocksize_bits); if (!folio_test_writeback(folio)) folio_start_writeback(folio); spin_unlock_irqrestore(&subpage->lock, flags); @@ -466,7 +466,7 @@ void btrfs_subpage_clear_writeback(const struct btrfs_fs_info *fs_info, unsigned long flags; spin_lock_irqsave(&subpage->lock, flags); - bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); + bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->blocksize_bits); if (subpage_test_bitmap_all_zero(fs_info, subpage, writeback)) { ASSERT(folio_test_writeback(folio)); folio_end_writeback(folio); @@ -483,7 +483,7 
@@ void btrfs_subpage_set_ordered(const struct btrfs_fs_info *fs_info, unsigned long flags; spin_lock_irqsave(&subpage->lock, flags); - bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); + bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->blocksize_bits); folio_set_ordered(folio); spin_unlock_irqrestore(&subpage->lock, flags); } @@ -497,7 +497,7 @@ void btrfs_subpage_clear_ordered(const struct btrfs_fs_info *fs_info, unsigned long flags; spin_lock_irqsave(&subpage->lock, flags); - bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); + bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->blocksize_bits); if (subpage_test_bitmap_all_zero(fs_info, subpage, ordered)) folio_clear_ordered(folio); spin_unlock_irqrestore(&subpage->lock, flags); @@ -512,7 +512,7 @@ void btrfs_subpage_set_checked(const struct btrfs_fs_info *fs_info, unsigned long flags; spin_lock_irqsave(&subpage->lock, flags); - bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); + bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->blocksize_bits); if (subpage_test_bitmap_all_set(fs_info, subpage, checked)) folio_set_checked(folio); spin_unlock_irqrestore(&subpage->lock, flags); @@ -527,7 +527,7 @@ void btrfs_subpage_clear_checked(const struct btrfs_fs_info *fs_info, unsigned long flags; spin_lock_irqsave(&subpage->lock, flags); - bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); + bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->blocksize_bits); folio_clear_checked(folio); spin_unlock_irqrestore(&subpage->lock, flags); } @@ -548,7 +548,7 @@ bool btrfs_subpage_test_##name(const struct btrfs_fs_info *fs_info, \ \ spin_lock_irqsave(&subpage->lock, flags); \ ret = bitmap_test_range_all_set(subpage->bitmaps, start_bit, \ - len >> fs_info->sectorsize_bits); \ + len >> fs_info->blocksize_bits); \ spin_unlock_irqrestore(&subpage->lock, flags); \ return ret; \ } @@ -560,8 +560,8 @@ 
IMPLEMENT_BTRFS_SUBPAGE_TEST_OP(checked); /* * Note that, in selftests (extent-io-tests), we can have empty fs_info passed - * in. We only test sectorsize == PAGE_SIZE cases so far, thus we can fall - * back to regular sectorsize branch. + * in. We only test blocksize == PAGE_SIZE cases so far, thus we can fall + * back to regular blocksize branch. */ #define IMPLEMENT_BTRFS_PAGE_OPS(name, folio_set_func, \ folio_clear_func, folio_test_func) \ @@ -656,7 +656,7 @@ void btrfs_folio_assert_not_dirty(const struct btrfs_fs_info *fs_info, } start_bit = subpage_calc_start_bit(fs_info, folio, dirty, start, len); - nbits = len >> fs_info->sectorsize_bits; + nbits = len >> fs_info->blocksize_bits; subpage = folio_get_private(folio); ASSERT(subpage); spin_lock_irqsave(&subpage->lock, flags); @@ -686,31 +686,31 @@ void btrfs_folio_set_lock(const struct btrfs_fs_info *fs_info, subpage = folio_get_private(folio); start_bit = subpage_calc_start_bit(fs_info, folio, locked, start, len); - nbits = len >> fs_info->sectorsize_bits; + nbits = len >> fs_info->blocksize_bits; spin_lock_irqsave(&subpage->lock, flags); /* Target range should not yet be locked. 
*/ ASSERT(bitmap_test_range_all_zero(subpage->bitmaps, start_bit, nbits)); bitmap_set(subpage->bitmaps, start_bit, nbits); ret = atomic_add_return(nbits, &subpage->nr_locked); - ASSERT(ret <= fs_info->sectors_per_page); + ASSERT(ret <= fs_info->blocks_per_page); spin_unlock_irqrestore(&subpage->lock, flags); } #define GET_SUBPAGE_BITMAP(subpage, fs_info, name, dst) \ { \ - const int sectors_per_page = fs_info->sectors_per_page; \ + const int blocks_per_page = fs_info->blocks_per_page; \ \ - ASSERT(sectors_per_page < BITS_PER_LONG); \ + ASSERT(blocks_per_page < BITS_PER_LONG); \ *dst = bitmap_read(subpage->bitmaps, \ - sectors_per_page * btrfs_bitmap_nr_##name, \ - sectors_per_page); \ + blocks_per_page * btrfs_bitmap_nr_##name, \ + blocks_per_page); \ } void __cold btrfs_subpage_dump_bitmap(const struct btrfs_fs_info *fs_info, struct folio *folio, u64 start, u32 len) { struct btrfs_subpage *subpage; - const u32 sectors_per_page = fs_info->sectors_per_page; + const u32 blocks_per_page = fs_info->blocks_per_page; unsigned long uptodate_bitmap; unsigned long dirty_bitmap; unsigned long writeback_bitmap; @@ -719,7 +719,7 @@ void __cold btrfs_subpage_dump_bitmap(const struct btrfs_fs_info *fs_info, unsigned long flags; ASSERT(folio_test_private(folio) && folio_get_private(folio)); - ASSERT(sectors_per_page > 1); + ASSERT(blocks_per_page > 1); subpage = folio_get_private(folio); spin_lock_irqsave(&subpage->lock, flags); @@ -735,11 +735,11 @@ void __cold btrfs_subpage_dump_bitmap(const struct btrfs_fs_info *fs_info, btrfs_warn(fs_info, "start=%llu len=%u page=%llu, bitmaps uptodate=%*pbl dirty=%*pbl writeback=%*pbl ordered=%*pbl checked=%*pbl", start, len, folio_pos(folio), - sectors_per_page, &uptodate_bitmap, - sectors_per_page, &dirty_bitmap, - sectors_per_page, &writeback_bitmap, - sectors_per_page, &ordered_bitmap, - sectors_per_page, &checked_bitmap); + blocks_per_page, &uptodate_bitmap, + blocks_per_page, &dirty_bitmap, + blocks_per_page, &writeback_bitmap, + 
blocks_per_page, &ordered_bitmap, + blocks_per_page, &checked_bitmap); } void btrfs_get_subpage_dirty_bitmap(struct btrfs_fs_info *fs_info, @@ -750,7 +750,7 @@ void btrfs_get_subpage_dirty_bitmap(struct btrfs_fs_info *fs_info, unsigned long flags; ASSERT(folio_test_private(folio) && folio_get_private(folio)); - ASSERT(fs_info->sectors_per_page > 1); + ASSERT(fs_info->blocks_per_page > 1); subpage = folio_get_private(folio); spin_lock_irqsave(&subpage->lock, flags); diff --git a/fs/btrfs/subpage.h b/fs/btrfs/subpage.h index 428fa9389fd4..c223cdfa6056 100644 --- a/fs/btrfs/subpage.h +++ b/fs/btrfs/subpage.h @@ -23,7 +23,7 @@ struct btrfs_fs_info; * | | | * v v v * |u|u|u|u|........|u|u|d|d|.......|d|d|o|o|.......|o|o| - * |< sectors_per_page >| + * |< blocks_per_page >| * * Unlike regular macro-like enums, here we do not go upper-case names, as * these names will be utilized in various macros to define function names. @@ -39,7 +39,7 @@ enum { }; /* - * Structure to trace status of each sector inside a page, attached to + * Structure to trace status of each block inside a page, attached to * page::private for both data and metadata inodes. */ struct btrfs_subpage { @@ -57,7 +57,7 @@ struct btrfs_subpage { /* * Structures only used by data, * - * How many sectors inside the page is locked. + * How many blocks inside the page are locked. 
*/ atomic_t nr_locked; }; @@ -83,7 +83,7 @@ int btrfs_attach_subpage(const struct btrfs_fs_info *fs_info, struct folio *folio, enum btrfs_subpage_type type); void btrfs_detach_subpage(const struct btrfs_fs_info *fs_info, struct folio *folio); -/* Allocate additional data where page represents more than one sector */ +/* Allocate additional data where page represents more than one block */ struct btrfs_subpage *btrfs_alloc_subpage(const struct btrfs_fs_info *fs_info, enum btrfs_subpage_type type); void btrfs_free_subpage(struct btrfs_subpage *subpage);

From patchwork Wed Dec 18 09:41:19 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 13913307
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 03/18] btrfs: migrate tree-checker.c to use block size terminology
Date: Wed, 18 Dec 2024 20:11:19 +1030
Message-ID: <33b4832760bb9b224b44e29fa6e22e05c1d9ae77.1734514696.git.wqu@suse.com>

Straightforward rename from "sector" to "block", except the btrfs_chunk_sector_size() usage, which will be kept as is. We will keep "sector" to describe the minimal IO unit for a block device.

Signed-off-by: Qu Wenruo
---
 fs/btrfs/tree-checker.c | 100 ++++++++++++++++++++--------------------
 1 file changed, 50 insertions(+), 50 deletions(-)

diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
index dfeee033f31f..7864a096f709 100644
--- a/fs/btrfs/tree-checker.c
+++ b/fs/btrfs/tree-checker.c
@@ -126,7 +126,7 @@ static u64 file_extent_end(struct extent_buffer *leaf, if (btrfs_file_extent_type(leaf, extent) == BTRFS_FILE_EXTENT_INLINE) { len = btrfs_file_extent_ram_bytes(leaf, extent); - end = ALIGN(key->offset + len, leaf->fs_info->sectorsize); + end = ALIGN(key->offset + len, leaf->fs_info->blocksize); } else { len = btrfs_file_extent_num_bytes(leaf, extent); end = key->offset + len; @@ -209,14 +209,14 @@ static int check_extent_data_item(struct extent_buffer *leaf, { struct btrfs_fs_info *fs_info = leaf->fs_info; struct btrfs_file_extent_item *fi; - u32 sectorsize = fs_info->sectorsize; + u32 blocksize = fs_info->blocksize; u32 item_size = btrfs_item_size(leaf, slot); u64 extent_end; - if (unlikely(!IS_ALIGNED(key->offset, sectorsize))) { + if (unlikely(!IS_ALIGNED(key->offset, blocksize))) { file_extent_err(leaf, slot, "unaligned file_offset for file extent, have %llu should be aligned to %u", - key->offset, sectorsize); + key->offset, blocksize); return -EUCLEAN; } @@ -302,11 +302,11 @@ static int check_extent_data_item(struct extent_buffer *leaf, item_size, sizeof(*fi)); return -EUCLEAN; } - if (unlikely(CHECK_FE_ALIGNED(leaf, slot, fi, ram_bytes, sectorsize) || - CHECK_FE_ALIGNED(leaf, slot, fi, disk_bytenr, sectorsize) || - CHECK_FE_ALIGNED(leaf, slot, fi, disk_num_bytes, sectorsize) || - CHECK_FE_ALIGNED(leaf, slot, fi, offset, sectorsize) || - CHECK_FE_ALIGNED(leaf, slot, fi, num_bytes, sectorsize))) + if
(unlikely(CHECK_FE_ALIGNED(leaf, slot, fi, ram_bytes, blocksize) || + CHECK_FE_ALIGNED(leaf, slot, fi, disk_bytenr, blocksize) || + CHECK_FE_ALIGNED(leaf, slot, fi, disk_num_bytes, blocksize) || + CHECK_FE_ALIGNED(leaf, slot, fi, offset, blocksize) || + CHECK_FE_ALIGNED(leaf, slot, fi, num_bytes, blocksize))) return -EUCLEAN; /* Catch extent end overflow */ @@ -365,7 +365,7 @@ static int check_csum_item(struct extent_buffer *leaf, struct btrfs_key *key, int slot, struct btrfs_key *prev_key) { struct btrfs_fs_info *fs_info = leaf->fs_info; - u32 sectorsize = fs_info->sectorsize; + u32 blocksize = fs_info->blocksize; const u32 csumsize = fs_info->csum_size; if (unlikely(key->objectid != BTRFS_EXTENT_CSUM_OBJECTID)) { @@ -374,10 +374,10 @@ static int check_csum_item(struct extent_buffer *leaf, struct btrfs_key *key, key->objectid, BTRFS_EXTENT_CSUM_OBJECTID); return -EUCLEAN; } - if (unlikely(!IS_ALIGNED(key->offset, sectorsize))) { + if (unlikely(!IS_ALIGNED(key->offset, blocksize))) { generic_err(leaf, slot, "unaligned key offset for csum item, have %llu should be aligned to %u", - key->offset, sectorsize); + key->offset, blocksize); return -EUCLEAN; } if (unlikely(!IS_ALIGNED(btrfs_item_size(leaf, slot), csumsize))) { @@ -391,7 +391,7 @@ static int check_csum_item(struct extent_buffer *leaf, struct btrfs_key *key, u32 prev_item_size; prev_item_size = btrfs_item_size(leaf, slot - 1); - prev_csum_end = (prev_item_size / csumsize) * sectorsize; + prev_csum_end = (prev_item_size / csumsize) * blocksize; prev_csum_end += prev_key->offset; if (unlikely(prev_csum_end > key->offset)) { generic_err(leaf, slot - 1, @@ -857,20 +857,20 @@ int btrfs_check_chunk_valid(struct extent_buffer *leaf, num_stripes, nparity); return -EUCLEAN; } - if (unlikely(!IS_ALIGNED(logical, fs_info->sectorsize))) { + if (unlikely(!IS_ALIGNED(logical, fs_info->blocksize))) { chunk_err(leaf, chunk, logical, "invalid chunk logical, have %llu should aligned to %u", - logical, fs_info->sectorsize); + 
logical, fs_info->blocksize); return -EUCLEAN; } - if (unlikely(btrfs_chunk_sector_size(leaf, chunk) != fs_info->sectorsize)) { + if (unlikely(btrfs_chunk_sector_size(leaf, chunk) != fs_info->blocksize)) { chunk_err(leaf, chunk, logical, - "invalid chunk sectorsize, have %u expect %u", + "invalid chunk blocksize, have %u expect %u", btrfs_chunk_sector_size(leaf, chunk), - fs_info->sectorsize); + fs_info->blocksize); return -EUCLEAN; } - if (unlikely(!length || !IS_ALIGNED(length, fs_info->sectorsize))) { + if (unlikely(!length || !IS_ALIGNED(length, fs_info->blocksize))) { chunk_err(leaf, chunk, logical, "invalid chunk length, have %llu", length); return -EUCLEAN; @@ -1229,10 +1229,10 @@ static int check_root_item(struct extent_buffer *leaf, struct btrfs_key *key, } /* Alignment and level check */ - if (unlikely(!IS_ALIGNED(btrfs_root_bytenr(&ri), fs_info->sectorsize))) { + if (unlikely(!IS_ALIGNED(btrfs_root_bytenr(&ri), fs_info->blocksize))) { generic_err(leaf, slot, "invalid root bytenr, have %llu expect to be aligned to %u", - btrfs_root_bytenr(&ri), fs_info->sectorsize); + btrfs_root_bytenr(&ri), fs_info->blocksize); return -EUCLEAN; } if (unlikely(btrfs_root_level(&ri) >= BTRFS_MAX_LEVEL)) { @@ -1327,10 +1327,10 @@ static int check_extent_item(struct extent_buffer *leaf, return -EUCLEAN; } /* key->objectid is the bytenr for both key types */ - if (unlikely(!IS_ALIGNED(key->objectid, fs_info->sectorsize))) { + if (unlikely(!IS_ALIGNED(key->objectid, fs_info->blocksize))) { generic_err(leaf, slot, "invalid key objectid, have %llu expect to be aligned to %u", - key->objectid, fs_info->sectorsize); + key->objectid, fs_info->blocksize); return -EUCLEAN; } @@ -1420,10 +1420,10 @@ static int check_extent_item(struct extent_buffer *leaf, key->type, BTRFS_EXTENT_ITEM_KEY); return -EUCLEAN; } - if (unlikely(!IS_ALIGNED(key->offset, fs_info->sectorsize))) { + if (unlikely(!IS_ALIGNED(key->offset, fs_info->blocksize))) { extent_err(leaf, slot, "invalid extent length, 
have %llu expect aligned to %u", - key->offset, fs_info->sectorsize); + key->offset, fs_info->blocksize); return -EUCLEAN; } if (unlikely(flags & BTRFS_BLOCK_FLAG_FULL_BACKREF)) { @@ -1486,10 +1486,10 @@ static int check_extent_item(struct extent_buffer *leaf, /* Contains parent bytenr */ case BTRFS_SHARED_BLOCK_REF_KEY: if (unlikely(!IS_ALIGNED(inline_offset, - fs_info->sectorsize))) { + fs_info->blocksize))) { extent_err(leaf, slot, "invalid tree parent bytenr, have %llu expect aligned to %u", - inline_offset, fs_info->sectorsize); + inline_offset, fs_info->blocksize); return -EUCLEAN; } inline_refs++; @@ -1521,10 +1521,10 @@ static int check_extent_item(struct extent_buffer *leaf, return -EUCLEAN; } if (unlikely(!IS_ALIGNED(dref_offset, - fs_info->sectorsize))) { + fs_info->blocksize))) { extent_err(leaf, slot, "invalid data ref offset, have %llu expect aligned to %u", - dref_offset, fs_info->sectorsize); + dref_offset, fs_info->blocksize); return -EUCLEAN; } if (unlikely(btrfs_extent_data_ref_count(leaf, dref) == 0)) { @@ -1538,10 +1538,10 @@ static int check_extent_item(struct extent_buffer *leaf, case BTRFS_SHARED_DATA_REF_KEY: sref = (struct btrfs_shared_data_ref *)(iref + 1); if (unlikely(!IS_ALIGNED(inline_offset, - fs_info->sectorsize))) { + fs_info->blocksize))) { extent_err(leaf, slot, "invalid data parent bytenr, have %llu expect aligned to %u", - inline_offset, fs_info->sectorsize); + inline_offset, fs_info->blocksize); return -EUCLEAN; } if (unlikely(btrfs_shared_data_ref_count(leaf, sref) == 0)) { @@ -1641,17 +1641,17 @@ static int check_simple_keyed_refs(struct extent_buffer *leaf, expect_item_size, key->type); return -EUCLEAN; } - if (unlikely(!IS_ALIGNED(key->objectid, leaf->fs_info->sectorsize))) { + if (unlikely(!IS_ALIGNED(key->objectid, leaf->fs_info->blocksize))) { generic_err(leaf, slot, "invalid key objectid for shared block ref, have %llu expect aligned to %u", - key->objectid, leaf->fs_info->sectorsize); + key->objectid, 
leaf->fs_info->blocksize); return -EUCLEAN; } if (unlikely(key->type != BTRFS_TREE_BLOCK_REF_KEY && - !IS_ALIGNED(key->offset, leaf->fs_info->sectorsize))) { + !IS_ALIGNED(key->offset, leaf->fs_info->blocksize))) { extent_err(leaf, slot, "invalid tree parent bytenr, have %llu expect aligned to %u", - key->offset, leaf->fs_info->sectorsize); + key->offset, leaf->fs_info->blocksize); return -EUCLEAN; } return 0; @@ -1671,10 +1671,10 @@ static int check_extent_data_ref(struct extent_buffer *leaf, sizeof(*dref), key->type); return -EUCLEAN; } - if (unlikely(!IS_ALIGNED(key->objectid, leaf->fs_info->sectorsize))) { + if (unlikely(!IS_ALIGNED(key->objectid, leaf->fs_info->blocksize))) { generic_err(leaf, slot, "invalid key objectid for shared block ref, have %llu expect aligned to %u", - key->objectid, leaf->fs_info->sectorsize); + key->objectid, leaf->fs_info->blocksize); return -EUCLEAN; } for (; ptr < end; ptr += sizeof(*dref)) { @@ -1703,10 +1703,10 @@ static int check_extent_data_ref(struct extent_buffer *leaf, root); return -EUCLEAN; } - if (unlikely(!IS_ALIGNED(offset, leaf->fs_info->sectorsize))) { + if (unlikely(!IS_ALIGNED(offset, leaf->fs_info->blocksize))) { extent_err(leaf, slot, "invalid extent data backref offset, have %llu expect aligned to %u", - offset, leaf->fs_info->sectorsize); + offset, leaf->fs_info->blocksize); return -EUCLEAN; } if (unlikely(btrfs_extent_data_ref_count(leaf, dref) == 0)) { @@ -1773,10 +1773,10 @@ static int check_inode_ref(struct extent_buffer *leaf, static int check_raid_stripe_extent(const struct extent_buffer *leaf, const struct btrfs_key *key, int slot) { - if (unlikely(!IS_ALIGNED(key->objectid, leaf->fs_info->sectorsize))) { + if (unlikely(!IS_ALIGNED(key->objectid, leaf->fs_info->blocksize))) { generic_err(leaf, slot, "invalid key objectid for raid stripe extent, have %llu expect aligned to %u", - key->objectid, leaf->fs_info->sectorsize); + key->objectid, leaf->fs_info->blocksize); return -EUCLEAN; } @@ -1795,7 +1795,7 @@ 
static int check_dev_extent_item(const struct extent_buffer *leaf, struct btrfs_key *prev_key) { struct btrfs_dev_extent *de; - const u32 sectorsize = leaf->fs_info->sectorsize; + const u32 blocksize = leaf->fs_info->blocksize; de = btrfs_item_ptr(leaf, slot, struct btrfs_dev_extent); /* Basic fixed member checks. */ @@ -1816,25 +1816,25 @@ static int check_dev_extent_item(const struct extent_buffer *leaf, return -EUCLEAN; } /* Alignment check. */ - if (unlikely(!IS_ALIGNED(key->offset, sectorsize))) { + if (unlikely(!IS_ALIGNED(key->offset, blocksize))) { generic_err(leaf, slot, "invalid dev extent key.offset, has %llu not aligned to %u", - key->offset, sectorsize); + key->offset, blocksize); return -EUCLEAN; } if (unlikely(!IS_ALIGNED(btrfs_dev_extent_chunk_offset(leaf, de), - sectorsize))) { + blocksize))) { generic_err(leaf, slot, "invalid dev extent chunk offset, has %llu not aligned to %u", btrfs_dev_extent_chunk_objectid(leaf, de), - sectorsize); + blocksize); return -EUCLEAN; } if (unlikely(!IS_ALIGNED(btrfs_dev_extent_length(leaf, de), - sectorsize))) { + blocksize))) { generic_err(leaf, slot, "invalid dev extent length, has %llu not aligned to %u", - btrfs_dev_extent_length(leaf, de), sectorsize); + btrfs_dev_extent_length(leaf, de), blocksize); return -EUCLEAN; } /* Overlap check with previous dev extent. 
*/ @@ -2123,10 +2123,10 @@ enum btrfs_tree_block_status __btrfs_check_node(struct extent_buffer *node) "invalid NULL node pointer"); return BTRFS_TREE_BLOCK_INVALID_BLOCKPTR; } - if (unlikely(!IS_ALIGNED(bytenr, fs_info->sectorsize))) { + if (unlikely(!IS_ALIGNED(bytenr, fs_info->blocksize))) { generic_err(node, slot, "unaligned pointer, have %llu should be aligned to %u", - bytenr, fs_info->sectorsize); + bytenr, fs_info->blocksize); return BTRFS_TREE_BLOCK_INVALID_BLOCKPTR; }

From patchwork Wed Dec 18 09:41:20 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 13913303
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 04/18] btrfs: migrate scrub.c to use block size terminology
Date: Wed, 18 Dec 2024 20:11:20 +1030
Message-ID: <101d3ec7127ae1c30ddc20b53d43d2a5c70a9abf.1734514696.git.wqu@suse.com>

Mostly straightforward rename from "sector" to "block", except bio interfaces. Also rename the macro SCRUB_MAX_SECTORS_PER_BLOCK to SCRUB_MAX_BLOCKS_PER_TREE_BLOCK. 
Signed-off-by: Qu Wenruo --- fs/btrfs/scrub.c | 442 +++++++++++++++++++++++------------------------ 1 file changed, 221 insertions(+), 221 deletions(-) diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c index 204c928beaf9..5cec0875a707 100644 --- a/fs/btrfs/scrub.c +++ b/fs/btrfs/scrub.c @@ -60,25 +60,25 @@ struct scrub_ctx; /* * The following value times PAGE_SIZE needs to be large enough to match the - * largest node/leaf/sector size that shall be supported. + * largest node/leaf/block size that shall be supported. */ -#define SCRUB_MAX_SECTORS_PER_BLOCK (BTRFS_MAX_METADATA_BLOCKSIZE / SZ_4K) +#define SCRUB_MAX_BLOCKS_PER_TREE_BLOCK (BTRFS_MAX_METADATA_BLOCKSIZE / SZ_4K) -/* Represent one sector and its needed info to verify the content. */ -struct scrub_sector_verification { +/* Represent one block and its needed info to verify the content. */ +struct scrub_block_verification { bool is_metadata; union { /* * Csum pointer for data csum verification. Should point to a - * sector csum inside scrub_stripe::csums. + * block csum inside scrub_stripe::csums. * - * NULL if this data sector has no csum. + * NULL if this data block has no csum. */ u8 *csum; /* - * Extra info for metadata verification. All sectors inside a + * Extra info for metadata verification. All blocks inside a * tree block share the same generation. */ u64 generation; @@ -110,7 +110,7 @@ struct scrub_stripe { struct btrfs_block_group *bg; struct page *pages[SCRUB_STRIPE_PAGES]; - struct scrub_sector_verification *sectors; + struct scrub_block_verification *blocks; struct btrfs_device *dev; u64 logical; @@ -118,8 +118,8 @@ struct scrub_stripe { u16 mirror_num; - /* Should be BTRFS_STRIPE_LEN / sectorsize. */ - u16 nr_sectors; + /* Should be BTRFS_STRIPE_LEN / blocksize. */ + u16 nr_blocks; /* * How many data/meta extents are in this stripe. Only for scrub status @@ -138,8 +138,8 @@ struct scrub_stripe { */ unsigned long state; - /* Indicate which sectors are covered by extent items. 
*/ - unsigned long extent_sector_bitmap; + /* Indicate which blocks are covered by extent items. */ + unsigned long extent_block_bitmap; /* * The errors hit during the initial read of the stripe. @@ -238,9 +238,9 @@ static void release_scrub_stripe(struct scrub_stripe *stripe) __free_page(stripe->pages[i]); stripe->pages[i] = NULL; } - kfree(stripe->sectors); + kfree(stripe->blocks); kfree(stripe->csums); - stripe->sectors = NULL; + stripe->blocks = NULL; stripe->csums = NULL; stripe->sctx = NULL; stripe->state = 0; @@ -253,7 +253,7 @@ static int init_scrub_stripe(struct btrfs_fs_info *fs_info, memset(stripe, 0, sizeof(*stripe)); - stripe->nr_sectors = BTRFS_STRIPE_LEN >> fs_info->sectorsize_bits; + stripe->nr_blocks = BTRFS_STRIPE_LEN >> fs_info->blocksize_bits; stripe->state = 0; init_waitqueue_head(&stripe->io_wait); @@ -265,13 +265,13 @@ static int init_scrub_stripe(struct btrfs_fs_info *fs_info, if (ret < 0) goto error; - stripe->sectors = kcalloc(stripe->nr_sectors, - sizeof(struct scrub_sector_verification), + stripe->blocks = kcalloc(stripe->nr_blocks, + sizeof(struct scrub_block_verification), GFP_KERNEL); - if (!stripe->sectors) + if (!stripe->blocks) goto error; - stripe->csums = kcalloc(BTRFS_STRIPE_LEN >> fs_info->sectorsize_bits, + stripe->csums = kcalloc(BTRFS_STRIPE_LEN >> fs_info->blocksize_bits, fs_info->csum_size, GFP_KERNEL); if (!stripe->csums) goto error; @@ -456,7 +456,7 @@ static int scrub_print_warning_inode(u64 inum, u64 offset, u64 num_bytes, btrfs_dev_name(swarn->dev), swarn->physical, root, inum, offset, - fs_info->sectorsize, nlink, + fs_info->blocksize, nlink, (char *)(unsigned long)ipath->fspath->val[i]); btrfs_put_root(local_root); @@ -579,29 +579,29 @@ static int fill_writer_pointer_gap(struct scrub_ctx *sctx, u64 physical) return ret; } -static struct page *scrub_stripe_get_page(struct scrub_stripe *stripe, int sector_nr) +static struct page *scrub_stripe_get_page(struct scrub_stripe *stripe, int block_nr) { struct btrfs_fs_info 
*fs_info = stripe->bg->fs_info; - int page_index = (sector_nr << fs_info->sectorsize_bits) >> PAGE_SHIFT; + int page_index = (block_nr << fs_info->blocksize_bits) >> PAGE_SHIFT; return stripe->pages[page_index]; } static unsigned int scrub_stripe_get_page_offset(struct scrub_stripe *stripe, - int sector_nr) + int block_nr) { struct btrfs_fs_info *fs_info = stripe->bg->fs_info; - return offset_in_page(sector_nr << fs_info->sectorsize_bits); + return offset_in_page(block_nr << fs_info->blocksize_bits); } -static void scrub_verify_one_metadata(struct scrub_stripe *stripe, int sector_nr) +static void scrub_verify_one_metadata(struct scrub_stripe *stripe, int block_nr) { struct btrfs_fs_info *fs_info = stripe->bg->fs_info; - const u32 sectors_per_tree = fs_info->nodesize >> fs_info->sectorsize_bits; - const u64 logical = stripe->logical + (sector_nr << fs_info->sectorsize_bits); - const struct page *first_page = scrub_stripe_get_page(stripe, sector_nr); - const unsigned int first_off = scrub_stripe_get_page_offset(stripe, sector_nr); + const u32 blocks_per_tree = fs_info->nodesize >> fs_info->blocksize_bits; + const u64 logical = stripe->logical + (block_nr << fs_info->blocksize_bits); + const struct page *first_page = scrub_stripe_get_page(stripe, block_nr); + const unsigned int first_off = scrub_stripe_get_page_offset(stripe, block_nr); SHASH_DESC_ON_STACK(shash, fs_info->csum_shash); u8 on_disk_csum[BTRFS_CSUM_SIZE]; u8 calculated_csum[BTRFS_CSUM_SIZE]; @@ -616,8 +616,8 @@ static void scrub_verify_one_metadata(struct scrub_stripe *stripe, int sector_nr memcpy(on_disk_csum, header->csum, fs_info->csum_size); if (logical != btrfs_stack_header_bytenr(header)) { - bitmap_set(&stripe->csum_error_bitmap, sector_nr, sectors_per_tree); - bitmap_set(&stripe->error_bitmap, sector_nr, sectors_per_tree); + bitmap_set(&stripe->csum_error_bitmap, block_nr, blocks_per_tree); + bitmap_set(&stripe->error_bitmap, block_nr, blocks_per_tree); btrfs_warn_rl(fs_info, "tree block %llu 
mirror %u has bad bytenr, has %llu want %llu", logical, stripe->mirror_num, @@ -626,8 +626,8 @@ static void scrub_verify_one_metadata(struct scrub_stripe *stripe, int sector_nr } if (memcmp(header->fsid, fs_info->fs_devices->metadata_uuid, BTRFS_FSID_SIZE) != 0) { - bitmap_set(&stripe->meta_error_bitmap, sector_nr, sectors_per_tree); - bitmap_set(&stripe->error_bitmap, sector_nr, sectors_per_tree); + bitmap_set(&stripe->meta_error_bitmap, block_nr, blocks_per_tree); + bitmap_set(&stripe->error_bitmap, block_nr, blocks_per_tree); btrfs_warn_rl(fs_info, "tree block %llu mirror %u has bad fsid, has %pU want %pU", logical, stripe->mirror_num, @@ -636,8 +636,8 @@ static void scrub_verify_one_metadata(struct scrub_stripe *stripe, int sector_nr } if (memcmp(header->chunk_tree_uuid, fs_info->chunk_tree_uuid, BTRFS_UUID_SIZE) != 0) { - bitmap_set(&stripe->meta_error_bitmap, sector_nr, sectors_per_tree); - bitmap_set(&stripe->error_bitmap, sector_nr, sectors_per_tree); + bitmap_set(&stripe->meta_error_bitmap, block_nr, blocks_per_tree); + bitmap_set(&stripe->error_bitmap, block_nr, blocks_per_tree); btrfs_warn_rl(fs_info, "tree block %llu mirror %u has bad chunk tree uuid, has %pU want %pU", logical, stripe->mirror_num, @@ -649,20 +649,20 @@ static void scrub_verify_one_metadata(struct scrub_stripe *stripe, int sector_nr shash->tfm = fs_info->csum_shash; crypto_shash_init(shash); crypto_shash_update(shash, page_address(first_page) + first_off + - BTRFS_CSUM_SIZE, fs_info->sectorsize - BTRFS_CSUM_SIZE); + BTRFS_CSUM_SIZE, fs_info->blocksize - BTRFS_CSUM_SIZE); - for (int i = sector_nr + 1; i < sector_nr + sectors_per_tree; i++) { + for (int i = block_nr + 1; i < block_nr + blocks_per_tree; i++) { struct page *page = scrub_stripe_get_page(stripe, i); unsigned int page_off = scrub_stripe_get_page_offset(stripe, i); crypto_shash_update(shash, page_address(page) + page_off, - fs_info->sectorsize); + fs_info->blocksize); } crypto_shash_final(shash, calculated_csum); if 
(memcmp(calculated_csum, on_disk_csum, fs_info->csum_size) != 0) { - bitmap_set(&stripe->meta_error_bitmap, sector_nr, sectors_per_tree); - bitmap_set(&stripe->error_bitmap, sector_nr, sectors_per_tree); + bitmap_set(&stripe->meta_error_bitmap, block_nr, blocks_per_tree); + bitmap_set(&stripe->error_bitmap, block_nr, blocks_per_tree); btrfs_warn_rl(fs_info, "tree block %llu mirror %u has bad csum, has " CSUM_FMT " want " CSUM_FMT, logical, stripe->mirror_num, @@ -670,44 +670,44 @@ static void scrub_verify_one_metadata(struct scrub_stripe *stripe, int sector_nr CSUM_FMT_VALUE(fs_info->csum_size, calculated_csum)); return; } - if (stripe->sectors[sector_nr].generation != + if (stripe->blocks[block_nr].generation != btrfs_stack_header_generation(header)) { - bitmap_set(&stripe->meta_error_bitmap, sector_nr, sectors_per_tree); - bitmap_set(&stripe->error_bitmap, sector_nr, sectors_per_tree); + bitmap_set(&stripe->meta_error_bitmap, block_nr, blocks_per_tree); + bitmap_set(&stripe->error_bitmap, block_nr, blocks_per_tree); btrfs_warn_rl(fs_info, "tree block %llu mirror %u has bad generation, has %llu want %llu", logical, stripe->mirror_num, btrfs_stack_header_generation(header), - stripe->sectors[sector_nr].generation); + stripe->blocks[block_nr].generation); return; } - bitmap_clear(&stripe->error_bitmap, sector_nr, sectors_per_tree); - bitmap_clear(&stripe->csum_error_bitmap, sector_nr, sectors_per_tree); - bitmap_clear(&stripe->meta_error_bitmap, sector_nr, sectors_per_tree); + bitmap_clear(&stripe->error_bitmap, block_nr, blocks_per_tree); + bitmap_clear(&stripe->csum_error_bitmap, block_nr, blocks_per_tree); + bitmap_clear(&stripe->meta_error_bitmap, block_nr, blocks_per_tree); } -static void scrub_verify_one_sector(struct scrub_stripe *stripe, int sector_nr) +static void scrub_verify_one_block(struct scrub_stripe *stripe, int block_nr) { struct btrfs_fs_info *fs_info = stripe->bg->fs_info; - struct scrub_sector_verification *sector = &stripe->sectors[sector_nr]; - 
const u32 sectors_per_tree = fs_info->nodesize >> fs_info->sectorsize_bits; - struct page *page = scrub_stripe_get_page(stripe, sector_nr); - unsigned int pgoff = scrub_stripe_get_page_offset(stripe, sector_nr); + struct scrub_block_verification *block = &stripe->blocks[block_nr]; + const u32 blocks_per_tree = fs_info->nodesize >> fs_info->blocksize_bits; + struct page *page = scrub_stripe_get_page(stripe, block_nr); + unsigned int pgoff = scrub_stripe_get_page_offset(stripe, block_nr); u8 csum_buf[BTRFS_CSUM_SIZE]; int ret; - ASSERT(sector_nr >= 0 && sector_nr < stripe->nr_sectors); + ASSERT(block_nr >= 0 && block_nr < stripe->nr_blocks); - /* Sector not utilized, skip it. */ - if (!test_bit(sector_nr, &stripe->extent_sector_bitmap)) + /* Block not utilized, skip it. */ + if (!test_bit(block_nr, &stripe->extent_block_bitmap)) return; /* IO error, no need to check. */ - if (test_bit(sector_nr, &stripe->io_error_bitmap)) + if (test_bit(block_nr, &stripe->io_error_bitmap)) return; /* Metadata, verify the full tree block. */ - if (sector->is_metadata) { + if (block->is_metadata) { /* * Check if the tree block crosses the stripe boundary. If * crossed the boundary, we cannot verify it but only give a @@ -716,15 +716,15 @@ static void scrub_verify_one_sector(struct scrub_stripe *stripe, int sector_nr) * This can only happen on a very old filesystem where chunks * are not ensured to be stripe aligned. 
*/ - if (unlikely(sector_nr + sectors_per_tree > stripe->nr_sectors)) { + if (unlikely(block_nr + blocks_per_tree > stripe->nr_blocks)) { btrfs_warn_rl(fs_info, "tree block at %llu crosses stripe boundary %llu", stripe->logical + - (sector_nr << fs_info->sectorsize_bits), + (block_nr << fs_info->blocksize_bits), stripe->logical); return; } - scrub_verify_one_metadata(stripe, sector_nr); + scrub_verify_one_metadata(stripe, block_nr); return; } @@ -732,52 +732,52 @@ static void scrub_verify_one_sector(struct scrub_stripe *stripe, int sector_nr) * Data is easier, we just verify the data csum (if we have it). For * cases without csum, we have no other choice but to trust it. */ - if (!sector->csum) { - clear_bit(sector_nr, &stripe->error_bitmap); + if (!block->csum) { + clear_bit(block_nr, &stripe->error_bitmap); return; } - ret = btrfs_check_sector_csum(fs_info, page, pgoff, csum_buf, sector->csum); + ret = btrfs_check_sector_csum(fs_info, page, pgoff, csum_buf, block->csum); if (ret < 0) { - set_bit(sector_nr, &stripe->csum_error_bitmap); - set_bit(sector_nr, &stripe->error_bitmap); + set_bit(block_nr, &stripe->csum_error_bitmap); + set_bit(block_nr, &stripe->error_bitmap); } else { - clear_bit(sector_nr, &stripe->csum_error_bitmap); - clear_bit(sector_nr, &stripe->error_bitmap); + clear_bit(block_nr, &stripe->csum_error_bitmap); + clear_bit(block_nr, &stripe->error_bitmap); } } -/* Verify specified sectors of a stripe. */ +/* Verify specified blocks of a stripe. 
*/ static void scrub_verify_one_stripe(struct scrub_stripe *stripe, unsigned long bitmap) { struct btrfs_fs_info *fs_info = stripe->bg->fs_info; - const u32 sectors_per_tree = fs_info->nodesize >> fs_info->sectorsize_bits; - int sector_nr; + const u32 blocks_per_tree = fs_info->nodesize >> fs_info->blocksize_bits; + int block_nr; - for_each_set_bit(sector_nr, &bitmap, stripe->nr_sectors) { - scrub_verify_one_sector(stripe, sector_nr); - if (stripe->sectors[sector_nr].is_metadata) - sector_nr += sectors_per_tree - 1; + for_each_set_bit(block_nr, &bitmap, stripe->nr_blocks) { + scrub_verify_one_block(stripe, block_nr); + if (stripe->blocks[block_nr].is_metadata) + block_nr += blocks_per_tree - 1; } } -static int calc_sector_number(struct scrub_stripe *stripe, struct bio_vec *first_bvec) +static int calc_block_number(struct scrub_stripe *stripe, struct bio_vec *first_bvec) { int i; - for (i = 0; i < stripe->nr_sectors; i++) { + for (i = 0; i < stripe->nr_blocks; i++) { if (scrub_stripe_get_page(stripe, i) == first_bvec->bv_page && scrub_stripe_get_page_offset(stripe, i) == first_bvec->bv_offset) break; } - ASSERT(i < stripe->nr_sectors); + ASSERT(i < stripe->nr_blocks); return i; } /* * Repair read is different to the regular read: * - * - Only reads the failed sectors + * - Only reads the failed blocks * - May have extra blocksize limits */ static void scrub_repair_read_endio(struct btrfs_bio *bbio) @@ -785,23 +785,23 @@ static void scrub_repair_read_endio(struct btrfs_bio *bbio) struct scrub_stripe *stripe = bbio->private; struct btrfs_fs_info *fs_info = stripe->bg->fs_info; struct bio_vec *bvec; - int sector_nr = calc_sector_number(stripe, bio_first_bvec_all(&bbio->bio)); + int block_nr = calc_block_number(stripe, bio_first_bvec_all(&bbio->bio)); u32 bio_size = 0; int i; - ASSERT(sector_nr < stripe->nr_sectors); + ASSERT(block_nr < stripe->nr_blocks); bio_for_each_bvec_all(bvec, &bbio->bio, i) bio_size += bvec->bv_len; if (bbio->bio.bi_status) { - 
bitmap_set(&stripe->io_error_bitmap, sector_nr, - bio_size >> fs_info->sectorsize_bits); - bitmap_set(&stripe->error_bitmap, sector_nr, - bio_size >> fs_info->sectorsize_bits); + bitmap_set(&stripe->io_error_bitmap, block_nr, + bio_size >> fs_info->blocksize_bits); + bitmap_set(&stripe->error_bitmap, block_nr, + bio_size >> fs_info->blocksize_bits); } else { - bitmap_clear(&stripe->io_error_bitmap, sector_nr, - bio_size >> fs_info->sectorsize_bits); + bitmap_clear(&stripe->io_error_bitmap, block_nr, + bio_size >> fs_info->blocksize_bits); } bio_put(&bbio->bio); if (atomic_dec_and_test(&stripe->pending_io)) @@ -825,7 +825,7 @@ static void scrub_stripe_submit_repair_read(struct scrub_stripe *stripe, ASSERT(stripe->mirror_num >= 1); ASSERT(atomic_read(&stripe->pending_io) == 0); - for_each_set_bit(i, &old_error_bitmap, stripe->nr_sectors) { + for_each_set_bit(i, &old_error_bitmap, stripe->nr_blocks) { struct page *page; int pgoff; int ret; @@ -833,7 +833,7 @@ static void scrub_stripe_submit_repair_read(struct scrub_stripe *stripe, page = scrub_stripe_get_page(stripe, i); pgoff = scrub_stripe_get_page_offset(stripe, i); - /* The current sector cannot be merged, submit the bio. */ + /* The current block cannot be merged, submit the bio. 
*/ if (bbio && ((i > 0 && !test_bit(i - 1, &stripe->error_bitmap)) || bbio->bio.bi_iter.bi_size >= blocksize)) { ASSERT(bbio->bio.bi_iter.bi_size); @@ -845,14 +845,14 @@ static void scrub_stripe_submit_repair_read(struct scrub_stripe *stripe, } if (!bbio) { - bbio = btrfs_bio_alloc(stripe->nr_sectors, REQ_OP_READ, + bbio = btrfs_bio_alloc(stripe->nr_blocks, REQ_OP_READ, fs_info, scrub_repair_read_endio, stripe); bbio->bio.bi_iter.bi_sector = (stripe->logical + - (i << fs_info->sectorsize_bits)) >> SECTOR_SHIFT; + (i << fs_info->blocksize_bits)) >> SECTOR_SHIFT; } - ret = bio_add_page(&bbio->bio, page, fs_info->sectorsize, pgoff); - ASSERT(ret == fs_info->sectorsize); + ret = bio_add_page(&bbio->bio, page, fs_info->blocksize, pgoff); + ASSERT(ret == fs_info->blocksize); } if (bbio) { ASSERT(bbio->bio.bi_iter.bi_size); @@ -871,11 +871,11 @@ static void scrub_stripe_report_errors(struct scrub_ctx *sctx, struct btrfs_fs_info *fs_info = sctx->fs_info; struct btrfs_device *dev = NULL; u64 physical = 0; - int nr_data_sectors = 0; - int nr_meta_sectors = 0; - int nr_nodatacsum_sectors = 0; - int nr_repaired_sectors = 0; - int sector_nr; + int nr_data_blocks = 0; + int nr_meta_blocks = 0; + int nr_nodatacsum_blocks = 0; + int nr_repaired_blocks = 0; + int block_nr; if (test_bit(SCRUB_STRIPE_FLAG_NO_REPORT, &stripe->state)) return; @@ -886,8 +886,8 @@ static void scrub_stripe_report_errors(struct scrub_ctx *sctx, * Although our scrub_stripe infrastructure is mostly based on btrfs_submit_bio() * thus no need for dev/physical, error reporting still needs dev and physical. 
*/ - if (!bitmap_empty(&stripe->init_error_bitmap, stripe->nr_sectors)) { - u64 mapped_len = fs_info->sectorsize; + if (!bitmap_empty(&stripe->init_error_bitmap, stripe->nr_blocks)) { + u64 mapped_len = fs_info->blocksize; struct btrfs_io_context *bioc = NULL; int stripe_index = stripe->mirror_num - 1; int ret; @@ -909,29 +909,29 @@ static void scrub_stripe_report_errors(struct scrub_ctx *sctx, } skip: - for_each_set_bit(sector_nr, &stripe->extent_sector_bitmap, stripe->nr_sectors) { + for_each_set_bit(block_nr, &stripe->extent_block_bitmap, stripe->nr_blocks) { bool repaired = false; - if (stripe->sectors[sector_nr].is_metadata) { - nr_meta_sectors++; + if (stripe->blocks[block_nr].is_metadata) { + nr_meta_blocks++; } else { - nr_data_sectors++; - if (!stripe->sectors[sector_nr].csum) - nr_nodatacsum_sectors++; + nr_data_blocks++; + if (!stripe->blocks[block_nr].csum) + nr_nodatacsum_blocks++; } - if (test_bit(sector_nr, &stripe->init_error_bitmap) && - !test_bit(sector_nr, &stripe->error_bitmap)) { - nr_repaired_sectors++; + if (test_bit(block_nr, &stripe->init_error_bitmap) && + !test_bit(block_nr, &stripe->error_bitmap)) { + nr_repaired_blocks++; repaired = true; } - /* Good sector from the beginning, nothing need to be done. */ - if (!test_bit(sector_nr, &stripe->init_error_bitmap)) + /* Good block from the beginning, nothing need to be done. */ + if (!test_bit(block_nr, &stripe->init_error_bitmap)) continue; /* - * Report error for the corrupted sectors. If repaired, just + * Report error for the corrupted blocks. If repaired, just * output the message of repaired message. 
*/ if (repaired) { @@ -960,15 +960,15 @@ static void scrub_stripe_report_errors(struct scrub_ctx *sctx, stripe->logical, stripe->mirror_num); } - if (test_bit(sector_nr, &stripe->io_error_bitmap)) + if (test_bit(block_nr, &stripe->io_error_bitmap)) if (__ratelimit(&rs) && dev) scrub_print_common_warning("i/o error", dev, false, stripe->logical, physical); - if (test_bit(sector_nr, &stripe->csum_error_bitmap)) + if (test_bit(block_nr, &stripe->csum_error_bitmap)) if (__ratelimit(&rs) && dev) scrub_print_common_warning("checksum error", dev, false, stripe->logical, physical); - if (test_bit(sector_nr, &stripe->meta_error_bitmap)) + if (test_bit(block_nr, &stripe->meta_error_bitmap)) if (__ratelimit(&rs) && dev) scrub_print_common_warning("header error", dev, false, stripe->logical, physical); @@ -977,30 +977,30 @@ static void scrub_stripe_report_errors(struct scrub_ctx *sctx, spin_lock(&sctx->stat_lock); sctx->stat.data_extents_scrubbed += stripe->nr_data_extents; sctx->stat.tree_extents_scrubbed += stripe->nr_meta_extents; - sctx->stat.data_bytes_scrubbed += nr_data_sectors << fs_info->sectorsize_bits; - sctx->stat.tree_bytes_scrubbed += nr_meta_sectors << fs_info->sectorsize_bits; - sctx->stat.no_csum += nr_nodatacsum_sectors; + sctx->stat.data_bytes_scrubbed += nr_data_blocks << fs_info->blocksize_bits; + sctx->stat.tree_bytes_scrubbed += nr_meta_blocks << fs_info->blocksize_bits; + sctx->stat.no_csum += nr_nodatacsum_blocks; sctx->stat.read_errors += stripe->init_nr_io_errors; sctx->stat.csum_errors += stripe->init_nr_csum_errors; sctx->stat.verify_errors += stripe->init_nr_meta_errors; sctx->stat.uncorrectable_errors += - bitmap_weight(&stripe->error_bitmap, stripe->nr_sectors); - sctx->stat.corrected_errors += nr_repaired_sectors; + bitmap_weight(&stripe->error_bitmap, stripe->nr_blocks); + sctx->stat.corrected_errors += nr_repaired_blocks; spin_unlock(&sctx->stat_lock); } -static void scrub_write_sectors(struct scrub_ctx *sctx, struct scrub_stripe *stripe, - 
unsigned long write_bitmap, bool dev_replace); +static void scrub_write_blocks(struct scrub_ctx *sctx, struct scrub_stripe *stripe, + unsigned long write_bitmap, bool dev_replace); /* * The main entrance for all read related scrub work, including: * * - Wait for the initial read to finish - * - Verify and locate any bad sectors + * - Verify and locate any bad blocks * - Go through the remaining mirrors and try to read as large blocksize as * possible - * - Go through all mirrors (including the failed mirror) sector-by-sector - * - Submit writeback for repaired sectors + * - Go through all mirrors (including the failed mirror) block-by-block + * - Submit writeback for repaired blocks * * Writeback for dev-replace does not happen here, it needs extra * synchronization for zoned devices. @@ -1019,17 +1019,17 @@ static void scrub_stripe_read_repair_worker(struct work_struct *work) ASSERT(stripe->mirror_num > 0); wait_scrub_stripe_io(stripe); - scrub_verify_one_stripe(stripe, stripe->extent_sector_bitmap); + scrub_verify_one_stripe(stripe, stripe->extent_block_bitmap); /* Save the initial failed bitmap for later repair and report usage. 
*/ stripe->init_error_bitmap = stripe->error_bitmap; stripe->init_nr_io_errors = bitmap_weight(&stripe->io_error_bitmap, - stripe->nr_sectors); + stripe->nr_blocks); stripe->init_nr_csum_errors = bitmap_weight(&stripe->csum_error_bitmap, - stripe->nr_sectors); + stripe->nr_blocks); stripe->init_nr_meta_errors = bitmap_weight(&stripe->meta_error_bitmap, - stripe->nr_sectors); + stripe->nr_blocks); - if (bitmap_empty(&stripe->init_error_bitmap, stripe->nr_sectors)) + if (bitmap_empty(&stripe->init_error_bitmap, stripe->nr_blocks)) goto out; /* @@ -1047,17 +1047,17 @@ static void scrub_stripe_read_repair_worker(struct work_struct *work) BTRFS_STRIPE_LEN, false); wait_scrub_stripe_io(stripe); scrub_verify_one_stripe(stripe, old_error_bitmap); - if (bitmap_empty(&stripe->error_bitmap, stripe->nr_sectors)) + if (bitmap_empty(&stripe->error_bitmap, stripe->nr_blocks)) goto out; } /* * Last safety net, try re-checking all mirrors, including the failed - * one, sector-by-sector. + * one, block-by-block. * - * As if one sector failed the drive's internal csum, the whole read - * containing the offending sector would be marked as error. - * Thus here we do sector-by-sector read. + * As if one block failed the drive's internal csum, the whole read + * containing the offending block would be marked as error. + * Thus here we do block-by-block read. * * This can be slow, thus we only try it as the last resort. */ @@ -1068,24 +1068,24 @@ static void scrub_stripe_read_repair_worker(struct work_struct *work) const unsigned long old_error_bitmap = stripe->error_bitmap; scrub_stripe_submit_repair_read(stripe, mirror, - fs_info->sectorsize, true); + fs_info->blocksize, true); wait_scrub_stripe_io(stripe); scrub_verify_one_stripe(stripe, old_error_bitmap); - if (bitmap_empty(&stripe->error_bitmap, stripe->nr_sectors)) + if (bitmap_empty(&stripe->error_bitmap, stripe->nr_blocks)) goto out; } out: /* - * Submit the repaired sectors. 
For zoned case, we cannot do repair + * Submit the repaired blocks. For zoned case, we cannot do repair * in-place, but queue the bg to be relocated. */ bitmap_andnot(&repaired, &stripe->init_error_bitmap, &stripe->error_bitmap, - stripe->nr_sectors); - if (!sctx->readonly && !bitmap_empty(&repaired, stripe->nr_sectors)) { + stripe->nr_blocks); + if (!sctx->readonly && !bitmap_empty(&repaired, stripe->nr_blocks)) { if (btrfs_is_zoned(fs_info)) { btrfs_repair_one_zone(fs_info, sctx->stripes[0].bg->start); } else { - scrub_write_sectors(sctx, stripe, repaired, false); + scrub_write_blocks(sctx, stripe, repaired, false); wait_scrub_stripe_io(stripe); } } @@ -1099,21 +1099,21 @@ static void scrub_read_endio(struct btrfs_bio *bbio) { struct scrub_stripe *stripe = bbio->private; struct bio_vec *bvec; - int sector_nr = calc_sector_number(stripe, bio_first_bvec_all(&bbio->bio)); - int num_sectors; + int block_nr = calc_block_number(stripe, bio_first_bvec_all(&bbio->bio)); + int num_blocks; u32 bio_size = 0; int i; - ASSERT(sector_nr < stripe->nr_sectors); + ASSERT(block_nr < stripe->nr_blocks); bio_for_each_bvec_all(bvec, &bbio->bio, i) bio_size += bvec->bv_len; - num_sectors = bio_size >> stripe->bg->fs_info->sectorsize_bits; + num_blocks = bio_size >> stripe->bg->fs_info->blocksize_bits; if (bbio->bio.bi_status) { - bitmap_set(&stripe->io_error_bitmap, sector_nr, num_sectors); - bitmap_set(&stripe->error_bitmap, sector_nr, num_sectors); + bitmap_set(&stripe->io_error_bitmap, block_nr, num_blocks); + bitmap_set(&stripe->error_bitmap, block_nr, num_blocks); } else { - bitmap_clear(&stripe->io_error_bitmap, sector_nr, num_sectors); + bitmap_clear(&stripe->io_error_bitmap, block_nr, num_blocks); } bio_put(&bbio->bio); if (atomic_dec_and_test(&stripe->pending_io)) { @@ -1128,7 +1128,7 @@ static void scrub_write_endio(struct btrfs_bio *bbio) struct scrub_stripe *stripe = bbio->private; struct btrfs_fs_info *fs_info = stripe->bg->fs_info; struct bio_vec *bvec; - int sector_nr = 
calc_sector_number(stripe, bio_first_bvec_all(&bbio->bio)); + int block_nr = calc_block_number(stripe, bio_first_bvec_all(&bbio->bio)); u32 bio_size = 0; int i; @@ -1139,8 +1139,8 @@ static void scrub_write_endio(struct btrfs_bio *bbio) unsigned long flags; spin_lock_irqsave(&stripe->write_error_lock, flags); - bitmap_set(&stripe->write_error_bitmap, sector_nr, - bio_size >> fs_info->sectorsize_bits); + bitmap_set(&stripe->write_error_bitmap, block_nr, + bio_size >> fs_info->blocksize_bits); spin_unlock_irqrestore(&stripe->write_error_lock, flags); } bio_put(&bbio->bio); @@ -1173,13 +1173,13 @@ static void scrub_submit_write_bio(struct scrub_ctx *sctx, * And also need to update the write pointer if write finished * successfully. */ - if (!test_bit(bio_off >> fs_info->sectorsize_bits, + if (!test_bit(bio_off >> fs_info->blocksize_bits, &stripe->write_error_bitmap)) sctx->write_pointer += bio_len; } /* - * Submit the write bio(s) for the sectors specified by @write_bitmap. + * Submit the write bio(s) for the blocks specified by @write_bitmap. 
* * Here we utilize btrfs_submit_repair_write(), which has some extra benefits: * @@ -1191,35 +1191,35 @@ static void scrub_submit_write_bio(struct scrub_ctx *sctx, * * - Handle dev-replace and read-repair writeback differently */ -static void scrub_write_sectors(struct scrub_ctx *sctx, struct scrub_stripe *stripe, +static void scrub_write_blocks(struct scrub_ctx *sctx, struct scrub_stripe *stripe, unsigned long write_bitmap, bool dev_replace) { struct btrfs_fs_info *fs_info = stripe->bg->fs_info; struct btrfs_bio *bbio = NULL; - int sector_nr; + int block_nr; - for_each_set_bit(sector_nr, &write_bitmap, stripe->nr_sectors) { - struct page *page = scrub_stripe_get_page(stripe, sector_nr); - unsigned int pgoff = scrub_stripe_get_page_offset(stripe, sector_nr); + for_each_set_bit(block_nr, &write_bitmap, stripe->nr_blocks) { + struct page *page = scrub_stripe_get_page(stripe, block_nr); + unsigned int pgoff = scrub_stripe_get_page_offset(stripe, block_nr); int ret; - /* We should only writeback sectors covered by an extent. */ - ASSERT(test_bit(sector_nr, &stripe->extent_sector_bitmap)); + /* We should only writeback blocks covered by an extent. */ + ASSERT(test_bit(block_nr, &stripe->extent_block_bitmap)); - /* Cannot merge with previous sector, submit the current one. */ - if (bbio && sector_nr && !test_bit(sector_nr - 1, &write_bitmap)) { + /* Cannot merge with previous block, submit the current one. 
*/ + if (bbio && block_nr && !test_bit(block_nr - 1, &write_bitmap)) { scrub_submit_write_bio(sctx, stripe, bbio, dev_replace); bbio = NULL; } if (!bbio) { - bbio = btrfs_bio_alloc(stripe->nr_sectors, REQ_OP_WRITE, + bbio = btrfs_bio_alloc(stripe->nr_blocks, REQ_OP_WRITE, fs_info, scrub_write_endio, stripe); bbio->bio.bi_iter.bi_sector = (stripe->logical + - (sector_nr << fs_info->sectorsize_bits)) >> + (block_nr << fs_info->blocksize_bits)) >> SECTOR_SHIFT; } - ret = bio_add_page(&bbio->bio, page, fs_info->sectorsize, pgoff); - ASSERT(ret == fs_info->sectorsize); + ret = bio_add_page(&bbio->bio, page, fs_info->blocksize, pgoff); + ASSERT(ret == fs_info->blocksize); } if (bbio) scrub_submit_write_bio(sctx, stripe, bbio, dev_replace); @@ -1487,23 +1487,23 @@ static void fill_one_extent_info(struct btrfs_fs_info *fs_info, for (u64 cur_logical = max(stripe->logical, extent_start); cur_logical < min(stripe->logical + BTRFS_STRIPE_LEN, extent_start + extent_len); - cur_logical += fs_info->sectorsize) { - const int nr_sector = (cur_logical - stripe->logical) >> - fs_info->sectorsize_bits; - struct scrub_sector_verification *sector = - &stripe->sectors[nr_sector]; + cur_logical += fs_info->blocksize) { + const int nr_block = (cur_logical - stripe->logical) >> + fs_info->blocksize_bits; + struct scrub_block_verification *block = + &stripe->blocks[nr_block]; - set_bit(nr_sector, &stripe->extent_sector_bitmap); + set_bit(nr_block, &stripe->extent_block_bitmap); if (extent_flags & BTRFS_EXTENT_FLAG_TREE_BLOCK) { - sector->is_metadata = true; - sector->generation = extent_gen; + block->is_metadata = true; + block->generation = extent_gen; } } } static void scrub_stripe_reset_bitmaps(struct scrub_stripe *stripe) { - stripe->extent_sector_bitmap = 0; + stripe->extent_block_bitmap = 0; stripe->init_error_bitmap = 0; stripe->init_nr_io_errors = 0; stripe->init_nr_csum_errors = 0; @@ -1541,8 +1541,8 @@ static int scrub_find_fill_first_stripe(struct btrfs_block_group *bg, u64 
extent_gen; int ret; - memset(stripe->sectors, 0, sizeof(struct scrub_sector_verification) * - stripe->nr_sectors); + memset(stripe->blocks, 0, sizeof(struct scrub_block_verification) * + stripe->nr_blocks); scrub_stripe_reset_bitmaps(stripe); /* The range must be inside the bg. */ @@ -1575,12 +1575,12 @@ static int scrub_find_fill_first_stripe(struct btrfs_block_group *bg, stripe->mirror_num = mirror_num; stripe_end = stripe->logical + BTRFS_STRIPE_LEN - 1; - /* Fill the first extent info into stripe->sectors[] array. */ + /* Fill the first extent info into stripe->blocks[] array. */ fill_one_extent_info(fs_info, stripe, extent_start, extent_len, extent_flags, extent_gen); cur_logical = extent_start + extent_len; - /* Fill the extent info for the remaining sectors. */ + /* Fill the extent info for the remaining blocks. */ while (cur_logical <= stripe_end) { ret = find_first_extent_item(extent_root, extent_path, cur_logical, stripe_end - cur_logical + 1); @@ -1603,7 +1603,7 @@ static int scrub_find_fill_first_stripe(struct btrfs_block_group *bg, /* Now fill the data csum. */ if (bg->flags & BTRFS_BLOCK_GROUP_DATA) { - int sector_nr; + int block_nr; unsigned long csum_bitmap = 0; /* Csum space should have already been allocated. */ @@ -1611,9 +1611,9 @@ static int scrub_find_fill_first_stripe(struct btrfs_block_group *bg, /* * Our csum bitmap should be large enough, as BTRFS_STRIPE_LEN - * should contain at most 16 sectors. + * should contain at most 16 blocks. 
 		 */
-		ASSERT(BITS_PER_LONG >= BTRFS_STRIPE_LEN >> fs_info->sectorsize_bits);
+		ASSERT(BITS_PER_LONG >= (BTRFS_STRIPE_LEN >> fs_info->blocksize_bits));

 		ret = btrfs_lookup_csums_bitmap(csum_root, csum_path,
 						stripe->logical, stripe_end,
@@ -1623,9 +1623,9 @@ static int scrub_find_fill_first_stripe(struct btrfs_block_group *bg,
 		if (ret > 0)
 			ret = 0;

-		for_each_set_bit(sector_nr, &csum_bitmap, stripe->nr_sectors) {
-			stripe->sectors[sector_nr].csum = stripe->csums +
-				sector_nr * fs_info->csum_size;
+		for_each_set_bit(block_nr, &csum_bitmap, stripe->nr_blocks) {
+			stripe->blocks[block_nr].csum = stripe->csums +
+				block_nr * fs_info->csum_size;
 		}
 	}
 	set_bit(SCRUB_STRIPE_FLAG_INITIALIZED, &stripe->state);
@@ -1641,10 +1641,10 @@ static void scrub_reset_stripe(struct scrub_stripe *stripe)
 	stripe->nr_data_extents = 0;
 	stripe->state = 0;

-	for (int i = 0; i < stripe->nr_sectors; i++) {
-		stripe->sectors[i].is_metadata = false;
-		stripe->sectors[i].csum = NULL;
-		stripe->sectors[i].generation = 0;
+	for (int i = 0; i < stripe->nr_blocks; i++) {
+		stripe->blocks[i].is_metadata = false;
+		stripe->blocks[i].csum = NULL;
+		stripe->blocks[i].generation = 0;
 	}
 }

@@ -1656,29 +1656,29 @@ static u32 stripe_length(const struct scrub_stripe *stripe)
 		     stripe->bg->start + stripe->bg->length - stripe->logical);
 }

-static void scrub_submit_extent_sector_read(struct scrub_stripe *stripe)
+static void scrub_submit_extent_block_read(struct scrub_stripe *stripe)
 {
 	struct btrfs_fs_info *fs_info = stripe->bg->fs_info;
 	struct btrfs_bio *bbio = NULL;
-	unsigned int nr_sectors = stripe_length(stripe) >> fs_info->sectorsize_bits;
+	unsigned int nr_blocks = stripe_length(stripe) >> fs_info->blocksize_bits;
 	u64 stripe_len = BTRFS_STRIPE_LEN;
 	int mirror = stripe->mirror_num;
 	int i;

 	atomic_inc(&stripe->pending_io);

-	for_each_set_bit(i, &stripe->extent_sector_bitmap, stripe->nr_sectors) {
+	for_each_set_bit(i, &stripe->extent_block_bitmap, stripe->nr_blocks) {
 		struct page *page = scrub_stripe_get_page(stripe, i);
 		unsigned int pgoff = scrub_stripe_get_page_offset(stripe, i);

 		/* We're beyond the chunk boundary, no need to read anymore. */
-		if (i >= nr_sectors)
+		if (i >= nr_blocks)
 			break;

-		/* The current sector cannot be merged, submit the bio. */
+		/* The current block cannot be merged, submit the bio. */
 		if (bbio &&
 		    ((i > 0 &&
-		      !test_bit(i - 1, &stripe->extent_sector_bitmap)) ||
+		      !test_bit(i - 1, &stripe->extent_block_bitmap)) ||
 		     bbio->bio.bi_iter.bi_size >= stripe_len)) {
 			ASSERT(bbio->bio.bi_iter.bi_size);
 			atomic_inc(&stripe->pending_io);
@@ -1690,11 +1690,11 @@ static void scrub_submit_extent_sector_read(struct scrub_stripe *stripe)
 			struct btrfs_io_stripe io_stripe = {};
 			struct btrfs_io_context *bioc = NULL;
 			const u64 logical = stripe->logical +
-					    (i << fs_info->sectorsize_bits);
+					    (i << fs_info->blocksize_bits);
 			int err;

 			io_stripe.rst_search_commit_root = true;
-			stripe_len = (nr_sectors - i) << fs_info->sectorsize_bits;
+			stripe_len = (nr_blocks - i) << fs_info->blocksize_bits;
 			/*
 			 * For RST cases, we need to manually split the bbio to
 			 * follow the RST boundary.
@@ -1718,12 +1718,12 @@ static void scrub_submit_extent_sector_read(struct scrub_stripe *stripe) continue; } - bbio = btrfs_bio_alloc(stripe->nr_sectors, REQ_OP_READ, + bbio = btrfs_bio_alloc(stripe->nr_blocks, REQ_OP_READ, fs_info, scrub_read_endio, stripe); bbio->bio.bi_iter.bi_sector = logical >> SECTOR_SHIFT; } - __bio_add_page(&bbio->bio, page, fs_info->sectorsize, pgoff); + __bio_add_page(&bbio->bio, page, fs_info->blocksize, pgoff); } if (bbio) { @@ -1744,7 +1744,7 @@ static void scrub_submit_initial_read(struct scrub_ctx *sctx, { struct btrfs_fs_info *fs_info = sctx->fs_info; struct btrfs_bio *bbio; - unsigned int nr_sectors = stripe_length(stripe) >> fs_info->sectorsize_bits; + unsigned int nr_blocks = stripe_length(stripe) >> fs_info->blocksize_bits; int mirror = stripe->mirror_num; ASSERT(stripe->bg); @@ -1752,7 +1752,7 @@ static void scrub_submit_initial_read(struct scrub_ctx *sctx, ASSERT(test_bit(SCRUB_STRIPE_FLAG_INITIALIZED, &stripe->state)); if (btrfs_need_stripe_tree_update(fs_info, stripe->bg->flags)) { - scrub_submit_extent_sector_read(stripe); + scrub_submit_extent_block_read(stripe); return; } @@ -1761,14 +1761,14 @@ static void scrub_submit_initial_read(struct scrub_ctx *sctx, bbio->bio.bi_iter.bi_sector = stripe->logical >> SECTOR_SHIFT; /* Read the whole range inside the chunk boundary. */ - for (unsigned int cur = 0; cur < nr_sectors; cur++) { + for (unsigned int cur = 0; cur < nr_blocks; cur++) { struct page *page = scrub_stripe_get_page(stripe, cur); unsigned int pgoff = scrub_stripe_get_page_offset(stripe, cur); int ret; - ret = bio_add_page(&bbio->bio, page, fs_info->sectorsize, pgoff); + ret = bio_add_page(&bbio->bio, page, fs_info->blocksize, pgoff); /* We should have allocated enough bio vectors. 
*/ - ASSERT(ret == fs_info->sectorsize); + ASSERT(ret == fs_info->blocksize); } atomic_inc(&stripe->pending_io); @@ -1792,14 +1792,14 @@ static bool stripe_has_metadata_error(struct scrub_stripe *stripe) { int i; - for_each_set_bit(i, &stripe->error_bitmap, stripe->nr_sectors) { - if (stripe->sectors[i].is_metadata) { + for_each_set_bit(i, &stripe->error_bitmap, stripe->nr_blocks) { + if (stripe->blocks[i].is_metadata) { struct btrfs_fs_info *fs_info = stripe->bg->fs_info; btrfs_err(fs_info, - "stripe %llu has unrepaired metadata sector at %llu", + "stripe %llu has unrepaired metadata block at %llu", stripe->logical, - stripe->logical + (i << fs_info->sectorsize_bits)); + stripe->logical + (i << fs_info->blocksize_bits)); return true; } } @@ -1873,9 +1873,9 @@ static int flush_scrub_stripes(struct scrub_ctx *sctx) ASSERT(stripe->dev == fs_info->dev_replace.srcdev); - bitmap_andnot(&good, &stripe->extent_sector_bitmap, - &stripe->error_bitmap, stripe->nr_sectors); - scrub_write_sectors(sctx, stripe, good, true); + bitmap_andnot(&good, &stripe->extent_block_bitmap, + &stripe->error_bitmap, stripe->nr_blocks); + scrub_write_blocks(sctx, stripe, good, true); } } @@ -2008,7 +2008,7 @@ static int scrub_raid56_parity_stripe(struct scrub_ctx *sctx, /* Check if all data stripes are empty. */ for (int i = 0; i < data_stripes; i++) { stripe = &sctx->raid56_data_stripes[i]; - if (!bitmap_empty(&stripe->extent_sector_bitmap, stripe->nr_sectors)) { + if (!bitmap_empty(&stripe->extent_block_bitmap, stripe->nr_blocks)) { all_empty = false; break; } @@ -2048,17 +2048,17 @@ static int scrub_raid56_parity_stripe(struct scrub_ctx *sctx, * As we may hit an empty data stripe while it's missing. 
*/ bitmap_and(&error, &stripe->error_bitmap, - &stripe->extent_sector_bitmap, stripe->nr_sectors); - if (!bitmap_empty(&error, stripe->nr_sectors)) { + &stripe->extent_block_bitmap, stripe->nr_blocks); + if (!bitmap_empty(&error, stripe->nr_blocks)) { btrfs_err(fs_info, -"unrepaired sectors detected, full stripe %llu data stripe %u errors %*pbl", - full_stripe_start, i, stripe->nr_sectors, +"unrepaired blocks detected, full stripe %llu data stripe %u errors %*pbl", + full_stripe_start, i, stripe->nr_blocks, &error); ret = -EIO; goto out; } bitmap_or(&extent_bitmap, &extent_bitmap, - &stripe->extent_sector_bitmap, stripe->nr_sectors); + &stripe->extent_block_bitmap, stripe->nr_blocks); } /* Now we can check and regenerate the P/Q stripe. */ @@ -2076,7 +2076,7 @@ static int scrub_raid56_parity_stripe(struct scrub_ctx *sctx, goto out; } rbio = raid56_parity_alloc_scrub_rbio(bio, bioc, scrub_dev, &extent_bitmap, - BTRFS_STRIPE_LEN >> fs_info->sectorsize_bits); + BTRFS_STRIPE_LEN >> fs_info->blocksize_bits); btrfs_put_bioc(bioc); if (!rbio) { ret = -ENOMEM; @@ -2920,12 +2920,12 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start, ASSERT(fs_info->nodesize <= BTRFS_STRIPE_LEN); /* - * SCRUB_MAX_SECTORS_PER_BLOCK is calculated using the largest possible - * value (max nodesize / min sectorsize), thus nodesize should always + * SCRUB_MAX_BLOCKS_PER_TREE_BLOCK is calculated using the largest possible + * value (max nodesize / min blocksize), thus nodesize should always * be fine. 
*/ ASSERT(fs_info->nodesize <= - SCRUB_MAX_SECTORS_PER_BLOCK << fs_info->sectorsize_bits); + SCRUB_MAX_BLOCKS_PER_TREE_BLOCK << fs_info->blocksize_bits); /* Allocate outside of device_list_mutex */ sctx = scrub_setup_ctx(fs_info, is_dev_replace); @@ -2991,7 +2991,7 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start, /* * In order to avoid deadlock with reclaim when there is a transaction * trying to pause scrub, make sure we use GFP_NOFS for all the - * allocations done at btrfs_scrub_sectors() and scrub_sectors_for_parity() + * allocations done at btrfs_scrub_blocks() and scrub_blocks_for_parity() * invoked by our callees. The pausing request is done when the * transaction commit starts, and it blocks the transaction until scrub * is paused (done at specific points at scrub_stripe() or right above

From patchwork Wed Dec 18 09:41:21 2024
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 13913304
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 05/18] btrfs: migrate extent_io.[ch] to use block size terminology
Date: Wed, 18 Dec 2024 20:11:21 +1030
Straightforward rename from "sector" to "block", except the bio interface. Signed-off-by: Qu Wenruo --- fs/btrfs/extent_io.c | 124 +++++++++++++++++++++---------------------- fs/btrfs/extent_io.h | 16 +++--- 2 files changed, 70 insertions(+), 70 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 9725ff7f274d..26e53c6c077c 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -103,7 +103,7 @@ struct btrfs_bio_ctrl { struct writeback_control *wbc; /* - * The sectors of the page which are going to be submitted by + * The blocks of the page which are going to be submitted by * extent_writepage_io(). * This is to avoid touching ranges covered by compression/inline. */ @@ -457,7 +457,7 @@ static void end_bbio_data_write(struct btrfs_bio *bbio) struct bio *bio = &bbio->bio; int error = blk_status_to_errno(bio->bi_status); struct folio_iter fi; - const u32 sectorsize = fs_info->sectorsize; + const u32 blocksize = fs_info->blocksize; ASSERT(!bio_flagged(bio, BIO_CLONED)); bio_for_each_folio_all(fi, bio) { @@ -468,12 +468,12 @@ static void end_bbio_data_write(struct btrfs_bio *bbio) /* Only order 0 (single page) folios are allowed for data. */ ASSERT(folio_order(folio) == 0); - /* Our read/write should always be sector aligned. */ + /* Our read/write should always be block aligned.
*/ + if (!IS_ALIGNED(fi.offset, blocksize)) btrfs_err(fs_info, "partial page write in btrfs with offset %zu and length %zu", fi.offset, fi.length); - else if (!IS_ALIGNED(fi.length, sectorsize)) + else if (!IS_ALIGNED(fi.length, blocksize)) btrfs_info(fs_info, "incomplete page write with offset %zu and length %zu", fi.offset, fi.length); @@ -515,7 +515,7 @@ static void end_bbio_data_read(struct btrfs_bio *bbio) struct btrfs_fs_info *fs_info = bbio->fs_info; struct bio *bio = &bbio->bio; struct folio_iter fi; - const u32 sectorsize = fs_info->sectorsize; + const u32 blocksize = fs_info->blocksize; ASSERT(!bio_flagged(bio, BIO_CLONED)); bio_for_each_folio_all(fi, &bbio->bio) { @@ -534,17 +534,17 @@ static void end_bbio_data_read(struct btrfs_bio *bbio) bbio->mirror_num); /* - * We always issue full-sector reads, but if some block in a + * We always issue full-block reads, but if some block in a * folio fails to read, blk_update_request() will advance * bv_offset and adjust bv_len to compensate. Print a warning * for unaligned offsets, and an error if they don't add up to - * a full sector. + * a full block. */ - if (!IS_ALIGNED(fi.offset, sectorsize)) + if (!IS_ALIGNED(fi.offset, blocksize)) btrfs_err(fs_info, "partial page read in btrfs with offset %zu and length %zu", fi.offset, fi.length); - else if (!IS_ALIGNED(fi.offset + fi.length, sectorsize)) + else if (!IS_ALIGNED(fi.offset + fi.length, blocksize)) btrfs_info(fs_info, "incomplete page read with offset %zu and length %zu", fi.offset, fi.length); @@ -795,7 +795,7 @@ static void submit_extent_folio(struct btrfs_bio_ctrl *bio_ctrl, /* * len_to_oe_boundary defaults to U32_MAX, which isn't folio or - * sector aligned. alloc_new_bio() then sets it to the end of + * block aligned. alloc_new_bio() then sets it to the end of * our ordered extent for writes into zoned devices. 
* * When len_to_oe_boundary is tracking an ordered extent, we @@ -955,7 +955,7 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached, int ret = 0; size_t pg_offset = 0; size_t iosize; - size_t blocksize = fs_info->sectorsize; + size_t blocksize = fs_info->blocksize; ret = set_folio_extent_mapped(folio); if (ret < 0) { @@ -978,7 +978,7 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached, bool force_bio_submit = false; u64 disk_bytenr; - ASSERT(IS_ALIGNED(cur, fs_info->sectorsize)); + ASSERT(IS_ALIGNED(cur, fs_info->blocksize)); if (cur >= last_byte) { iosize = folio_size(folio) - pg_offset; folio_zero_range(folio, pg_offset, iosize); @@ -1111,8 +1111,8 @@ static void set_delalloc_bitmap(struct folio *folio, unsigned long *delalloc_bit unsigned int nbits; ASSERT(start >= folio_start && start + len <= folio_start + PAGE_SIZE); - start_bit = (start - folio_start) >> fs_info->sectorsize_bits; - nbits = len >> fs_info->sectorsize_bits; + start_bit = (start - folio_start) >> fs_info->blocksize_bits; + nbits = len >> fs_info->blocksize_bits; ASSERT(bitmap_test_range_all_zero(delalloc_bitmap, start_bit, nbits)); bitmap_set(delalloc_bitmap, start_bit, nbits); } @@ -1123,21 +1123,21 @@ static bool find_next_delalloc_bitmap(struct folio *folio, { struct btrfs_fs_info *fs_info = folio_to_fs_info(folio); const u64 folio_start = folio_pos(folio); - const unsigned int bitmap_size = fs_info->sectors_per_page; + const unsigned int bitmap_size = fs_info->blocks_per_page; unsigned int start_bit; unsigned int first_zero; unsigned int first_set; ASSERT(start >= folio_start && start < folio_start + PAGE_SIZE); - start_bit = (start - folio_start) >> fs_info->sectorsize_bits; + start_bit = (start - folio_start) >> fs_info->blocksize_bits; first_set = find_next_bit(delalloc_bitmap, bitmap_size, start_bit); if (first_set >= bitmap_size) return false; - *found_start = folio_start + (first_set << fs_info->sectorsize_bits); + 
*found_start = folio_start + (first_set << fs_info->blocksize_bits); first_zero = find_next_zero_bit(delalloc_bitmap, bitmap_size, first_set); - *found_len = (first_zero - first_set) << fs_info->sectorsize_bits; + *found_len = (first_zero - first_set) << fs_info->blocksize_bits; return true; } @@ -1175,16 +1175,16 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode, /* Save the dirty bitmap as our submission bitmap will be a subset of it. */ if (btrfs_is_subpage(fs_info, inode->vfs_inode.i_mapping)) { - ASSERT(fs_info->sectors_per_page > 1); + ASSERT(fs_info->blocks_per_page > 1); btrfs_get_subpage_dirty_bitmap(fs_info, folio, &bio_ctrl->submit_bitmap); } else { bio_ctrl->submit_bitmap = 1; } - for_each_set_bit(bit, &bio_ctrl->submit_bitmap, fs_info->sectors_per_page) { - u64 start = page_start + (bit << fs_info->sectorsize_bits); + for_each_set_bit(bit, &bio_ctrl->submit_bitmap, fs_info->blocks_per_page) { + u64 start = page_start + (bit << fs_info->blocksize_bits); - btrfs_folio_set_lock(fs_info, folio, start, fs_info->sectorsize); + btrfs_folio_set_lock(fs_info, folio, start, fs_info->blocksize); } /* Lock all (subpage) delalloc ranges inside the folio first. */ @@ -1227,7 +1227,7 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode, if (!found) break; /* - * The subpage range covers the last sector, the delalloc range may + * The subpage range covers the last block, the delalloc range may * end beyond the folio boundary, use the saved delalloc_end * instead. 
*/ @@ -1260,9 +1260,9 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode, */ if (ret > 0) { unsigned int start_bit = (found_start - page_start) >> - fs_info->sectorsize_bits; + fs_info->blocksize_bits; unsigned int end_bit = (min(page_end + 1, found_start + found_len) - - page_start) >> fs_info->sectorsize_bits; + page_start) >> fs_info->blocksize_bits; bitmap_clear(&bio_ctrl->submit_bitmap, start_bit, end_bit - start_bit); } /* @@ -1292,7 +1292,7 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode, * If all ranges are submitted asynchronously, we just need to account * for them here. */ - if (bitmap_empty(&bio_ctrl->submit_bitmap, fs_info->sectors_per_page)) { + if (bitmap_empty(&bio_ctrl->submit_bitmap, fs_info->blocks_per_page)) { wbc->nr_to_write -= delalloc_to_write; return 1; } @@ -1310,12 +1310,12 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode, } /* - * Return 0 if we have submitted or queued the sector for submission. + * Return 0 if we have submitted or queued the block for submission. * Return <0 for critical errors. * * Caller should make sure filepos < i_size and handle filepos >= i_size case. */ -static int submit_one_sector(struct btrfs_inode *inode, +static int submit_one_block(struct btrfs_inode *inode, struct folio *folio, u64 filepos, struct btrfs_bio_ctrl *bio_ctrl, loff_t i_size) @@ -1326,22 +1326,22 @@ static int submit_one_sector(struct btrfs_inode *inode, u64 disk_bytenr; u64 extent_offset; u64 em_end; - const u32 sectorsize = fs_info->sectorsize; + const u32 blocksize = fs_info->blocksize; - ASSERT(IS_ALIGNED(filepos, sectorsize)); + ASSERT(IS_ALIGNED(filepos, blocksize)); /* @filepos >= i_size case should be handled by the caller. 
*/ ASSERT(filepos < i_size); - em = btrfs_get_extent(inode, NULL, filepos, sectorsize); + em = btrfs_get_extent(inode, NULL, filepos, blocksize); if (IS_ERR(em)) return PTR_ERR(em); extent_offset = filepos - em->start; em_end = extent_map_end(em); ASSERT(filepos <= em_end); - ASSERT(IS_ALIGNED(em->start, sectorsize)); - ASSERT(IS_ALIGNED(em->len, sectorsize)); + ASSERT(IS_ALIGNED(em->start, blocksize)); + ASSERT(IS_ALIGNED(em->len, blocksize)); block_start = extent_map_block_start(em); disk_bytenr = extent_map_block_start(em) + extent_offset; @@ -1359,18 +1359,18 @@ static int submit_one_sector(struct btrfs_inode *inode, * So clear subpage dirty bit here so next time we won't submit * a folio for a range already written to disk. */ - btrfs_folio_clear_dirty(fs_info, folio, filepos, sectorsize); - btrfs_folio_set_writeback(fs_info, folio, filepos, sectorsize); + btrfs_folio_clear_dirty(fs_info, folio, filepos, blocksize); + btrfs_folio_set_writeback(fs_info, folio, filepos, blocksize); /* * Above call should set the whole folio with writeback flag, even - * just for a single subpage sector. + * just for a single subpage block. * As long as the folio is properly locked and the range is correct, * we should always get the folio with writeback flag. 
*/ ASSERT(folio_test_writeback(folio)); submit_extent_folio(bio_ctrl, disk_bytenr, folio, - sectorsize, filepos - folio_pos(folio)); + blocksize, filepos - folio_pos(folio)); return 0; } @@ -1407,15 +1407,15 @@ static noinline_for_stack int extent_writepage_io(struct btrfs_inode *inode, return 1; } - for (cur = start; cur < start + len; cur += fs_info->sectorsize) - set_bit((cur - folio_start) >> fs_info->sectorsize_bits, &range_bitmap); + for (cur = start; cur < start + len; cur += fs_info->blocksize) + set_bit((cur - folio_start) >> fs_info->blocksize_bits, &range_bitmap); bitmap_and(&bio_ctrl->submit_bitmap, &bio_ctrl->submit_bitmap, &range_bitmap, - fs_info->sectors_per_page); + fs_info->blocks_per_page); bio_ctrl->end_io_func = end_bbio_data_write; - for_each_set_bit(bit, &bio_ctrl->submit_bitmap, fs_info->sectors_per_page) { - cur = folio_pos(folio) + (bit << fs_info->sectorsize_bits); + for_each_set_bit(bit, &bio_ctrl->submit_bitmap, fs_info->blocks_per_page) { + cur = folio_pos(folio) + (bit << fs_info->blocksize_bits); if (cur >= i_size) { btrfs_mark_ordered_io_finished(inode, folio, cur, @@ -1425,21 +1425,21 @@ static noinline_for_stack int extent_writepage_io(struct btrfs_inode *inode, * bother writing back. * But we still need to clear the dirty subpage bit, or * the next time the folio gets dirtied, we will try to - * writeback the sectors with subpage dirty bits, + * writeback the blocks with subpage dirty bits, * causing writeback without ordered extent. 
*/ btrfs_folio_clear_dirty(fs_info, folio, cur, start + len - cur); break; } - ret = submit_one_sector(inode, folio, cur, bio_ctrl, i_size); + ret = submit_one_block(inode, folio, cur, bio_ctrl, i_size); if (ret < 0) goto out; submitted_io = true; } out: /* - * If we didn't submitted any sector (>= i_size), folio dirty get + * If we didn't submitted any block (>= i_size), folio dirty get * cleared but PAGECACHE_TAG_DIRTY is not cleared (only cleared * by folio_start_writeback() if the folio is not dirty). * @@ -1658,7 +1658,7 @@ static struct extent_buffer *find_extent_buffer_nolock( rcu_read_lock(); eb = radix_tree_lookup(&fs_info->buffer_radix, - start >> fs_info->sectorsize_bits); + start >> fs_info->blocksize_bits); if (eb && atomic_inc_not_zero(&eb->refs)) { rcu_read_unlock(); return eb; @@ -1794,10 +1794,10 @@ static int submit_eb_subpage(struct folio *folio, struct writeback_control *wbc) int submitted = 0; u64 folio_start = folio_pos(folio); int bit_start = 0; - int sectors_per_node = fs_info->nodesize >> fs_info->sectorsize_bits; + int blocks_per_node = fs_info->nodesize >> fs_info->blocksize_bits; /* Lock and write each dirty extent buffers in the range */ - while (bit_start < fs_info->sectors_per_page) { + while (bit_start < fs_info->blocks_per_page) { struct btrfs_subpage *subpage = folio_get_private(folio); struct extent_buffer *eb; unsigned long flags; @@ -1813,7 +1813,7 @@ static int submit_eb_subpage(struct folio *folio, struct writeback_control *wbc) break; } spin_lock_irqsave(&subpage->lock, flags); - if (!test_bit(bit_start + btrfs_bitmap_nr_dirty * fs_info->sectors_per_page, + if (!test_bit(bit_start + btrfs_bitmap_nr_dirty * fs_info->blocks_per_page, subpage->bitmaps)) { spin_unlock_irqrestore(&subpage->lock, flags); spin_unlock(&folio->mapping->i_private_lock); @@ -1821,8 +1821,8 @@ static int submit_eb_subpage(struct folio *folio, struct writeback_control *wbc) continue; } - start = folio_start + bit_start * fs_info->sectorsize; - bit_start 
+= sectors_per_node; + start = folio_start + bit_start * fs_info->blocksize; + bit_start += blocks_per_node; /* * Here we just want to grab the eb without touching extra @@ -2246,7 +2246,7 @@ void extent_write_locked_range(struct inode *inode, const struct folio *locked_f int ret = 0; struct address_space *mapping = inode->i_mapping; struct btrfs_fs_info *fs_info = inode_to_fs_info(inode); - const u32 sectorsize = fs_info->sectorsize; + const u32 blocksize = fs_info->blocksize; loff_t i_size = i_size_read(inode); u64 cur = start; struct btrfs_bio_ctrl bio_ctrl = { @@ -2257,7 +2257,7 @@ void extent_write_locked_range(struct inode *inode, const struct folio *locked_f if (wbc->no_cgroup_owner) bio_ctrl.opf |= REQ_BTRFS_CGROUP_PUNT; - ASSERT(IS_ALIGNED(start, sectorsize) && IS_ALIGNED(end + 1, sectorsize)); + ASSERT(IS_ALIGNED(start, blocksize) && IS_ALIGNED(end + 1, blocksize)); while (cur <= end) { u64 cur_end = min(round_down(cur, PAGE_SIZE) + PAGE_SIZE - 1, end); @@ -2283,7 +2283,7 @@ void extent_write_locked_range(struct inode *inode, const struct folio *locked_f ASSERT(folio_test_dirty(folio)); /* - * Set the submission bitmap to submit all sectors. + * Set the submission bitmap to submit all blocks. * extent_writepage_io() will do the truncation correctly. 
*/ bio_ctrl.submit_bitmap = (unsigned long)-1; @@ -2354,7 +2354,7 @@ int extent_invalidate_folio(struct extent_io_tree *tree, struct extent_state *cached_state = NULL; u64 start = folio_pos(folio); u64 end = start + folio_size(folio) - 1; - size_t blocksize = folio_to_fs_info(folio)->sectorsize; + size_t blocksize = folio_to_fs_info(folio)->blocksize; /* This function is only called for the btree inode */ ASSERT(tree->owner == IO_TREE_BTREE_INODE_IO); @@ -2810,7 +2810,7 @@ struct extent_buffer *alloc_test_extent_buffer(struct btrfs_fs_info *fs_info, } spin_lock(&fs_info->buffer_lock); ret = radix_tree_insert(&fs_info->buffer_radix, - start >> fs_info->sectorsize_bits, eb); + start >> fs_info->blocksize_bits, eb); spin_unlock(&fs_info->buffer_lock); radix_tree_preload_end(); if (ret == -EEXIST) { @@ -2867,7 +2867,7 @@ static struct extent_buffer *grab_extent_buffer( static int check_eb_alignment(struct btrfs_fs_info *fs_info, u64 start) { - if (!IS_ALIGNED(start, fs_info->sectorsize)) { + if (!IS_ALIGNED(start, fs_info->blocksize)) { btrfs_err(fs_info, "bad tree block start %llu", start); return -EINVAL; } @@ -3128,7 +3128,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info, spin_lock(&fs_info->buffer_lock); ret = radix_tree_insert(&fs_info->buffer_radix, - start >> fs_info->sectorsize_bits, eb); + start >> fs_info->blocksize_bits, eb); spin_unlock(&fs_info->buffer_lock); radix_tree_preload_end(); if (ret == -EEXIST) { @@ -3212,7 +3212,7 @@ static int release_extent_buffer(struct extent_buffer *eb) spin_lock(&fs_info->buffer_lock); radix_tree_delete(&fs_info->buffer_radix, - eb->start >> fs_info->sectorsize_bits); + eb->start >> fs_info->blocksize_bits); spin_unlock(&fs_info->buffer_lock); } else { spin_unlock(&eb->refs_lock); @@ -3714,7 +3714,7 @@ int memcmp_extent_buffer(const struct extent_buffer *eb, const void *ptrv, /* * Check that the extent buffer is uptodate. 
* - * For regular sector size == PAGE_SIZE case, check if @page is uptodate. + * For regular block size == PAGE_SIZE case, check if @page is uptodate. * For subpage case, check if the range covered by the eb has EXTENT_UPTODATE. */ static void assert_eb_folio_uptodate(const struct extent_buffer *eb, int i) @@ -4126,7 +4126,7 @@ static struct extent_buffer *get_next_extent_buffer( int i; ret = radix_tree_gang_lookup(&fs_info->buffer_radix, - (void **)gang, cur >> fs_info->sectorsize_bits, + (void **)gang, cur >> fs_info->blocksize_bits, min_t(unsigned int, GANG_LOOKUP_SIZE, PAGE_SIZE / fs_info->nodesize)); if (ret == 0) diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h index 8a36117ed453..c0e70412851f 100644 --- a/fs/btrfs/extent_io.h +++ b/fs/btrfs/extent_io.h @@ -145,13 +145,13 @@ static inline unsigned long offset_in_eb_folio(const struct extent_buffer *eb, * @eb: target extent buffer * @start: offset inside the extent buffer * - * Will handle both sectorsize == PAGE_SIZE and sectorsize < PAGE_SIZE cases. + * Will handle both blocksize == PAGE_SIZE and blocksize < PAGE_SIZE cases. */ static inline size_t get_eb_offset_in_folio(const struct extent_buffer *eb, unsigned long offset) { /* - * 1) sectorsize == PAGE_SIZE and nodesize >= PAGE_SIZE case + * 1) blocksize == PAGE_SIZE and nodesize >= PAGE_SIZE case * 1.1) One large folio covering the whole eb * The eb->start is aligned to folio size, thus adding it * won't cause any difference. @@ -159,7 +159,7 @@ static inline size_t get_eb_offset_in_folio(const struct extent_buffer *eb, * The eb->start is aligned to folio (page) size, thus * adding it won't cause any difference. * - * 2) sectorsize < PAGE_SIZE and nodesize < PAGE_SIZE case + * 2) blocksize < PAGE_SIZE and nodesize < PAGE_SIZE case * In this case there would only be one page sized folio, and there * may be several different extent buffers in the page/folio. 
* We need to add eb->start to properly access the offset inside @@ -172,7 +172,7 @@ static inline unsigned long get_eb_folio_index(const struct extent_buffer *eb, unsigned long offset) { /* - * 1) sectorsize == PAGE_SIZE and nodesize >= PAGE_SIZE case + * 1) blocksize == PAGE_SIZE and nodesize >= PAGE_SIZE case * 1.1) One large folio covering the whole eb. * the folio_shift would be large enough to always make us * return 0 as index. @@ -180,7 +180,7 @@ static inline unsigned long get_eb_folio_index(const struct extent_buffer *eb, * The folio_shift would be PAGE_SHIFT, giving us the correct * index. * - * 2) sectorsize < PAGE_SIZE and nodesize < PAGE_SIZE case + * 2) blocksize < PAGE_SIZE and nodesize < PAGE_SIZE case * The folio would only be page sized, and always give us 0 as index. */ return offset >> eb->folio_shift; @@ -275,10 +275,10 @@ void btrfs_readahead_node_child(struct extent_buffer *node, int slot); static inline int num_extent_pages(const struct extent_buffer *eb) { /* - * For sectorsize == PAGE_SIZE case, since nodesize is always aligned to - * sectorsize, it's just eb->len >> PAGE_SHIFT. + * For blocksize == PAGE_SIZE case, since nodesize is always aligned to + * blocksize, it's just eb->len >> PAGE_SHIFT. * - * For sectorsize < PAGE_SIZE case, we could have nodesize < PAGE_SIZE, + * For blocksize < PAGE_SIZE case, we could have nodesize < PAGE_SIZE, * thus have to ensure we get at least one page. 
*/ return (eb->len >> PAGE_SHIFT) ?: 1;

From patchwork Wed Dec 18 09:41:22 2024
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 13913305
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 06/18] btrfs: migrate compression related code to use block size terminology
Date: Wed, 18 Dec 2024 20:11:22 +1030
Message-ID: <02dbd122ec14fb9a927e7b4ad2a5a3557a6862df.1734514696.git.wqu@suse.com>

Straightforward rename from "sector" to "block", except the bio interface. Most of them are light users of "sectorsize", but LZO is the exception because the header format is fully blocksize dependent.
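[Editor's note: the LZO exception above stems from btrfs's on-disk LZO framing, where a 4-byte total-length header is followed by per-segment 4-byte length headers that are not allowed to straddle a block boundary. The following is a hypothetical standalone illustration of that framing under assumed parameters (little-endian fields, 4096-byte blocks) — it is not the kernel's lzo.c code.]

```c
#include <assert.h>
#include <stdint.h>

/* Decode a 4-byte little-endian length field, as used by the framing sketch. */
static uint32_t read_le32(const uint8_t *p)
{
	return p[0] | ((uint32_t)p[1] << 8) | ((uint32_t)p[2] << 16) |
	       ((uint32_t)p[3] << 24);
}

/*
 * Count segments in a buffer laid out as:
 *   [4-byte LE total length][segments: 4-byte LE length + data ...]
 * with the writer assumed to pad to the next block whenever fewer than
 * 4 bytes remain before a block boundary, so a segment header never
 * crosses a block.  This is why the layout depends on the block size.
 */
static int count_segments(const uint8_t *buf, uint32_t blocksize)
{
	uint32_t total = read_le32(buf);
	uint32_t cur = 4;
	int nr = 0;

	while (cur < total) {
		/* Skip padding if a header would cross a block boundary. */
		if (blocksize - (cur % blocksize) < 4)
			cur = (cur / blocksize + 1) * blocksize;
		cur += 4 + read_le32(buf + cur);
		nr++;
	}
	return nr;
}
```

A stream written for 4K blocks parses differently under a different block size, which is why the series cannot treat LZO as a pure terminology rename.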
Signed-off-by: Qu Wenruo --- fs/btrfs/compression.c | 30 ++++++++-------- fs/btrfs/lzo.c | 80 +++++++++++++++++++++--------------------- fs/btrfs/zlib.c | 2 +- fs/btrfs/zstd.c | 6 ++-- 4 files changed, 59 insertions(+), 59 deletions(-) diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c index 0c4d486c3048..847d5e14cc75 100644 --- a/fs/btrfs/compression.c +++ b/fs/btrfs/compression.c @@ -378,8 +378,8 @@ void btrfs_submit_compressed_write(struct btrfs_ordered_extent *ordered, struct btrfs_fs_info *fs_info = inode->root->fs_info; struct compressed_bio *cb; - ASSERT(IS_ALIGNED(ordered->file_offset, fs_info->sectorsize)); - ASSERT(IS_ALIGNED(ordered->num_bytes, fs_info->sectorsize)); + ASSERT(IS_ALIGNED(ordered->file_offset, fs_info->blocksize)); + ASSERT(IS_ALIGNED(ordered->num_bytes, fs_info->blocksize)); cb = alloc_compressed_bio(inode, ordered->file_offset, REQ_OP_WRITE | write_flags, @@ -405,7 +405,7 @@ void btrfs_submit_compressed_write(struct btrfs_ordered_extent *ordered, * NOTE: this won't work well for subpage, as for subpage read, we lock the * full page then submit bio for each compressed/regular extents. * - * This means, if we have several sectors in the same page points to the same + * This means, if we have several blocks in the same page points to the same * on-disk compressed data, we will re-read the same extent many times and * this function can only help for the next page. */ @@ -425,7 +425,7 @@ static noinline int add_ra_bio_pages(struct inode *inode, struct address_space *mapping = inode->i_mapping; struct extent_map_tree *em_tree; struct extent_io_tree *tree; - int sectors_missed = 0; + int blocks_missed = 0; em_tree = &BTRFS_I(inode)->extent_tree; tree = &BTRFS_I(inode)->io_tree; @@ -440,7 +440,7 @@ static noinline int add_ra_bio_pages(struct inode *inode, * This makes readahead less effective, so here disable readahead for * subpage for now, until full compressed write is supported. 
*/ - if (fs_info->sectorsize < PAGE_SIZE) + if (fs_info->blocksize < PAGE_SIZE) return 0; end_index = (i_size_read(inode) - 1) >> PAGE_SHIFT; @@ -459,11 +459,11 @@ static noinline int add_ra_bio_pages(struct inode *inode, u64 offset = offset_in_folio(folio, cur); folio_put(folio); - sectors_missed += (folio_sz - offset) >> - fs_info->sectorsize_bits; + blocks_missed += (folio_sz - offset) >> + fs_info->blocksize_bits; /* Beyond threshold, no need to continue */ - if (sectors_missed > 4) + if (blocks_missed > 4) break; /* @@ -510,7 +510,7 @@ static noinline int add_ra_bio_pages(struct inode *inode, * to this compressed extent on disk. */ if (!em || cur < em->start || - (cur + fs_info->sectorsize > extent_map_end(em)) || + (cur + fs_info->blocksize > extent_map_end(em)) || (extent_map_block_start(em) >> SECTOR_SHIFT) != orig_bio->bi_iter.bi_sector) { free_extent_map(em); @@ -544,7 +544,7 @@ static noinline int add_ra_bio_pages(struct inode *inode, * subpage::readers number, as at endio we will decrease * subpage::readers and to unlock the page. */ - if (fs_info->sectorsize < PAGE_SIZE) + if (fs_info->blocksize < PAGE_SIZE) btrfs_folio_set_lock(fs_info, folio, cur, add_size); folio_put(folio); cur += add_size; @@ -581,7 +581,7 @@ void btrfs_submit_compressed_read(struct btrfs_bio *bbio) /* we need the actual starting offset of this extent in the file */ read_lock(&em_tree->lock); - em = lookup_extent_mapping(em_tree, file_offset, fs_info->sectorsize); + em = lookup_extent_mapping(em_tree, file_offset, fs_info->blocksize); read_unlock(&em_tree->lock); if (!em) { ret = BLK_STS_IOERR; @@ -1068,15 +1068,15 @@ int btrfs_decompress(int type, const u8 *data_in, struct folio *dest_folio, { struct btrfs_fs_info *fs_info = folio_to_fs_info(dest_folio); struct list_head *workspace; - const u32 sectorsize = fs_info->sectorsize; + const u32 blocksize = fs_info->blocksize; int ret; /* * The full destination page range should not exceed the page size. 
- * And the @destlen should not exceed sectorsize, as this is only called for - * inline file extents, which should not exceed sectorsize. + * And the @destlen should not exceed blocksize, as this is only called for + * inline file extents, which should not exceed blocksize. */ - ASSERT(dest_pgoff + destlen <= PAGE_SIZE && destlen <= sectorsize); + ASSERT(dest_pgoff + destlen <= PAGE_SIZE && destlen <= blocksize); workspace = get_workspace(type, 0); ret = compression_decompress(type, workspace, data_in, dest_folio, diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c index a45bc11f8665..f124a408edf3 100644 --- a/fs/btrfs/lzo.c +++ b/fs/btrfs/lzo.c @@ -35,19 +35,19 @@ * payload. * One regular LZO compressed extent can have one or more segments. * For inlined LZO compressed extent, only one segment is allowed. - * One segment represents at most one sector of uncompressed data. + * One segment represents at most one block of uncompressed data. * * 2.1 Segment header * Fixed size. LZO_LEN (4) bytes long, LE32. * Records the total size of the segment (not including the header). - * Segment header never crosses sector boundary, thus it's possible to - * have at most 3 padding zeros at the end of the sector. + * Segment header never crosses block boundary, thus it's possible to + * have at most 3 padding zeros at the end of the block. * * 2.2 Data Payload - * Variable size. Size up limit should be lzo1x_worst_compress(sectorsize) - * which is 4419 for a 4KiB sectorsize. + * Variable size. Size up limit should be lzo1x_worst_compress(blocksize) + * which is 4419 for a 4KiB blocksize. * - * Example with 4K sectorsize: + * Example with 4K blocksize: * Page 1: * 0 0x2 0x4 0x6 0x8 0xa 0xc 0xe 0x10 * 0x0000 | Header | SegHdr 01 | Data payload 01 ... 
| @@ -133,9 +133,9 @@ static int copy_compressed_data_to_page(char *compressed_data, struct folio **out_folios, unsigned long max_nr_folio, u32 *cur_out, - const u32 sectorsize) + const u32 blocksize) { - u32 sector_bytes_left; + u32 block_bytes_left; u32 orig_out; struct folio *cur_folio; char *kaddr; @@ -144,10 +144,10 @@ static int copy_compressed_data_to_page(char *compressed_data, return -E2BIG; /* - * We never allow a segment header crossing sector boundary, previous - * run should ensure we have enough space left inside the sector. + * We never allow a segment header crossing block boundary, previous + * run should ensure we have enough space left inside the block. */ - ASSERT((*cur_out / sectorsize) == (*cur_out + LZO_LEN - 1) / sectorsize); + ASSERT((*cur_out / blocksize) == (*cur_out + LZO_LEN - 1) / blocksize); cur_folio = out_folios[*cur_out / PAGE_SIZE]; /* Allocate a new page */ @@ -167,7 +167,7 @@ static int copy_compressed_data_to_page(char *compressed_data, /* Copy compressed data */ while (*cur_out - orig_out < compressed_size) { - u32 copy_len = min_t(u32, sectorsize - *cur_out % sectorsize, + u32 copy_len = min_t(u32, blocksize - *cur_out % blocksize, orig_out + compressed_size - *cur_out); kunmap_local(kaddr); @@ -193,16 +193,16 @@ static int copy_compressed_data_to_page(char *compressed_data, /* * Check if we can fit the next segment header into the remaining space - * of the sector. + * of the block. 
*/ - sector_bytes_left = round_up(*cur_out, sectorsize) - *cur_out; - if (sector_bytes_left >= LZO_LEN || sector_bytes_left == 0) + block_bytes_left = round_up(*cur_out, blocksize) - *cur_out; + if (block_bytes_left >= LZO_LEN || block_bytes_left == 0) goto out; /* The remaining size is not enough, pad it with zeros */ memset(kaddr + offset_in_page(*cur_out), 0, - sector_bytes_left); - *cur_out += sector_bytes_left; + block_bytes_left); + *cur_out += block_bytes_left; out: kunmap_local(kaddr); @@ -214,7 +214,7 @@ int lzo_compress_folios(struct list_head *ws, struct address_space *mapping, unsigned long *total_in, unsigned long *total_out) { struct workspace *workspace = list_entry(ws, struct workspace, list); - const u32 sectorsize = inode_to_fs_info(mapping->host)->sectorsize; + const u32 blocksize = inode_to_fs_info(mapping->host)->blocksize; struct folio *folio_in = NULL; char *sizes_ptr; const unsigned long max_nr_folio = *out_folios; @@ -237,8 +237,8 @@ int lzo_compress_folios(struct list_head *ws, struct address_space *mapping, cur_out += LZO_LEN; while (cur_in < start + len) { char *data_in; - const u32 sectorsize_mask = sectorsize - 1; - u32 sector_off = (cur_in - start) & sectorsize_mask; + const u32 blocksize_mask = blocksize - 1; + u32 block_off = (cur_in - start) & blocksize_mask; u32 in_len; size_t out_len; @@ -249,8 +249,8 @@ int lzo_compress_folios(struct list_head *ws, struct address_space *mapping, goto out; } - /* Compress at most one sector of data each time */ - in_len = min_t(u32, start + len - cur_in, sectorsize - sector_off); + /* Compress at most one block of data each time */ + in_len = min_t(u32, start + len - cur_in, blocksize - block_off); ASSERT(in_len); data_in = kmap_local_folio(folio_in, 0); ret = lzo1x_1_compress(data_in + @@ -266,17 +266,17 @@ int lzo_compress_folios(struct list_head *ws, struct address_space *mapping, ret = copy_compressed_data_to_page(workspace->cbuf, out_len, folios, max_nr_folio, - &cur_out, sectorsize); + 
&cur_out, blocksize); if (ret < 0) goto out; cur_in += in_len; /* - * Check if we're making it bigger after two sectors. And if + * Check if we're making it bigger after two blocks. And if * it is so, give up. */ - if (cur_in - start > sectorsize * 2 && cur_in - start < cur_out) { + if (cur_in - start > blocksize * 2 && cur_in - start < cur_out) { ret = -E2BIG; goto out; } @@ -332,7 +332,7 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb) { struct workspace *workspace = list_entry(ws, struct workspace, list); const struct btrfs_fs_info *fs_info = cb->bbio.inode->root->fs_info; - const u32 sectorsize = fs_info->sectorsize; + const u32 blocksize = fs_info->blocksize; char *kaddr; int ret; /* Compressed data length, can be unaligned */ @@ -351,11 +351,11 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb) * LZO header length check * * The total length should not exceed the maximum extent length, - * and all sectors should be used. + * and all blocks should be used. * If this happens, it means the compressed extent is corrupted. */ if (unlikely(len_in > min_t(size_t, BTRFS_MAX_COMPRESSED, cb->compressed_len) || - round_up(len_in, sectorsize) < cb->compressed_len)) { + round_up(len_in, blocksize) < cb->compressed_len)) { struct btrfs_inode *inode = cb->bbio.inode; btrfs_err(fs_info, @@ -370,15 +370,15 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb) struct folio *cur_folio; /* Length of the compressed segment */ u32 seg_len; - u32 sector_bytes_left; - size_t out_len = lzo1x_worst_compress(sectorsize); + u32 block_bytes_left; + size_t out_len = lzo1x_worst_compress(blocksize); /* * We should always have enough space for one segment header - * inside current sector. + * inside current block. 
*/ - ASSERT(cur_in / sectorsize == - (cur_in + LZO_LEN - 1) / sectorsize); + ASSERT(cur_in / blocksize == + (cur_in + LZO_LEN - 1) / blocksize); cur_folio = cb->compressed_folios[cur_in / PAGE_SIZE]; ASSERT(cur_folio); kaddr = kmap_local_folio(cur_folio, 0); @@ -425,13 +425,13 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb) return 0; ret = 0; - /* Check if the sector has enough space for a segment header */ - sector_bytes_left = sectorsize - (cur_in % sectorsize); - if (sector_bytes_left >= LZO_LEN) + /* Check if the block has enough space for a segment header */ + block_bytes_left = blocksize - (cur_in % blocksize); + if (block_bytes_left >= LZO_LEN) continue; /* Skip the padding zeros */ - cur_in += sector_bytes_left; + cur_in += block_bytes_left; } return 0; @@ -443,7 +443,7 @@ int lzo_decompress(struct list_head *ws, const u8 *data_in, { struct workspace *workspace = list_entry(ws, struct workspace, list); struct btrfs_fs_info *fs_info = folio_to_fs_info(dest_folio); - const u32 sectorsize = fs_info->sectorsize; + const u32 blocksize = fs_info->blocksize; size_t in_len; size_t out_len; size_t max_segment_len = WORKSPACE_BUF_LENGTH; @@ -464,7 +464,7 @@ int lzo_decompress(struct list_head *ws, const u8 *data_in, } data_in += LZO_LEN; - out_len = sectorsize; + out_len = blocksize; ret = lzo1x_decompress_safe(data_in, in_len, workspace->buf, &out_len); if (unlikely(ret != LZO_E_OK)) { struct btrfs_inode *inode = folio_to_inode(dest_folio); @@ -477,7 +477,7 @@ int lzo_decompress(struct list_head *ws, const u8 *data_in, goto out; } - ASSERT(out_len <= sectorsize); + ASSERT(out_len <= blocksize); memcpy_to_folio(dest_folio, dest_pgoff, workspace->buf, out_len); /* Early end, considered as an error. 
*/ if (unlikely(out_len < destlen)) { diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c index ddf0d5a448a7..aee1a9cd35e6 100644 --- a/fs/btrfs/zlib.c +++ b/fs/btrfs/zlib.c @@ -431,7 +431,7 @@ int zlib_decompress(struct list_head *ws, const u8 *data_in, } /* - * Everything (in/out buf) should be at most one sector, there should + * Everything (in/out buf) should be at most one block, there should * be no need to switch any input/output buffer. */ ret = zlib_inflate(&workspace->strm, Z_FINISH); diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c index 5232b56d5892..0c97e534b490 100644 --- a/fs/btrfs/zstd.c +++ b/fs/btrfs/zstd.c @@ -663,7 +663,7 @@ int zstd_decompress(struct list_head *ws, const u8 *data_in, { struct workspace *workspace = list_entry(ws, struct workspace, list); struct btrfs_fs_info *fs_info = btrfs_sb(folio_inode(dest_folio)->i_sb); - const u32 sectorsize = fs_info->sectorsize; + const u32 blocksize = fs_info->blocksize; zstd_dstream *stream; int ret = 0; unsigned long to_copy = 0; @@ -687,10 +687,10 @@ int zstd_decompress(struct list_head *ws, const u8 *data_in, workspace->out_buf.dst = workspace->buf; workspace->out_buf.pos = 0; - workspace->out_buf.size = sectorsize; + workspace->out_buf.size = blocksize; /* - * Since both input and output buffers should not exceed one sector, + * Since both input and output buffers should not exceed one block, * one call should end the decompression. 
*/ ret = zstd_decompress_stream(stream, &workspace->out_buf, &workspace->in_buf);

From patchwork Wed Dec 18 09:41:23 2024
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 13913306
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 07/18] btrfs: migrate free space cache code to use block size terminology
Date: Wed, 18 Dec 2024 20:11:23 +1030
Message-ID: <7b07c87c4b56c123bf7b29fe05d2de8a84da1f4c.1734514696.git.wqu@suse.com>
X-Mailer: git-send-email 2.47.1
X-Mailing-List: linux-btrfs@vger.kernel.org
MIME-Version: 1.0

Straightforward rename from "sector" to "block".

Signed-off-by: Qu Wenruo --- fs/btrfs/free-space-cache.c | 8 ++++---- fs/btrfs/free-space-tree.c | 28 ++++++++++++++-------------- 2 files changed, 18 insertions(+), 18 deletions(-) diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c index 17707c898eae..d02ee2f38b60 100644 --- a/fs/btrfs/free-space-cache.c +++ b/fs/btrfs/free-space-cache.c @@ -2290,7 +2290,7 @@ static bool use_bitmap(struct btrfs_free_space_ctl *ctl, * of cache left then go ahead an dadd them, no sense in adding * the overhead of a bitmap if we don't have to.
*/ - if (info->bytes <= fs_info->sectorsize * 8) { + if (info->bytes <= fs_info->blocksize * 8) { if (ctl->free_extents * 3 <= ctl->extents_thresh) return false; } else { @@ -2959,7 +2959,7 @@ void btrfs_init_free_space_ctl(struct btrfs_block_group *block_group, struct btrfs_fs_info *fs_info = block_group->fs_info; spin_lock_init(&ctl->tree_lock); - ctl->unit = fs_info->sectorsize; + ctl->unit = fs_info->blocksize; ctl->start = block_group->start; ctl->block_group = block_group; ctl->op = &free_space_op; @@ -3583,10 +3583,10 @@ int btrfs_find_space_cluster(struct btrfs_block_group *block_group, min_bytes = cont1_bytes; } else if (block_group->flags & BTRFS_BLOCK_GROUP_METADATA) { cont1_bytes = bytes; - min_bytes = fs_info->sectorsize; + min_bytes = fs_info->blocksize; } else { cont1_bytes = max(bytes, (bytes + empty_size) >> 2); - min_bytes = fs_info->sectorsize; + min_bytes = fs_info->blocksize; } spin_lock(&ctl->tree_lock); diff --git a/fs/btrfs/free-space-tree.c b/fs/btrfs/free-space-tree.c index 7ba50e133921..e6dbf3e39b00 100644 --- a/fs/btrfs/free-space-tree.c +++ b/fs/btrfs/free-space-tree.c @@ -49,7 +49,7 @@ void set_free_space_tree_thresholds(struct btrfs_block_group *cache) * We convert to bitmaps when the disk space required for using extents * exceeds that required for using bitmaps. 
*/ - bitmap_range = cache->fs_info->sectorsize * BTRFS_FREE_SPACE_BITMAP_BITS; + bitmap_range = cache->fs_info->blocksize * BTRFS_FREE_SPACE_BITMAP_BITS; num_bitmaps = div_u64(cache->length + bitmap_range - 1, bitmap_range); bitmap_size = sizeof(struct btrfs_item) + BTRFS_FREE_SPACE_BITMAP_SIZE; total_bitmap_size = num_bitmaps * bitmap_size; @@ -158,7 +158,7 @@ static int btrfs_search_prev_slot(struct btrfs_trans_handle *trans, static inline u32 free_space_bitmap_size(const struct btrfs_fs_info *fs_info, u64 size) { - return DIV_ROUND_UP(size >> fs_info->sectorsize_bits, BITS_PER_BYTE); + return DIV_ROUND_UP(size >> fs_info->blocksize_bits, BITS_PER_BYTE); } static unsigned long *alloc_bitmap(u32 bitmap_size) @@ -258,9 +258,9 @@ int convert_free_space_to_bitmaps(struct btrfs_trans_handle *trans, ASSERT(found_key.objectid + found_key.offset <= end); first = div_u64(found_key.objectid - start, - fs_info->sectorsize); + fs_info->blocksize); last = div_u64(found_key.objectid + found_key.offset - start, - fs_info->sectorsize); + fs_info->blocksize); le_bitmap_set(bitmap, first, last - first); extent_count++; @@ -301,7 +301,7 @@ int convert_free_space_to_bitmaps(struct btrfs_trans_handle *trans, } bitmap_cursor = (char *)bitmap; - bitmap_range = fs_info->sectorsize * BTRFS_FREE_SPACE_BITMAP_BITS; + bitmap_range = fs_info->blocksize * BTRFS_FREE_SPACE_BITMAP_BITS; i = start; while (i < end) { unsigned long ptr; @@ -397,7 +397,7 @@ int convert_free_space_to_extents(struct btrfs_trans_handle *trans, ASSERT(found_key.objectid + found_key.offset <= end); bitmap_pos = div_u64(found_key.objectid - start, - fs_info->sectorsize * + fs_info->blocksize * BITS_PER_BYTE); bitmap_cursor = ((char *)bitmap) + bitmap_pos; data_size = free_space_bitmap_size(fs_info, @@ -433,16 +433,16 @@ int convert_free_space_to_extents(struct btrfs_trans_handle *trans, btrfs_mark_buffer_dirty(trans, leaf); btrfs_release_path(path); - nrbits = block_group->length >> block_group->fs_info->sectorsize_bits; 
+ nrbits = block_group->length >> block_group->fs_info->blocksize_bits; start_bit = find_next_bit_le(bitmap, nrbits, 0); while (start_bit < nrbits) { end_bit = find_next_zero_bit_le(bitmap, nrbits, start_bit); ASSERT(start_bit < end_bit); - key.objectid = start + start_bit * block_group->fs_info->sectorsize; + key.objectid = start + start_bit * block_group->fs_info->blocksize; key.type = BTRFS_FREE_SPACE_EXTENT_KEY; - key.offset = (end_bit - start_bit) * block_group->fs_info->sectorsize; + key.offset = (end_bit - start_bit) * block_group->fs_info->blocksize; ret = btrfs_insert_empty_item(trans, root, path, &key, 0); if (ret) @@ -529,7 +529,7 @@ int free_space_test_bit(struct btrfs_block_group *block_group, ptr = btrfs_item_ptr_offset(leaf, path->slots[0]); i = div_u64(offset - found_start, - block_group->fs_info->sectorsize); + block_group->fs_info->blocksize); return !!extent_buffer_test_bit(leaf, ptr, i); } @@ -558,8 +558,8 @@ static void free_space_set_bits(struct btrfs_trans_handle *trans, end = found_end; ptr = btrfs_item_ptr_offset(leaf, path->slots[0]); - first = (*start - found_start) >> fs_info->sectorsize_bits; - last = (end - found_start) >> fs_info->sectorsize_bits; + first = (*start - found_start) >> fs_info->blocksize_bits; + last = (end - found_start) >> fs_info->blocksize_bits; if (bit) extent_buffer_bitmap_set(leaf, ptr, first, last - first); else @@ -619,7 +619,7 @@ static int modify_free_space_bitmap(struct btrfs_trans_handle *trans, * that block is within the block group. 
*/ if (start > block_group->start) { - u64 prev_block = start - block_group->fs_info->sectorsize; + u64 prev_block = start - block_group->fs_info->blocksize; key.objectid = prev_block; key.type = (u8)-1; @@ -1544,7 +1544,7 @@ static int load_free_space_bitmaps(struct btrfs_caching_control *caching_ctl, extent_count++; } prev_bit = bit; - offset += fs_info->sectorsize; + offset += fs_info->blocksize; } } if (prev_bit == 1) {

From patchwork Wed Dec 18 09:41:24 2024
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 13913308

From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 08/18] btrfs: migrate file-item.c to use block size terminology
Date: Wed, 18 Dec 2024 20:11:24 +1030
Message-ID: <7580377dcbe6844964379df7c9760a8077c81f6c.1734514696.git.wqu@suse.com>
X-Mailer: git-send-email 2.47.1
X-Mailing-List: linux-btrfs@vger.kernel.org
MIME-Version: 1.0

Straightforward rename from "sector" to "block", except for the bio interface.
Signed-off-by: Qu Wenruo --- fs/btrfs/file-item.c | 94 ++++++++++++++++++++++---------------------- 1 file changed, 47 insertions(+), 47 deletions(-) diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c index 886749b39672..89b37b59c324 100644 --- a/fs/btrfs/file-item.c +++ b/fs/btrfs/file-item.c @@ -77,7 +77,7 @@ void btrfs_inode_safe_disk_i_size_write(struct btrfs_inode *inode, u64 new_i_siz * Does not need to call this in the case where we're replacing an existing file * extent, however if not sure it's fine to call this multiple times. * - * The start and len must match the file extent item, so thus must be sectorsize + * The start and len must match the file extent item, so thus must be blocksize * aligned. */ int btrfs_inode_set_file_extent_range(struct btrfs_inode *inode, u64 start, @@ -89,7 +89,7 @@ int btrfs_inode_set_file_extent_range(struct btrfs_inode *inode, u64 start, if (len == 0) return 0; - ASSERT(IS_ALIGNED(start + len, inode->root->fs_info->sectorsize)); + ASSERT(IS_ALIGNED(start + len, inode->root->fs_info->blocksize)); return set_extent_bit(inode->file_extent_tree, start, start + len - 1, EXTENT_DIRTY, NULL); @@ -106,7 +106,7 @@ int btrfs_inode_set_file_extent_range(struct btrfs_inode *inode, u64 start, * need to be called for cases where we're replacing a file extent, like when * we've COWed a file extent. * - * The start and len must match the file extent item, so thus must be sectorsize + * The start and len must match the file extent item, so thus must be blocksize * aligned. 
  */
 int btrfs_inode_clear_file_extent_range(struct btrfs_inode *inode, u64 start,
@@ -118,7 +118,7 @@ int btrfs_inode_clear_file_extent_range(struct btrfs_inode *inode, u64 start,
 	if (len == 0)
 		return 0;

-	ASSERT(IS_ALIGNED(start + len, inode->root->fs_info->sectorsize) ||
+	ASSERT(IS_ALIGNED(start + len, inode->root->fs_info->blocksize) ||
 	       len == (u64)-1);

 	return clear_extent_bit(inode->file_extent_tree, start,
@@ -127,16 +127,16 @@ int btrfs_inode_clear_file_extent_range(struct btrfs_inode *inode, u64 start,

 static size_t bytes_to_csum_size(const struct btrfs_fs_info *fs_info, u32 bytes)
 {
-	ASSERT(IS_ALIGNED(bytes, fs_info->sectorsize));
+	ASSERT(IS_ALIGNED(bytes, fs_info->blocksize));

-	return (bytes >> fs_info->sectorsize_bits) * fs_info->csum_size;
+	return (bytes >> fs_info->blocksize_bits) * fs_info->csum_size;
 }

 static size_t csum_size_to_bytes(const struct btrfs_fs_info *fs_info,
 				 u32 csum_size)
 {
 	ASSERT(IS_ALIGNED(csum_size, fs_info->csum_size));

-	return (csum_size / fs_info->csum_size) << fs_info->sectorsize_bits;
+	return (csum_size / fs_info->csum_size) << fs_info->blocksize_bits;
 }

 static inline u32 max_ordered_sum_bytes(const struct btrfs_fs_info *fs_info)
@@ -230,7 +230,7 @@ btrfs_lookup_csum(struct btrfs_trans_handle *trans,
 			goto fail;

 		csum_offset = (bytenr - found_key.offset) >>
-				fs_info->sectorsize_bits;
+				fs_info->blocksize_bits;
 		csums_in_item = btrfs_item_size(leaf, path->slots[0]);
 		csums_in_item /= csum_size;

@@ -271,9 +271,9 @@ int btrfs_lookup_file_extent(struct btrfs_trans_handle *trans,
  * Find checksums for logical bytenr range [disk_bytenr, disk_bytenr + len) and
  * store the result to @dst.
  *
- * Return >0 for the number of sectors we found.
- * Return 0 for the range [disk_bytenr, disk_bytenr + sectorsize) has no csum
- * for it. Caller may want to try next sector until one range is hit.
+ * Return >0 for the number of blocks we found.
+ * Return 0 for the range [disk_bytenr, disk_bytenr + blocksize) has no csum
+ * for it. Caller may want to try next block until one range is hit.
  * Return <0 for fatal error.
  */
 static int search_csum_tree(struct btrfs_fs_info *fs_info,
@@ -283,15 +283,15 @@ static int search_csum_tree(struct btrfs_fs_info *fs_info,
 	struct btrfs_root *csum_root;
 	struct btrfs_csum_item *item = NULL;
 	struct btrfs_key key;
-	const u32 sectorsize = fs_info->sectorsize;
+	const u32 blocksize = fs_info->blocksize;
 	const u32 csum_size = fs_info->csum_size;
 	u32 itemsize;
 	int ret;
 	u64 csum_start;
 	u64 csum_len;

-	ASSERT(IS_ALIGNED(disk_bytenr, sectorsize) &&
-	       IS_ALIGNED(len, sectorsize));
+	ASSERT(IS_ALIGNED(disk_bytenr, blocksize) &&
+	       IS_ALIGNED(len, blocksize));

 	/* Check if the current csum item covers disk_bytenr */
 	if (path->nodes[0]) {
@@ -301,7 +301,7 @@ static int search_csum_tree(struct btrfs_fs_info *fs_info,
 		itemsize = btrfs_item_size(path->nodes[0], path->slots[0]);

 		csum_start = key.offset;
-		csum_len = (itemsize / csum_size) * sectorsize;
+		csum_len = (itemsize / csum_size) * blocksize;

 		if (in_range(disk_bytenr, csum_start, csum_len))
 			goto found;
@@ -319,12 +319,12 @@ static int search_csum_tree(struct btrfs_fs_info *fs_info,
 	itemsize = btrfs_item_size(path->nodes[0], path->slots[0]);

 	csum_start = key.offset;
-	csum_len = (itemsize / csum_size) * sectorsize;
+	csum_len = (itemsize / csum_size) * blocksize;
 	ASSERT(in_range(disk_bytenr, csum_start, csum_len));

 found:
 	ret = (min(csum_start + csum_len, disk_bytenr + len) -
-	       disk_bytenr) >> fs_info->sectorsize_bits;
+	       disk_bytenr) >> fs_info->blocksize_bits;
 	read_extent_buffer(path->nodes[0], dst, (unsigned long)item,
 			   ret * csum_size);
 out:
@@ -344,11 +344,11 @@ blk_status_t btrfs_lookup_bio_sums(struct btrfs_bio *bbio)
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 	struct bio *bio = &bbio->bio;
 	struct btrfs_path *path;
-	const u32 sectorsize = fs_info->sectorsize;
+	const u32 blocksize = fs_info->blocksize;
 	const u32 csum_size = fs_info->csum_size;
 	u32 orig_len = bio->bi_iter.bi_size;
 	u64 orig_disk_bytenr = bio->bi_iter.bi_sector << SECTOR_SHIFT;
-	const unsigned int nblocks = orig_len >> fs_info->sectorsize_bits;
+	const unsigned int nblocks = orig_len >> fs_info->blocksize_bits;
 	blk_status_t ret = BLK_STS_OK;
 	u32 bio_offset = 0;

@@ -384,7 +384,7 @@ blk_status_t btrfs_lookup_bio_sums(struct btrfs_bio *bbio)
 	}

 	/*
-	 * If requested number of sectors is larger than one leaf can contain,
+	 * If requested number of blocks is larger than one leaf can contain,
 	 * kick the readahead for csum tree.
 	 */
 	if (nblocks > fs_info->csums_per_leaf)
@@ -405,7 +405,7 @@ blk_status_t btrfs_lookup_bio_sums(struct btrfs_bio *bbio)
 		int count;
 		u64 cur_disk_bytenr = orig_disk_bytenr + bio_offset;
 		u8 *csum_dst = bbio->csum +
-			(bio_offset >> fs_info->sectorsize_bits) * csum_size;
+			(bio_offset >> fs_info->blocksize_bits) * csum_size;

 		count = search_csum_tree(fs_info, path, cur_disk_bytenr,
 					 orig_len - bio_offset, csum_dst);
@@ -435,15 +435,15 @@ blk_status_t btrfs_lookup_bio_sums(struct btrfs_bio *bbio)
 				u64 file_offset = bbio->file_offset + bio_offset;

 				set_extent_bit(&inode->io_tree, file_offset,
-					       file_offset + sectorsize - 1,
+					       file_offset + blocksize - 1,
 					       EXTENT_NODATASUM, NULL);
 			} else {
 				btrfs_warn_rl(fs_info,
 		"csum hole found for disk bytenr range [%llu, %llu)",
-				cur_disk_bytenr, cur_disk_bytenr + sectorsize);
+				cur_disk_bytenr, cur_disk_bytenr + blocksize);
 			}
 		}
-		bio_offset += count * sectorsize;
+		bio_offset += count * blocksize;
 	}

 	btrfs_free_path(path);
@@ -476,8 +476,8 @@ int btrfs_lookup_csums_list(struct btrfs_root *root, u64 start, u64 end,
 	int ret;
 	bool found_csums = false;

-	ASSERT(IS_ALIGNED(start, fs_info->sectorsize) &&
-	       IS_ALIGNED(end + 1, fs_info->sectorsize));
+	ASSERT(IS_ALIGNED(start, fs_info->blocksize) &&
+	       IS_ALIGNED(end + 1, fs_info->blocksize));

 	path = btrfs_alloc_path();
 	if (!path)
@@ -605,7 +605,7 @@ int btrfs_lookup_csums_list(struct btrfs_root *root, u64 start, u64 end,
  *
  * This version will set the corresponding bits in @csum_bitmap to represent
  * that there is a csum found.
- * Each bit represents a sector. Thus caller should ensure @csum_buf passed
+ * Each bit represents a block. Thus caller should ensure @csum_buf passed
  * in is large enough to contain all csums.
  */
 int btrfs_lookup_csums_bitmap(struct btrfs_root *root, struct btrfs_path *path,
@@ -620,8 +620,8 @@ int btrfs_lookup_csums_bitmap(struct btrfs_root *root, struct btrfs_path *path,
 	bool free_path = false;
 	int ret;

-	ASSERT(IS_ALIGNED(start, fs_info->sectorsize) &&
-	       IS_ALIGNED(end + 1, fs_info->sectorsize));
+	ASSERT(IS_ALIGNED(start, fs_info->blocksize) &&
+	       IS_ALIGNED(end + 1, fs_info->blocksize));

 	if (!path) {
 		path = btrfs_alloc_path();
@@ -723,8 +723,8 @@ int btrfs_lookup_csums_bitmap(struct btrfs_root *root, struct btrfs_path *path,
 			      bytes_to_csum_size(fs_info, size));

 		bitmap_set(csum_bitmap,
-			   (start - orig_start) >> fs_info->sectorsize_bits,
-			   size >> fs_info->sectorsize_bits);
+			   (start - orig_start) >> fs_info->blocksize_bits,
+			   size >> fs_info->blocksize_bits);

 		start += size;
 	}
@@ -774,14 +774,14 @@ blk_status_t btrfs_csum_one_bio(struct btrfs_bio *bbio)

 	bio_for_each_segment(bvec, bio, iter) {
 		blockcount = BTRFS_BYTES_TO_BLKS(fs_info,
-						 bvec.bv_len + fs_info->sectorsize
+						 bvec.bv_len + fs_info->blocksize
 						 - 1);

 		for (i = 0; i < blockcount; i++) {
 			data = bvec_kmap_local(&bvec);
 			crypto_shash_digest(shash,
-					    data + (i * fs_info->sectorsize),
-					    fs_info->sectorsize,
+					    data + (i * fs_info->blocksize),
+					    fs_info->blocksize,
 					    sums->sums + index);
 			kunmap_local(data);
 			index += fs_info->csum_size;
@@ -832,7 +832,7 @@ static noinline void truncate_one_csum(struct btrfs_trans_handle *trans,
 	const u32 csum_size = fs_info->csum_size;
 	u64 csum_end;
 	u64 end_byte = bytenr + len;
-	u32 blocksize_bits = fs_info->sectorsize_bits;
+	u32 blocksize_bits = fs_info->blocksize_bits;

 	leaf = path->nodes[0];
 	csum_end = btrfs_item_size(leaf, path->slots[0]) / csum_size;
@@ -883,7 +883,7 @@ int btrfs_del_csums(struct btrfs_trans_handle *trans,
 	struct extent_buffer *leaf;
 	int ret = 0;
 	const u32 csum_size = fs_info->csum_size;
-	u32 blocksize_bits = fs_info->sectorsize_bits;
+	u32 blocksize_bits = fs_info->blocksize_bits;

 	ASSERT(btrfs_root_id(root) == BTRFS_CSUM_TREE_OBJECTID ||
 	       btrfs_root_id(root) == BTRFS_TREE_LOG_OBJECTID);
@@ -1125,7 +1125,7 @@ int btrfs_csum_file_blocks(struct btrfs_trans_handle *trans,
 		if (btrfs_leaf_free_space(leaf) >= csum_size) {
 			btrfs_item_key_to_cpu(leaf, &found_key, path->slots[0]);
 			csum_offset = (bytenr - found_key.offset) >>
-					fs_info->sectorsize_bits;
+					fs_info->blocksize_bits;
 			goto extend_csum;
 		}

@@ -1145,7 +1145,7 @@ int btrfs_csum_file_blocks(struct btrfs_trans_handle *trans,
 	leaf = path->nodes[0];
 	btrfs_item_key_to_cpu(leaf, &found_key, path->slots[0]);
-	csum_offset = (bytenr - found_key.offset) >> fs_info->sectorsize_bits;
+	csum_offset = (bytenr - found_key.offset) >> fs_info->blocksize_bits;

 	if (found_key.type != BTRFS_EXTENT_CSUM_KEY ||
 	    found_key.objectid != BTRFS_EXTENT_CSUM_OBJECTID ||
@@ -1161,7 +1161,7 @@ int btrfs_csum_file_blocks(struct btrfs_trans_handle *trans,
 		u32 diff;

 		tmp = sums->len - total_bytes;
-		tmp >>= fs_info->sectorsize_bits;
+		tmp >>= fs_info->blocksize_bits;
 		WARN_ON(tmp < 1);
 		extend_nr = max_t(int, 1, tmp);
@@ -1200,7 +1200,7 @@ int btrfs_csum_file_blocks(struct btrfs_trans_handle *trans,
 		if (ret < 0)
 			goto out;

-		tmp = (next_offset - bytenr) >> fs_info->sectorsize_bits;
+		tmp = (next_offset - bytenr) >> fs_info->blocksize_bits;
 		if (tmp <= INT_MAX)
 			extend_nr = min_t(int, extend_nr, tmp);
 	}
@@ -1226,9 +1226,9 @@ int btrfs_csum_file_blocks(struct btrfs_trans_handle *trans,
 		u64 tmp;

 		tmp = sums->len - total_bytes;
-		tmp >>= fs_info->sectorsize_bits;
+		tmp >>= fs_info->blocksize_bits;
 		tmp = min(tmp, (next_offset - file_key.offset) >>
-			       fs_info->sectorsize_bits);
+			       fs_info->blocksize_bits);

 		tmp = max_t(u64, 1, tmp);
 		tmp = min_t(u64, tmp, MAX_CSUM_ITEMS(fs_info, csum_size));
@@ -1248,7 +1248,7 @@ int btrfs_csum_file_blocks(struct btrfs_trans_handle *trans,
 	item = (struct btrfs_csum_item *)((unsigned char *)item +
 					  csum_offset * csum_size);
 found:
-	ins_size = (u32)(sums->len - total_bytes) >> fs_info->sectorsize_bits;
+	ins_size = (u32)(sums->len - total_bytes) >> fs_info->blocksize_bits;
 	ins_size *= csum_size;
 	ins_size = min_t(u32, (unsigned long)item_end - (unsigned long)item,
 			 ins_size);
@@ -1257,7 +1257,7 @@ int btrfs_csum_file_blocks(struct btrfs_trans_handle *trans,
 	index += ins_size;
 	ins_size /= csum_size;
-	total_bytes += ins_size * fs_info->sectorsize;
+	total_bytes += ins_size * fs_info->blocksize;

 	btrfs_mark_buffer_dirty(trans, path->nodes[0]);
 	if (total_bytes < sums->len) {
@@ -1322,7 +1322,7 @@ void btrfs_extent_item_to_extent_map(struct btrfs_inode *inode,
 		em->disk_bytenr = EXTENT_MAP_INLINE;
 		em->start = 0;
-		em->len = fs_info->sectorsize;
+		em->len = fs_info->blocksize;
 		em->offset = 0;
 		extent_map_set_compression(em, compress_type);
 	} else {
@@ -1336,7 +1336,7 @@ void btrfs_extent_item_to_extent_map(struct btrfs_inode *inode,
 /*
  * Returns the end offset (non inclusive) of the file extent item the given path
  * points to. If it points to an inline extent, the returned offset is rounded
- * up to the sector size.
+ * up to the block size.
  */
 u64 btrfs_file_extent_end(const struct btrfs_path *path)
 {
@@ -1351,7 +1351,7 @@ u64 btrfs_file_extent_end(const struct btrfs_path *path)
 	fi = btrfs_item_ptr(leaf, slot, struct btrfs_file_extent_item);

 	if (btrfs_file_extent_type(leaf, fi) == BTRFS_FILE_EXTENT_INLINE)
-		end = leaf->fs_info->sectorsize;
+		end = leaf->fs_info->blocksize;
 	else
 		end = key.offset + btrfs_file_extent_num_bytes(leaf, fi);

From patchwork Wed Dec 18 09:41:25 2024
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 13913309
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 09/18] btrfs: migrate file.c to use block size terminology
Date: Wed, 18 Dec 2024 20:11:25 +1030
Message-ID: <24adf3d2c52a53370f628ce8b1c7440f4fb77d4e.1734514696.git.wqu@suse.com>
X-Mailer: git-send-email 2.47.1
Precedence: bulk
X-Mailing-List: linux-btrfs@vger.kernel.org
MIME-Version: 1.0

Straightforward rename from "sector" to "block".
Signed-off-by: Qu Wenruo
---
 fs/btrfs/file.c | 138 ++++++++++++++++++++++++------------------------
 1 file changed, 69 insertions(+), 69 deletions(-)

diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 4775a17c4ee1..f34f6d99d039 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -44,8 +44,8 @@ static void btrfs_drop_folio(struct btrfs_fs_info *fs_info, struct folio *folio,
 			     u64 pos, u64 copied)
 {
-	u64 block_start = round_down(pos, fs_info->sectorsize);
-	u64 block_len = round_up(pos + copied, fs_info->sectorsize) - block_start;
+	u64 block_start = round_down(pos, fs_info->blocksize);
+	u64 block_len = round_up(pos + copied, fs_info->blocksize) - block_start;

 	ASSERT(block_len <= U32_MAX);
 	/*
@@ -85,9 +85,9 @@ int btrfs_dirty_folio(struct btrfs_inode *inode, struct folio *folio, loff_t pos
 	if (noreserve)
 		extra_bits |= EXTENT_NORESERVE;

-	start_pos = round_down(pos, fs_info->sectorsize);
+	start_pos = round_down(pos, fs_info->blocksize);
 	num_bytes = round_up(write_bytes + pos - start_pos,
-			     fs_info->sectorsize);
+			     fs_info->blocksize);
 	ASSERT(num_bytes <= U32_MAX);
 	ASSERT(folio_pos(folio) <= pos &&
 	       folio_pos(folio) + folio_size(folio) >= pos + write_bytes);
@@ -416,7 +416,7 @@ int btrfs_drop_extents(struct btrfs_trans_handle *trans,
 		    extent_type == BTRFS_FILE_EXTENT_INLINE) {
 			args->bytes_found += extent_end - key.offset;
 			extent_end = ALIGN(extent_end,
-					   fs_info->sectorsize);
+					   fs_info->blocksize);
 		} else if (update_refs && disk_bytenr > 0) {
 			struct btrfs_ref ref = {
 				.action = BTRFS_DROP_DELAYED_REF,
@@ -925,8 +925,8 @@ lock_and_cleanup_extent_if_need(struct btrfs_inode *inode, struct folio *folio,
 	u64 last_pos;
 	int ret = 0;

-	start_pos = round_down(pos, fs_info->sectorsize);
-	last_pos = round_up(pos + write_bytes, fs_info->sectorsize) - 1;
+	start_pos = round_down(pos, fs_info->blocksize);
+	last_pos = round_up(pos + write_bytes, fs_info->blocksize) - 1;

 	if (start_pos < inode->vfs_inode.i_size) {
 		struct btrfs_ordered_extent *ordered;
@@ -1007,9 +1007,9 @@ int btrfs_check_nocow_lock(struct btrfs_inode *inode, loff_t pos,
 	if (!btrfs_drew_try_write_lock(&root->snapshot_lock))
 		return -EAGAIN;

-	lockstart = round_down(pos, fs_info->sectorsize);
+	lockstart = round_down(pos, fs_info->blocksize);
 	lockend = round_up(pos + *write_bytes,
-			   fs_info->sectorsize) - 1;
+			   fs_info->blocksize) - 1;
 	num_bytes = lockend - lockstart + 1;

 	if (nowait) {
@@ -1074,11 +1074,11 @@ int btrfs_write_check(struct kiocb *iocb, size_t count)
 		inode_inc_iversion(inode);
 	}

-	start_pos = round_down(pos, fs_info->sectorsize);
+	start_pos = round_down(pos, fs_info->blocksize);
 	oldsize = i_size_read(inode);
 	if (start_pos > oldsize) {
 		/* Expand hole size to cover write data, preventing empty gap */
-		loff_t end_pos = round_up(pos + count, fs_info->sectorsize);
+		loff_t end_pos = round_up(pos + count, fs_info->blocksize);

 		ret = btrfs_cont_expand(BTRFS_I(inode), oldsize, end_pos);
 		if (ret)
@@ -1125,12 +1125,12 @@ ssize_t btrfs_buffered_write(struct kiocb *iocb, struct iov_iter *i)
 	while (iov_iter_count(i) > 0) {
 		struct extent_state *cached_state = NULL;
 		size_t offset = offset_in_page(pos);
-		size_t sector_offset;
+		size_t block_offset;
 		size_t write_bytes = min(iov_iter_count(i),
 					 PAGE_SIZE - offset);
 		size_t reserve_bytes;
 		size_t copied;
-		size_t dirty_sectors;
-		size_t num_sectors;
+		size_t dirty_blocks;
+		size_t num_blocks;
 		struct folio *folio = NULL;
 		int extents_locked;
 		bool force_page_uptodate = false;
@@ -1145,7 +1145,7 @@ ssize_t btrfs_buffered_write(struct kiocb *iocb, struct iov_iter *i)
 		}

 		only_release_metadata = false;
-		sector_offset = pos & (fs_info->sectorsize - 1);
+		block_offset = pos & (fs_info->blocksize - 1);

 		extent_changeset_release(data_reserved);
 		ret = btrfs_check_data_free_space(BTRFS_I(inode),
@@ -1175,8 +1175,8 @@ ssize_t btrfs_buffered_write(struct kiocb *iocb, struct iov_iter *i)
 			only_release_metadata = true;
 		}

-		reserve_bytes = round_up(write_bytes + sector_offset,
-					 fs_info->sectorsize);
+		reserve_bytes = round_up(write_bytes + block_offset,
+					 fs_info->blocksize);
 		WARN_ON(reserve_bytes == 0);
 		ret = btrfs_delalloc_reserve_metadata(BTRFS_I(inode),
 						      reserve_bytes,
@@ -1229,8 +1229,8 @@ ssize_t btrfs_buffered_write(struct kiocb *iocb, struct iov_iter *i)

 		/*
 		 * If we get a partial write, we can end up with partially
-		 * uptodate page. Although if sector size < page size we can
-		 * handle it, but if it's not sector aligned it can cause
+		 * uptodate page. Although if block size < page size we can
+		 * handle it, but if it's not block aligned it can cause
 		 * a lot of complexity, so make sure they don't happen by
 		 * forcing retry this copy.
 		 */
@@ -1241,35 +1241,35 @@ ssize_t btrfs_buffered_write(struct kiocb *iocb, struct iov_iter *i)
 			}
 		}

-		num_sectors = BTRFS_BYTES_TO_BLKS(fs_info, reserve_bytes);
-		dirty_sectors = round_up(copied + sector_offset,
-					 fs_info->sectorsize);
-		dirty_sectors = BTRFS_BYTES_TO_BLKS(fs_info, dirty_sectors);
+		num_blocks = BTRFS_BYTES_TO_BLKS(fs_info, reserve_bytes);
+		dirty_blocks = round_up(copied + block_offset,
+					fs_info->blocksize);
+		dirty_blocks = BTRFS_BYTES_TO_BLKS(fs_info, dirty_blocks);

 		if (copied == 0) {
 			force_page_uptodate = true;
-			dirty_sectors = 0;
+			dirty_blocks = 0;
 		} else {
 			force_page_uptodate = false;
 		}

-		if (num_sectors > dirty_sectors) {
-			/* release everything except the sectors we dirtied */
-			release_bytes -= dirty_sectors << fs_info->sectorsize_bits;
+		if (num_blocks > dirty_blocks) {
+			/* release everything except the blocks we dirtied */
+			release_bytes -= dirty_blocks << fs_info->blocksize_bits;
 			if (only_release_metadata) {
 				btrfs_delalloc_release_metadata(BTRFS_I(inode),
 							release_bytes, true);
 			} else {
 				u64 release_start = round_up(pos + copied,
-							     fs_info->sectorsize);
+							     fs_info->blocksize);
 				btrfs_delalloc_release_space(BTRFS_I(inode),
 						data_reserved, release_start,
 						release_bytes, true);
 			}
 		}

-		release_bytes = round_up(copied + sector_offset,
-					 fs_info->sectorsize);
+		release_bytes = round_up(copied + block_offset,
+					 fs_info->blocksize);

 		ret = btrfs_dirty_folio(BTRFS_I(inode), folio, pos, copied,
 					&cached_state, only_release_metadata);
@@ -1313,7 +1313,7 @@ ssize_t btrfs_buffered_write(struct kiocb *iocb, struct iov_iter *i)
 		} else {
 			btrfs_delalloc_release_space(BTRFS_I(inode),
 					data_reserved,
-					round_down(pos, fs_info->sectorsize),
+					round_down(pos, fs_info->blocksize),
 					release_bytes, true);
 		}
 	}
@@ -1861,7 +1861,7 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
 	}

 	if (folio->index == ((size - 1) >> PAGE_SHIFT)) {
-		reserved_space = round_up(size - page_start, fs_info->sectorsize);
+		reserved_space = round_up(size - page_start, fs_info->blocksize);
 		if (reserved_space < PAGE_SIZE) {
 			end = page_start + reserved_space - 1;
 			btrfs_delalloc_release_space(BTRFS_I(inode),
@@ -2081,8 +2081,8 @@ static int find_first_non_hole(struct btrfs_inode *inode, u64 *start, u64 *len)
 	int ret = 0;

 	em = btrfs_get_extent(inode, NULL,
-			      round_down(*start, fs_info->sectorsize),
-			      round_up(*len, fs_info->sectorsize));
+			      round_down(*start, fs_info->blocksize),
+			      round_up(*len, fs_info->blocksize));
 	if (IS_ERR(em))
 		return PTR_ERR(em);

@@ -2245,7 +2245,7 @@ int btrfs_replace_file_extents(struct btrfs_inode *inode,
 	struct btrfs_root *root = inode->root;
 	struct btrfs_fs_info *fs_info = root->fs_info;
 	u64 min_size = btrfs_calc_insert_metadata_size(fs_info, 1);
-	u64 ino_size = round_up(inode->vfs_inode.i_size, fs_info->sectorsize);
+	u64 ino_size = round_up(inode->vfs_inode.i_size, fs_info->blocksize);
 	struct btrfs_trans_handle *trans = NULL;
 	struct btrfs_block_rsv *rsv;
 	unsigned int rsv_count;
@@ -2520,7 +2520,7 @@ static int btrfs_punch_hole(struct file *file, loff_t offset, loff_t len)
 	if (ret)
 		goto out_only_mutex;

-	ino_size = round_up(inode->i_size, fs_info->sectorsize);
+	ino_size = round_up(inode->i_size, fs_info->blocksize);
 	ret = find_first_non_hole(BTRFS_I(inode), &offset, &len);
 	if (ret < 0)
 		goto out_only_mutex;
@@ -2534,8 +2534,8 @@ static int btrfs_punch_hole(struct file *file, loff_t offset, loff_t len)
 	if (ret)
 		goto out_only_mutex;

-	lockstart = round_up(offset, fs_info->sectorsize);
-	lockend = round_down(offset + len, fs_info->sectorsize) - 1;
+	lockstart = round_up(offset, fs_info->blocksize);
+	lockend = round_down(offset + len, fs_info->blocksize) - 1;
 	same_block = (BTRFS_BYTES_TO_BLKS(fs_info, offset)) ==
 		(BTRFS_BYTES_TO_BLKS(fs_info, offset + len - 1));
 	/*
@@ -2546,7 +2546,7 @@ static int btrfs_punch_hole(struct file *file, loff_t offset, loff_t len)
 	 * Only do this if we are in the same block and we aren't doing the
 	 * entire block.
 	 */
-	if (same_block && len < fs_info->sectorsize) {
+	if (same_block && len < fs_info->blocksize) {
 		if (offset < ino_size) {
 			truncated_block = true;
 			ret = btrfs_truncate_block(BTRFS_I(inode), offset, len,
@@ -2735,12 +2735,12 @@ enum {
 static int btrfs_zero_range_check_range_boundary(struct btrfs_inode *inode,
 						 u64 offset)
 {
-	const u64 sectorsize = inode->root->fs_info->sectorsize;
+	const u64 blocksize = inode->root->fs_info->blocksize;
 	struct extent_map *em;
 	int ret;

-	offset = round_down(offset, sectorsize);
-	em = btrfs_get_extent(inode, NULL, offset, sectorsize);
+	offset = round_down(offset, blocksize);
+	em = btrfs_get_extent(inode, NULL, offset, blocksize);
 	if (IS_ERR(em))
 		return PTR_ERR(em);

@@ -2765,9 +2765,9 @@ static int btrfs_zero_range(struct inode *inode,
 	struct extent_changeset *data_reserved = NULL;
 	int ret;
 	u64 alloc_hint = 0;
-	const u64 sectorsize = fs_info->sectorsize;
-	u64 alloc_start = round_down(offset, sectorsize);
-	u64 alloc_end = round_up(offset + len, sectorsize);
+	const u64 blocksize = fs_info->blocksize;
+	u64 alloc_start = round_down(offset, blocksize);
+	u64 alloc_end = round_up(offset + len, blocksize);
 	u64 bytes_to_reserve = 0;
 	bool space_reserved = false;

@@ -2805,7 +2805,7 @@ static int btrfs_zero_range(struct inode *inode,
 	 * only on the remaining part of the range.
 	 */
 	alloc_start = em_end;
-	ASSERT(IS_ALIGNED(alloc_start, sectorsize));
+	ASSERT(IS_ALIGNED(alloc_start, blocksize));
 	len = offset + len - alloc_start;
 	offset = alloc_start;
 	alloc_hint = extent_map_block_start(em) + em->len;
@@ -2814,7 +2814,7 @@ static int btrfs_zero_range(struct inode *inode,

 	if (BTRFS_BYTES_TO_BLKS(fs_info, offset) ==
 	    BTRFS_BYTES_TO_BLKS(fs_info, offset + len - 1)) {
-		em = btrfs_get_extent(BTRFS_I(inode), NULL, alloc_start, sectorsize);
+		em = btrfs_get_extent(BTRFS_I(inode), NULL, alloc_start, blocksize);
 		if (IS_ERR(em)) {
 			ret = PTR_ERR(em);
 			goto out;
@@ -2826,7 +2826,7 @@ static int btrfs_zero_range(struct inode *inode,
 					  mode);
 			goto out;
 		}
-		if (len < sectorsize && em->disk_bytenr != EXTENT_MAP_HOLE) {
+		if (len < blocksize && em->disk_bytenr != EXTENT_MAP_HOLE) {
 			free_extent_map(em);
 			ret = btrfs_truncate_block(BTRFS_I(inode), offset, len,
 						   0);
@@ -2837,13 +2837,13 @@ static int btrfs_zero_range(struct inode *inode,
 			return ret;
 		}
 		free_extent_map(em);
-		alloc_start = round_down(offset, sectorsize);
-		alloc_end = alloc_start + sectorsize;
+		alloc_start = round_down(offset, blocksize);
+		alloc_end = alloc_start + blocksize;
 		goto reserve_space;
 	}

-	alloc_start = round_up(offset, sectorsize);
-	alloc_end = round_down(offset + len, sectorsize);
+	alloc_start = round_up(offset, blocksize);
+	alloc_end = round_down(offset + len, blocksize);

 	/*
 	 * For unaligned ranges, check the pages at the boundaries, they might
 	 * they might map to a hole, in which case we need our allocation range
 	 * to cover them.
*/ - if (!IS_ALIGNED(offset, sectorsize)) { + if (!IS_ALIGNED(offset, blocksize)) { ret = btrfs_zero_range_check_range_boundary(BTRFS_I(inode), offset); if (ret < 0) goto out; if (ret == RANGE_BOUNDARY_HOLE) { - alloc_start = round_down(offset, sectorsize); + alloc_start = round_down(offset, blocksize); ret = 0; } else if (ret == RANGE_BOUNDARY_WRITTEN_EXTENT) { ret = btrfs_truncate_block(BTRFS_I(inode), offset, 0, 0); @@ -2868,13 +2868,13 @@ static int btrfs_zero_range(struct inode *inode, } } - if (!IS_ALIGNED(offset + len, sectorsize)) { + if (!IS_ALIGNED(offset + len, blocksize)) { ret = btrfs_zero_range_check_range_boundary(BTRFS_I(inode), offset + len); if (ret < 0) goto out; if (ret == RANGE_BOUNDARY_HOLE) { - alloc_end = round_up(offset + len, sectorsize); + alloc_end = round_up(offset + len, blocksize); ret = 0; } else if (ret == RANGE_BOUNDARY_WRITTEN_EXTENT) { ret = btrfs_truncate_block(BTRFS_I(inode), offset + len, @@ -2909,7 +2909,7 @@ static int btrfs_zero_range(struct inode *inode, } ret = btrfs_prealloc_file_range(inode, mode, alloc_start, alloc_end - alloc_start, - fs_info->sectorsize, + fs_info->blocksize, offset + len, &alloc_hint); unlock_extent(&BTRFS_I(inode)->io_tree, lockstart, lockend, &cached_state); @@ -2949,7 +2949,7 @@ static long btrfs_fallocate(struct file *file, int mode, u64 data_space_reserved = 0; u64 qgroup_reserved = 0; struct extent_map *em; - int blocksize = BTRFS_I(inode)->root->fs_info->sectorsize; + int blocksize = BTRFS_I(inode)->root->fs_info->blocksize; int ret; /* Do not allow fallocate in ZONED mode */ @@ -3158,7 +3158,7 @@ static bool find_delalloc_subrange(struct btrfs_inode *inode, u64 start, u64 end if (delalloc_len > 0) { /* - * If delalloc was found then *delalloc_start_ret has a sector size + * If delalloc was found then *delalloc_start_ret has a block size * aligned value (rounded down). 
 		 */
 		*delalloc_end_ret = *delalloc_start_ret + delalloc_len - 1;
@@ -3235,13 +3235,13 @@ static bool find_delalloc_subrange(struct btrfs_inode *inode, u64 start, u64 end
  *
  * @inode:               The inode.
  * @start:               The start offset of the range. It does not need to be
- *                       sector size aligned.
+ *                       block size aligned.
  * @end:                 The end offset (inclusive value) of the search range.
- *                       It does not need to be sector size aligned.
+ *                       It does not need to be block size aligned.
  * @cached_state:        Extent state record used for speeding up delalloc
  *                       searches in the inode's io_tree. Can be NULL.
  * @delalloc_start_ret:  Output argument, set to the start offset of the
- *                       subrange found with delalloc (may not be sector size
+ *                       subrange found with delalloc (may not be block size
  *                       aligned).
  * @delalloc_end_ret:    Output argument, set to he end offset (inclusive value)
  *                       of the subrange found with delalloc.
@@ -3254,7 +3254,7 @@ bool btrfs_find_delalloc_in_range(struct btrfs_inode *inode, u64 start, u64 end,
 				  struct extent_state **cached_state,
 				  u64 *delalloc_start_ret, u64 *delalloc_end_ret)
 {
-	u64 cur_offset = round_down(start, inode->root->fs_info->sectorsize);
+	u64 cur_offset = round_down(start, inode->root->fs_info->blocksize);
 	u64 prev_delalloc_end = 0;
 	bool search_io_tree = true;
 	bool ret = false;
@@ -3298,14 +3298,14 @@ bool btrfs_find_delalloc_in_range(struct btrfs_inode *inode, u64 start, u64 end,
  *
  * @inode:      The inode.
  * @whence:     Seek mode (SEEK_DATA or SEEK_HOLE).
- * @start:      Start offset of the hole region. It does not need to be sector
+ * @start:      Start offset of the hole region. It does not need to be block
  *              size aligned.
  * @end:        End offset (inclusive value) of the hole region. It does not
- *              need to be sector size aligned.
+ *              need to be block size aligned.
  * @start_ret:  Return parameter, used to set the start of the subrange in the
  *              hole that matches the search criteria (seek mode), if such
  *              subrange is found (return value of the function is true).
- * The value returned here may not be sector size aligned. + * The value returned here may not be block size aligned. * * Returns true if a subrange matching the given seek mode is found, and if one * is found, it updates @start_ret with the start of the subrange. @@ -3442,10 +3442,10 @@ static loff_t find_desired_extent(struct file *file, loff_t offset, int whence) */ start = max_t(loff_t, 0, offset); - lockstart = round_down(start, fs_info->sectorsize); - lockend = round_up(i_size, fs_info->sectorsize); + lockstart = round_down(start, fs_info->blocksize); + lockend = round_up(i_size, fs_info->blocksize); if (lockend <= lockstart) - lockend = lockstart + fs_info->sectorsize; + lockend = lockstart + fs_info->blocksize; lockend--; path = btrfs_alloc_path(); From patchwork Wed Dec 18 09:41:26 2024 X-Patchwork-Submitter: Qu Wenruo X-Patchwork-Id: 13913310
From: Qu Wenruo To: linux-btrfs@vger.kernel.org Subject: [PATCH 10/18] btrfs: migrate inode.c and btrfs_inode.h to use block size terminology Date: Wed, 18 Dec 2024 20:11:26 +1030 X-Mailer: git-send-email 2.47.1 X-Mailing-List: linux-btrfs@vger.kernel.org MIME-Version: 1.0
This affects the exported function btrfs_check_sector_csum(), thus also rename it to btrfs_check_block_csum(). Signed-off-by: Qu Wenruo --- fs/btrfs/btrfs_inode.h | 2 +- fs/btrfs/inode.c | 140 ++++++++++++++++++++--------------------- fs/btrfs/raid56.c | 6 +- fs/btrfs/scrub.c | 2 +- 4 files changed, 75 insertions(+), 75 deletions(-) diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h index b2fa33911c28..8ae914aa759d 100644 --- a/fs/btrfs/btrfs_inode.h +++ b/fs/btrfs/btrfs_inode.h @@ -520,7 +520,7 @@ static inline void btrfs_assert_inode_locked(struct btrfs_inode *inode) #define CSUM_FMT "0x%*phN" #define CSUM_FMT_VALUE(size, bytes) size, bytes -int btrfs_check_sector_csum(struct btrfs_fs_info *fs_info, struct page *page, +int btrfs_check_block_csum(struct btrfs_fs_info *fs_info, struct page *page, u32 pgoff, u8 *csum, const u8 * const csum_expected); bool btrfs_data_csum_ok(struct btrfs_bio *bbio, struct btrfs_device *dev, u32 bio_offset, struct bio_vec *bv); diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 8a173a24ac05..6f70c88f6f07 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -185,7 +185,7 @@ static int data_reloc_print_warning_inode(u64 inum, u64 offset, u64 num_bytes, btrfs_warn(fs_info, "checksum error at logical %llu mirror %u root %llu inode %llu offset %llu length %u links %u (path: %s)", warn->logical, warn->mirror_num, root, inum, offset, - fs_info->sectorsize, nlink, + fs_info->blocksize, nlink, (char *)(unsigned long)ipath->fspath->val[i]); } @@ -495,7 +495,7 @@ static int insert_inline_extent(struct btrfs_trans_handle *trans,
{ struct btrfs_root *root = inode->root; struct extent_buffer *leaf; - const u32 sectorsize = trans->fs_info->sectorsize; + const u32 blocksize = trans->fs_info->blocksize; char *kaddr; unsigned long ptr; struct btrfs_file_extent_item *ei; @@ -504,18 +504,18 @@ static int insert_inline_extent(struct btrfs_trans_handle *trans, u64 i_size; /* - * The decompressed size must still be no larger than a sector. Under + * The decompressed size must still be no larger than a block. Under * heavy race, we can have size == 0 passed in, but that shouldn't be a * big deal and we can continue the insertion. */ - ASSERT(size <= sectorsize); + ASSERT(size <= blocksize); /* - * The compressed size also needs to be no larger than a sector. + * The compressed size also needs to be no larger than a block. * That's also why we only need one page as the parameter. */ if (compressed_folio) - ASSERT(compressed_size <= sectorsize); + ASSERT(compressed_size <= blocksize); else ASSERT(compressed_size == 0); @@ -568,11 +568,11 @@ static int insert_inline_extent(struct btrfs_trans_handle *trans, btrfs_release_path(path); /* - * We align size to sectorsize for inline extents just for simplicity + * We align size to blocksize for inline extents just for simplicity * sake. */ ret = btrfs_inode_set_file_extent_range(inode, 0, - ALIGN(size, root->fs_info->sectorsize)); + ALIGN(size, root->fs_info->blocksize)); if (ret) goto fail; @@ -607,7 +607,7 @@ static bool can_cow_file_range_inline(struct btrfs_inode *inode, /* * Due to the page size limit, for subpage we can only trigger the - * writeback for the dirty sectors of page, that means data writeback + * writeback for the dirty blocks of page, that means data writeback * is doing more writeback than what we want. * * This is especially unexpected for some call sites like fallocate, @@ -615,11 +615,11 @@ static bool can_cow_file_range_inline(struct btrfs_inode *inode, * This means we can trigger inline extent even if we didn't want to. 
* So here we skip inline extent creation completely. */ - if (fs_info->sectorsize != PAGE_SIZE) + if (fs_info->blocksize != PAGE_SIZE) return false; - /* Inline extents are limited to sectorsize. */ - if (size > fs_info->sectorsize) + /* Inline extents are limited to blocksize. */ + if (size > fs_info->blocksize) return false; /* We cannot exceed the maximum inline data size. */ @@ -672,7 +672,7 @@ static noinline int __cow_file_range_inline(struct btrfs_inode *inode, drop_args.path = path; drop_args.start = 0; - drop_args.end = fs_info->sectorsize; + drop_args.end = fs_info->blocksize; drop_args.drop_cache = true; drop_args.replace_extent = true; drop_args.extent_item_size = btrfs_file_extent_calc_inline_size(data_len); @@ -831,7 +831,7 @@ static inline int inode_need_compress(struct btrfs_inode *inode, u64 start, return 0; } /* - * Only enable sector perfect compression for experimental builds. + * Only enable block perfect compression for experimental builds. * * This is a big feature change for subpage cases, and can hit * different corner cases, so only limit this feature for @@ -839,7 +839,7 @@ static inline int inode_need_compress(struct btrfs_inode *inode, u64 start, * * ETA for moving this out of experimental builds is 6.15. 
*/ - if (fs_info->sectorsize < PAGE_SIZE && + if (fs_info->blocksize < PAGE_SIZE && !IS_ENABLED(CONFIG_BTRFS_EXPERIMENTAL)) { if (!PAGE_ALIGNED(start) || !PAGE_ALIGNED(end + 1)) @@ -912,7 +912,7 @@ static void compress_file_range(struct btrfs_work *work) struct btrfs_inode *inode = async_chunk->inode; struct btrfs_fs_info *fs_info = inode->root->fs_info; struct address_space *mapping = inode->vfs_inode.i_mapping; - u64 blocksize = fs_info->sectorsize; + u64 blocksize = fs_info->blocksize; u64 start = async_chunk->start; u64 end = async_chunk->end; u64 actual_end; @@ -1057,9 +1057,9 @@ static void compress_file_range(struct btrfs_work *work) /* * One last check to make sure the compression is really a win, compare * the page count read with the blocks on disk, compression must free at - * least one sector. + * least one block. */ - total_in = round_up(total_in, fs_info->sectorsize); + total_in = round_up(total_in, fs_info->blocksize); if (total_compressed + blocksize > total_in) goto mark_incompressible; @@ -1334,7 +1334,7 @@ static noinline int cow_file_range(struct btrfs_inode *inode, u64 num_bytes; u64 cur_alloc_size = 0; u64 min_alloc_size; - u64 blocksize = fs_info->sectorsize; + u64 blocksize = fs_info->blocksize; struct btrfs_key ins; struct extent_map *em; unsigned clear_bits; @@ -1386,7 +1386,7 @@ static noinline int cow_file_range(struct btrfs_inode *inode, if (btrfs_is_data_reloc_root(root)) min_alloc_size = num_bytes; else - min_alloc_size = fs_info->sectorsize; + min_alloc_size = fs_info->blocksize; while (num_bytes > 0) { struct btrfs_ordered_extent *ordered; @@ -2868,7 +2868,7 @@ static int insert_reserved_file_extent(struct btrfs_trans_handle *trans, u64 qgroup_reserved) { struct btrfs_root *root = inode->root; - const u64 sectorsize = root->fs_info->sectorsize; + const u64 blocksize = root->fs_info->blocksize; struct btrfs_path *path; struct extent_buffer *leaf; struct btrfs_key ins; @@ -2928,13 +2928,13 @@ static int 
insert_reserved_file_extent(struct btrfs_trans_handle *trans, * The remaining of the range will be processed when clearning the * EXTENT_DELALLOC_BIT bit through the ordered extent completion. */ - if (file_pos == 0 && !IS_ALIGNED(drop_args.bytes_found, sectorsize)) { - u64 inline_size = round_down(drop_args.bytes_found, sectorsize); + if (file_pos == 0 && !IS_ALIGNED(drop_args.bytes_found, blocksize)) { + u64 inline_size = round_down(drop_args.bytes_found, blocksize); inline_size = drop_args.bytes_found - inline_size; - btrfs_update_inode_bytes(inode, sectorsize, inline_size); + btrfs_update_inode_bytes(inode, blocksize, inline_size); drop_args.bytes_found -= inline_size; - num_bytes -= sectorsize; + num_bytes -= blocksize; } if (update_inode_bytes) @@ -3267,21 +3267,21 @@ int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered) } /* - * Verify the checksum for a single sector without any extra action that depend + * Verify the checksum for a single block without any extra action that depend * on the type of I/O. */ -int btrfs_check_sector_csum(struct btrfs_fs_info *fs_info, struct page *page, +int btrfs_check_block_csum(struct btrfs_fs_info *fs_info, struct page *page, u32 pgoff, u8 *csum, const u8 * const csum_expected) { SHASH_DESC_ON_STACK(shash, fs_info->csum_shash); char *kaddr; - ASSERT(pgoff + fs_info->sectorsize <= PAGE_SIZE); + ASSERT(pgoff + fs_info->blocksize <= PAGE_SIZE); shash->tfm = fs_info->csum_shash; kaddr = kmap_local_page(page) + pgoff; - crypto_shash_digest(shash, kaddr, fs_info->sectorsize, csum); + crypto_shash_digest(shash, kaddr, fs_info->blocksize, csum); kunmap_local(kaddr); if (memcmp(csum, csum_expected, fs_info->csum_size)) @@ -3290,17 +3290,17 @@ int btrfs_check_sector_csum(struct btrfs_fs_info *fs_info, struct page *page, } /* - * Verify the checksum of a single data sector. + * Verify the checksum of a single data block. 
* * @bbio: btrfs_io_bio which contains the csum - * @dev: device the sector is on + * @dev: device the block is on * @bio_offset: offset to the beginning of the bio (in bytes) * @bv: bio_vec to check * * Check if the checksum on a data block is valid. When a checksum mismatch is * detected, report the error and fill the corrupted range with zero. * - * Return %true if the sector is ok or had no checksum to start with, else %false. + * Return %true if the block is ok or had no checksum to start with, else %false. */ bool btrfs_data_csum_ok(struct btrfs_bio *bbio, struct btrfs_device *dev, u32 bio_offset, struct bio_vec *bv) @@ -3312,7 +3312,7 @@ bool btrfs_data_csum_ok(struct btrfs_bio *bbio, struct btrfs_device *dev, u8 *csum_expected; u8 csum[BTRFS_CSUM_SIZE]; - ASSERT(bv->bv_len == fs_info->sectorsize); + ASSERT(bv->bv_len == fs_info->blocksize); if (!bbio->csum) return true; @@ -3326,9 +3326,9 @@ bool btrfs_data_csum_ok(struct btrfs_bio *bbio, struct btrfs_device *dev, return true; } - csum_expected = bbio->csum + (bio_offset >> fs_info->sectorsize_bits) * + csum_expected = bbio->csum + (bio_offset >> fs_info->blocksize_bits) * fs_info->csum_size; - if (btrfs_check_sector_csum(fs_info, bv->bv_page, bv->bv_offset, csum, + if (btrfs_check_block_csum(fs_info, bv->bv_page, bv->bv_offset, csum, csum_expected)) goto zeroit; return true; @@ -3848,7 +3848,7 @@ static int btrfs_read_locked_inode(struct inode *inode, struct btrfs_path *path) i_gid_write(inode, btrfs_inode_gid(leaf, inode_item)); btrfs_i_size_write(BTRFS_I(inode), btrfs_inode_size(leaf, inode_item)); btrfs_inode_set_file_extent_range(BTRFS_I(inode), 0, - round_up(i_size_read(inode), fs_info->sectorsize)); + round_up(i_size_read(inode), fs_info->blocksize)); inode_set_atime(inode, btrfs_timespec_sec(leaf, &inode_item->atime), btrfs_timespec_nsec(leaf, &inode_item->atime)); @@ -4737,7 +4737,7 @@ int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len, struct extent_state *cached_state = 
NULL; struct extent_changeset *data_reserved = NULL; bool only_release_metadata = false; - u32 blocksize = fs_info->sectorsize; + u32 blocksize = fs_info->blocksize; pgoff_t index = from >> PAGE_SHIFT; unsigned offset = from & (blocksize - 1); struct folio *folio; @@ -4931,8 +4931,8 @@ int btrfs_cont_expand(struct btrfs_inode *inode, loff_t oldsize, loff_t size) struct extent_io_tree *io_tree = &inode->io_tree; struct extent_map *em = NULL; struct extent_state *cached_state = NULL; - u64 hole_start = ALIGN(oldsize, fs_info->sectorsize); - u64 block_end = ALIGN(size, fs_info->sectorsize); + u64 hole_start = ALIGN(oldsize, fs_info->blocksize); + u64 block_end = ALIGN(size, fs_info->blocksize); u64 last_byte; u64 cur_offset; u64 hole_size; @@ -4961,7 +4961,7 @@ int btrfs_cont_expand(struct btrfs_inode *inode, loff_t oldsize, loff_t size) break; } last_byte = min(extent_map_end(em), block_end); - last_byte = ALIGN(last_byte, fs_info->sectorsize); + last_byte = ALIGN(last_byte, fs_info->blocksize); hole_size = last_byte - cur_offset; if (!(em->flags & EXTENT_FLAG_PREALLOC)) { @@ -5067,7 +5067,7 @@ static int btrfs_setsize(struct inode *inode, struct iattr *attr) if (btrfs_is_zoned(fs_info)) { ret = btrfs_wait_ordered_range(BTRFS_I(inode), - ALIGN(newsize, fs_info->sectorsize), + ALIGN(newsize, fs_info->blocksize), (u64)-1); if (ret) return ret; @@ -6949,7 +6949,7 @@ struct extent_map *btrfs_get_extent(struct btrfs_inode *inode, * Other members are not utilized for inline extents. 
*/ ASSERT(em->disk_bytenr == EXTENT_MAP_INLINE); - ASSERT(em->len == fs_info->sectorsize); + ASSERT(em->len == fs_info->blocksize); ret = read_inline_extent(path, folio); if (ret < 0) @@ -7095,7 +7095,7 @@ noinline int can_nocow_extent(struct inode *inode, u64 offset, u64 *len, u64 range_end; range_end = round_up(offset + nocow_args.file_extent.num_bytes, - root->fs_info->sectorsize) - 1; + root->fs_info->blocksize) - 1; ret = test_range_bit_exists(io_tree, offset, range_end, EXTENT_DELALLOC); if (ret) { ret = -EAGAIN; @@ -7291,7 +7291,7 @@ static void btrfs_invalidate_folio(struct folio *folio, size_t offset, /* * For subpage case, we have call sites like * btrfs_punch_hole_lock_range() which passes range not aligned to - * sectorsize. + * blocksize. * If the range doesn't cover the full folio, we don't need to and * shouldn't clear page extent mapped, as folio->private can still * record subpage dirty bits for other part of the range. @@ -7440,7 +7440,7 @@ static int btrfs_truncate(struct btrfs_inode *inode, bool skip_writeback) struct btrfs_block_rsv *rsv; int ret; struct btrfs_trans_handle *trans; - u64 mask = fs_info->sectorsize - 1; + u64 mask = fs_info->blocksize - 1; const u64 min_size = btrfs_calc_metadata_size(fs_info, 1); if (!skip_writeback) { @@ -7513,7 +7513,7 @@ static int btrfs_truncate(struct btrfs_inode *inode, bool skip_writeback) while (1) { struct extent_state *cached_state = NULL; const u64 new_size = inode->vfs_inode.i_size; - const u64 lock_start = ALIGN_DOWN(new_size, fs_info->sectorsize); + const u64 lock_start = ALIGN_DOWN(new_size, fs_info->blocksize); control.new_size = new_size; lock_extent(&inode->io_tree, lock_start, (u64)-1, &cached_state); @@ -7523,7 +7523,7 @@ static int btrfs_truncate(struct btrfs_inode *inode, bool skip_writeback) * block of the extent just the way it is. 
*/ btrfs_drop_extent_map_range(inode, - ALIGN(new_size, fs_info->sectorsize), + ALIGN(new_size, fs_info->blocksize), (u64)-1, false); ret = btrfs_truncate_inode_items(trans, root, &control); @@ -7829,7 +7829,7 @@ static int btrfs_getattr(struct mnt_idmap *idmap, u64 delalloc_bytes; u64 inode_bytes; struct inode *inode = d_inode(path->dentry); - u32 blocksize = btrfs_sb(inode->i_sb)->sectorsize; + u32 blocksize = btrfs_sb(inode->i_sb)->blocksize; u32 bi_flags = BTRFS_I(inode)->flags; u32 bi_ro_flags = BTRFS_I(inode)->ro_flags; @@ -8966,13 +8966,13 @@ int btrfs_encoded_io_compression_from_extent(struct btrfs_fs_info *fs_info, return BTRFS_ENCODED_IO_COMPRESSION_ZLIB; case BTRFS_COMPRESS_LZO: /* - * The LZO format depends on the sector size. 64K is the maximum - * sector size that we support. + * The LZO format depends on the block size. 64K is the maximum + * block size that we support. */ - if (fs_info->sectorsize < SZ_4K || fs_info->sectorsize > SZ_64K) + if (fs_info->blocksize < SZ_4K || fs_info->blocksize > SZ_64K) return -EINVAL; return BTRFS_ENCODED_IO_COMPRESSION_LZO_4K + - (fs_info->sectorsize_bits - 12); + (fs_info->blocksize_bits - 12); case BTRFS_COMPRESS_ZSTD: return BTRFS_ENCODED_IO_COMPRESSION_ZSTD; default: @@ -9261,7 +9261,7 @@ ssize_t btrfs_encoded_read(struct kiocb *iocb, struct iov_iter *iter, btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED); return 0; } - start = ALIGN_DOWN(iocb->ki_pos, fs_info->sectorsize); + start = ALIGN_DOWN(iocb->ki_pos, fs_info->blocksize); /* * We don't know how long the extent containing iocb->ki_pos is, but if * it's compressed we know that it won't be longer than this. 
@@ -9374,7 +9374,7 @@ ssize_t btrfs_encoded_read(struct kiocb *iocb, struct iov_iter *iter, count = start + *disk_io_size - iocb->ki_pos; encoded->len = count; encoded->unencoded_len = count; - *disk_io_size = ALIGN(*disk_io_size, fs_info->sectorsize); + *disk_io_size = ALIGN(*disk_io_size, fs_info->blocksize); } free_extent_map(em); em = NULL; @@ -9437,10 +9437,10 @@ ssize_t btrfs_do_encoded_write(struct kiocb *iocb, struct iov_iter *from, case BTRFS_ENCODED_IO_COMPRESSION_LZO_16K: case BTRFS_ENCODED_IO_COMPRESSION_LZO_32K: case BTRFS_ENCODED_IO_COMPRESSION_LZO_64K: - /* The sector size must match for LZO. */ + /* The block size must match for LZO. */ if (encoded->compression - BTRFS_ENCODED_IO_COMPRESSION_LZO_4K + 12 != - fs_info->sectorsize_bits) + fs_info->blocksize_bits) return -EINVAL; compression = BTRFS_COMPRESS_LZO; break; @@ -9473,41 +9473,41 @@ ssize_t btrfs_do_encoded_write(struct kiocb *iocb, struct iov_iter *from, * extents. * * Note that this is less strict than the current check we have that the - * compressed data must be at least one sector smaller than the + * compressed data must be at least one block smaller than the * decompressed data. We only want to enforce the weaker requirement * from old kernels that it is at least one byte smaller. */ if (orig_count >= encoded->unencoded_len) return -EINVAL; - /* The extent must start on a sector boundary. */ + /* The extent must start on a block boundary. */ start = iocb->ki_pos; - if (!IS_ALIGNED(start, fs_info->sectorsize)) + if (!IS_ALIGNED(start, fs_info->blocksize)) return -EINVAL; /* - * The extent must end on a sector boundary. However, we allow a write + * The extent must end on a block boundary. However, we allow a write * which ends at or extends i_size to have an unaligned length; we round * up the extent size and set i_size to the unaligned end. 
*/ if (start + encoded->len < inode->vfs_inode.i_size && - !IS_ALIGNED(start + encoded->len, fs_info->sectorsize)) + !IS_ALIGNED(start + encoded->len, fs_info->blocksize)) return -EINVAL; - /* Finally, the offset in the unencoded data must be sector-aligned. */ - if (!IS_ALIGNED(encoded->unencoded_offset, fs_info->sectorsize)) + /* Finally, the offset in the unencoded data must be block-aligned. */ + if (!IS_ALIGNED(encoded->unencoded_offset, fs_info->blocksize)) return -EINVAL; - num_bytes = ALIGN(encoded->len, fs_info->sectorsize); - ram_bytes = ALIGN(encoded->unencoded_len, fs_info->sectorsize); + num_bytes = ALIGN(encoded->len, fs_info->blocksize); + ram_bytes = ALIGN(encoded->unencoded_len, fs_info->blocksize); end = start + num_bytes - 1; /* * If the extent cannot be inline, the compressed data on disk must be - * sector-aligned. For convenience, we extend it with zeroes if it + * block-aligned. For convenience, we extend it with zeroes if it * isn't. */ - disk_num_bytes = ALIGN(orig_count, fs_info->sectorsize); + disk_num_bytes = ALIGN(orig_count, fs_info->blocksize); nr_folios = DIV_ROUND_UP(disk_num_bytes, PAGE_SIZE); folios = kvcalloc(nr_folios, sizeof(struct folio *), GFP_KERNEL_ACCOUNT); if (!folios) @@ -9903,7 +9903,7 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file, atomic_inc(&root->nr_swapfiles); spin_unlock(&root->root_item_lock); - isize = ALIGN_DOWN(inode->i_size, fs_info->sectorsize); + isize = ALIGN_DOWN(inode->i_size, fs_info->blocksize); lock_extent(io_tree, 0, isize - 1, &cached_state); while (prev_extent_end < isize) { @@ -10144,9 +10144,9 @@ void btrfs_update_inode_bytes(struct btrfs_inode *inode, * Verify that there are no ordered extents for a given file range. * * @inode: The target inode. - * @start: Start offset of the file range, should be sector size aligned. + * @start: Start offset of the file range, should be block size aligned. 
* @end: End offset (inclusive) of the file range, its value +1 should be - * sector size aligned. + * block size aligned. * * This should typically be used for cases where we locked an inode's VFS lock in * exclusive mode, we have also locked the inode's i_mmap_lock in exclusive mode, diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c index cdd373c27784..0c5b19c2d0db 100644 --- a/fs/btrfs/raid56.c +++ b/fs/btrfs/raid56.c @@ -1589,7 +1589,7 @@ static void verify_bio_data_sectors(struct btrfs_raid_bio *rbio, if (!test_bit(total_sector_nr, rbio->csum_bitmap)) continue; - ret = btrfs_check_sector_csum(fs_info, bvec->bv_page, + ret = btrfs_check_block_csum(fs_info, bvec->bv_page, bv_offset, csum_buf, expected_csum); if (ret < 0) set_bit(total_sector_nr, rbio->error_bitmap); @@ -1814,8 +1814,8 @@ static int verify_one_sector(struct btrfs_raid_bio *rbio, csum_expected = rbio->csum_buf + (stripe_nr * rbio->stripe_nsectors + sector_nr) * fs_info->csum_size; - ret = btrfs_check_sector_csum(fs_info, sector->page, sector->pgoff, - csum_buf, csum_expected); + ret = btrfs_check_block_csum(fs_info, sector->page, sector->pgoff, + csum_buf, csum_expected); return ret; } diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c index 5cec0875a707..383f0859202d 100644 --- a/fs/btrfs/scrub.c +++ b/fs/btrfs/scrub.c @@ -737,7 +737,7 @@ static void scrub_verify_one_block(struct scrub_stripe *stripe, int block_nr) return; } - ret = btrfs_check_sector_csum(fs_info, page, pgoff, csum_buf, block->csum); + ret = btrfs_check_block_csum(fs_info, page, pgoff, csum_buf, block->csum); if (ret < 0) { set_bit(block_nr, &stripe->csum_error_bitmap); set_bit(block_nr, &stripe->error_bitmap); From patchwork Wed Dec 18 09:41:27 2024 X-Patchwork-Submitter: Qu Wenruo X-Patchwork-Id: 13913312
From: Qu Wenruo To: linux-btrfs@vger.kernel.org Subject: [PATCH 11/18] btrfs: migrate raid56.[ch] to use block size terminology Date: Wed, 18 Dec 2024 20:11:27 +1030 Message-ID: <750e822aa037c4910bf71cedaa592d94370f9172.1734514696.git.wqu@suse.com> X-Mailer: git-send-email 2.47.1 X-Mailing-List: linux-btrfs@vger.kernel.org
Raid56 is a heavy user of the sector terminology, including a lot of raid56 internal structure names and comments, thus this involves quite a lot of renames. And since we're here, also fix the "vertical" typo too. Signed-off-by: Qu Wenruo --- fs/btrfs/raid56.c | 806 +++++++++++++++++++++++----------------------- fs/btrfs/raid56.h | 36 +-- 2 files changed, 421 insertions(+), 421 deletions(-) diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c index 0c5b19c2d0db..c4ca2453c414 100644 --- a/fs/btrfs/raid56.c +++ b/fs/btrfs/raid56.c @@ -66,9 +66,9 @@ static void btrfs_dump_rbio(const struct btrfs_fs_info *fs_info, dump_bioc(fs_info, rbio->bioc); btrfs_crit(fs_info, -"rbio flags=0x%lx nr_sectors=%u nr_data=%u real_stripes=%u stripe_nsectors=%u scrubp=%u dbitmap=0x%lx", - rbio->flags, rbio->nr_sectors, rbio->nr_data, - rbio->real_stripes, rbio->stripe_nsectors, +"rbio flags=0x%lx nr_blocks=%u nr_data=%u real_stripes=%u stripe_nblocks=%u scrubp=%u dbitmap=0x%lx", + rbio->flags, rbio->nr_blocks, rbio->nr_data, + rbio->real_stripes, rbio->stripe_nblocks, rbio->scrubp, rbio->dbitmap); } @@ -95,14 +95,14 @@ static void btrfs_dump_rbio(const struct btrfs_fs_info *fs_info, ASSERT((expr)); \ }) -#define
ASSERT_RBIO_SECTOR(expr, rbio, sector_nr) \ +#define ASSERT_RBIO_BLOCK(expr, rbio, block_nr) \ ({ \ if (IS_ENABLED(CONFIG_BTRFS_ASSERT) && unlikely(!(expr))) { \ const struct btrfs_fs_info *__fs_info = (rbio)->bioc ? \ (rbio)->bioc->fs_info : NULL; \ \ btrfs_dump_rbio(__fs_info, (rbio)); \ - btrfs_crit(__fs_info, "sector_nr=%d", (sector_nr)); \ + btrfs_crit(__fs_info, "block_nr=%d", (block_nr)); \ } \ ASSERT((expr)); \ }) @@ -134,11 +134,11 @@ struct btrfs_stripe_hash_table { }; /* - * A bvec like structure to present a sector inside a page. + * A bvec like structure to present a block inside a page. * - * Unlike bvec we don't need bvlen, as it's fixed to sectorsize. + * Unlike bvec we don't need bvlen, as it's fixed to blocksize. */ -struct sector_ptr { +struct block_ptr { struct page *page; unsigned int pgoff:24; unsigned int uptodate:8; @@ -156,8 +156,8 @@ static void free_raid_bio_pointers(struct btrfs_raid_bio *rbio) { bitmap_free(rbio->error_bitmap); kfree(rbio->stripe_pages); - kfree(rbio->bio_sectors); - kfree(rbio->stripe_sectors); + kfree(rbio->bio_blocks); + kfree(rbio->stripe_blocks); kfree(rbio->finish_pointers); } @@ -235,7 +235,7 @@ int btrfs_alloc_stripe_hash_table(struct btrfs_fs_info *info) /* * caching an rbio means to copy anything from the - * bio_sectors array into the stripe_pages array. We + * bio_blocks array into the stripe_pages array. 
We * use the page uptodate bit in the stripe cache array * to indicate if it has valid data * @@ -251,26 +251,26 @@ static void cache_rbio_pages(struct btrfs_raid_bio *rbio) if (ret) return; - for (i = 0; i < rbio->nr_sectors; i++) { + for (i = 0; i < rbio->nr_blocks; i++) { /* Some range not covered by bio (partial write), skip it */ - if (!rbio->bio_sectors[i].page) { + if (!rbio->bio_blocks[i].page) { /* - * Even if the sector is not covered by bio, if it is - * a data sector it should still be uptodate as it is + * Even if the block is not covered by bio, if it is + * a data block it should still be uptodate as it is * read from disk. */ - if (i < rbio->nr_data * rbio->stripe_nsectors) - ASSERT(rbio->stripe_sectors[i].uptodate); + if (i < rbio->nr_data * rbio->stripe_nblocks) + ASSERT(rbio->stripe_blocks[i].uptodate); continue; } - ASSERT(rbio->stripe_sectors[i].page); - memcpy_page(rbio->stripe_sectors[i].page, - rbio->stripe_sectors[i].pgoff, - rbio->bio_sectors[i].page, - rbio->bio_sectors[i].pgoff, - rbio->bioc->fs_info->sectorsize); - rbio->stripe_sectors[i].uptodate = 1; + ASSERT(rbio->stripe_blocks[i].page); + memcpy_page(rbio->stripe_blocks[i].page, + rbio->stripe_blocks[i].pgoff, + rbio->bio_blocks[i].page, + rbio->bio_blocks[i].pgoff, + rbio->bioc->fs_info->blocksize); + rbio->stripe_blocks[i].uptodate = 1; } set_bit(RBIO_CACHE_READY_BIT, &rbio->flags); } @@ -293,49 +293,49 @@ static int rbio_bucket(struct btrfs_raid_bio *rbio) return hash_64(num >> 16, BTRFS_STRIPE_HASH_TABLE_BITS); } -static bool full_page_sectors_uptodate(struct btrfs_raid_bio *rbio, +static bool full_page_blocks_uptodate(struct btrfs_raid_bio *rbio, unsigned int page_nr) { - const u32 sectorsize = rbio->bioc->fs_info->sectorsize; - const u32 sectors_per_page = PAGE_SIZE / sectorsize; + const u32 blocksize = rbio->bioc->fs_info->blocksize; + const u32 blocks_per_page = PAGE_SIZE / blocksize; int i; ASSERT(page_nr < rbio->nr_pages); - for (i = sectors_per_page * page_nr; - i < 
sectors_per_page * page_nr + sectors_per_page; + for (i = blocks_per_page * page_nr; + i < blocks_per_page * page_nr + blocks_per_page; i++) { - if (!rbio->stripe_sectors[i].uptodate) + if (!rbio->stripe_blocks[i].uptodate) return false; } return true; } /* - * Update the stripe_sectors[] array to use correct page and pgoff + * Update the stripe_blocks[] array to use correct page and pgoff * * Should be called every time any page pointer in stripes_pages[] got modified. */ -static void index_stripe_sectors(struct btrfs_raid_bio *rbio) +static void index_stripe_blocks(struct btrfs_raid_bio *rbio) { - const u32 sectorsize = rbio->bioc->fs_info->sectorsize; + const u32 blocksize = rbio->bioc->fs_info->blocksize; u32 offset; int i; - for (i = 0, offset = 0; i < rbio->nr_sectors; i++, offset += sectorsize) { + for (i = 0, offset = 0; i < rbio->nr_blocks; i++, offset += blocksize) { int page_index = offset >> PAGE_SHIFT; ASSERT(page_index < rbio->nr_pages); - rbio->stripe_sectors[i].page = rbio->stripe_pages[page_index]; - rbio->stripe_sectors[i].pgoff = offset_in_page(offset); + rbio->stripe_blocks[i].page = rbio->stripe_pages[page_index]; + rbio->stripe_blocks[i].pgoff = offset_in_page(offset); } } static void steal_rbio_page(struct btrfs_raid_bio *src, struct btrfs_raid_bio *dest, int page_nr) { - const u32 sectorsize = src->bioc->fs_info->sectorsize; - const u32 sectors_per_page = PAGE_SIZE / sectorsize; + const u32 blocksize = src->bioc->fs_info->blocksize; + const u32 blocks_per_page = PAGE_SIZE / blocksize; int i; if (dest->stripe_pages[page_nr]) @@ -343,32 +343,32 @@ static void steal_rbio_page(struct btrfs_raid_bio *src, dest->stripe_pages[page_nr] = src->stripe_pages[page_nr]; src->stripe_pages[page_nr] = NULL; - /* Also update the sector->uptodate bits. */ - for (i = sectors_per_page * page_nr; - i < sectors_per_page * page_nr + sectors_per_page; i++) - dest->stripe_sectors[i].uptodate = true; + /* Also update the block->uptodate bits. 
*/ + for (i = blocks_per_page * page_nr; + i < blocks_per_page * page_nr + blocks_per_page; i++) + dest->stripe_blocks[i].uptodate = true; } static bool is_data_stripe_page(struct btrfs_raid_bio *rbio, int page_nr) { - const int sector_nr = (page_nr << PAGE_SHIFT) >> - rbio->bioc->fs_info->sectorsize_bits; + const int block_nr = (page_nr << PAGE_SHIFT) >> + rbio->bioc->fs_info->blocksize_bits; /* - * We have ensured PAGE_SIZE is aligned with sectorsize, thus + * We have ensured PAGE_SIZE is aligned with blocksize, thus * we won't have a page which is half data half parity. * - * Thus if the first sector of the page belongs to data stripes, then + * Thus if the first block of the page belongs to data stripes, then * the full page belongs to data stripes. */ - return (sector_nr < rbio->nr_data * rbio->stripe_nsectors); + return (block_nr < rbio->nr_data * rbio->stripe_nblocks); } /* * Stealing an rbio means taking all the uptodate pages from the stripe array * in the source rbio and putting them into the destination rbio. * - * This will also update the involved stripe_sectors[] which are referring to + * This will also update the involved stripe_blocks[] which are referring to * the old pages. */ static void steal_rbio(struct btrfs_raid_bio *src, struct btrfs_raid_bio *dest) @@ -393,11 +393,11 @@ static void steal_rbio(struct btrfs_raid_bio *src, struct btrfs_raid_bio *dest) * all data stripe pages present and uptodate. */ ASSERT(p); - ASSERT(full_page_sectors_uptodate(src, i)); + ASSERT(full_page_blocks_uptodate(src, i)); steal_rbio_page(src, dest, i); } - index_stripe_sectors(dest); - index_stripe_sectors(src); + index_stripe_blocks(dest); + index_stripe_blocks(src); } /* @@ -414,7 +414,7 @@ static void merge_rbio(struct btrfs_raid_bio *dest, dest->bio_list_bytes += victim->bio_list_bytes; /* Also inherit the bitmaps from @victim. 
*/ bitmap_or(&dest->dbitmap, &victim->dbitmap, &dest->dbitmap, - dest->stripe_nsectors); + dest->stripe_nblocks); } /* @@ -667,39 +667,39 @@ static int rbio_can_merge(struct btrfs_raid_bio *last, return 1; } -static unsigned int rbio_stripe_sector_index(const struct btrfs_raid_bio *rbio, +static unsigned int rbio_stripe_block_index(const struct btrfs_raid_bio *rbio, unsigned int stripe_nr, - unsigned int sector_nr) + unsigned int block_nr) { ASSERT_RBIO_STRIPE(stripe_nr < rbio->real_stripes, rbio, stripe_nr); - ASSERT_RBIO_SECTOR(sector_nr < rbio->stripe_nsectors, rbio, sector_nr); + ASSERT_RBIO_BLOCK(block_nr < rbio->stripe_nblocks, rbio, block_nr); - return stripe_nr * rbio->stripe_nsectors + sector_nr; + return stripe_nr * rbio->stripe_nblocks + block_nr; } -/* Return a sector from rbio->stripe_sectors, not from the bio list */ -static struct sector_ptr *rbio_stripe_sector(const struct btrfs_raid_bio *rbio, +/* Return a block from rbio->stripe_blocks, not from the bio list */ +static struct block_ptr *rbio_stripe_block(const struct btrfs_raid_bio *rbio, unsigned int stripe_nr, - unsigned int sector_nr) + unsigned int block_nr) { - return &rbio->stripe_sectors[rbio_stripe_sector_index(rbio, stripe_nr, - sector_nr)]; + return &rbio->stripe_blocks[rbio_stripe_block_index(rbio, stripe_nr, + block_nr)]; } -/* Grab a sector inside P stripe */ -static struct sector_ptr *rbio_pstripe_sector(const struct btrfs_raid_bio *rbio, - unsigned int sector_nr) +/* Grab a block inside P stripe */ +static struct block_ptr *rbio_pstripe_block(const struct btrfs_raid_bio *rbio, + unsigned int block_nr) { - return rbio_stripe_sector(rbio, rbio->nr_data, sector_nr); + return rbio_stripe_block(rbio, rbio->nr_data, block_nr); } -/* Grab a sector inside Q stripe, return NULL if not RAID6 */ -static struct sector_ptr *rbio_qstripe_sector(const struct btrfs_raid_bio *rbio, - unsigned int sector_nr) +/* Grab a block inside Q stripe, return NULL if not RAID6 */ +static struct block_ptr 
*rbio_qstripe_block(const struct btrfs_raid_bio *rbio, + unsigned int block_nr) { if (rbio->nr_data + 1 == rbio->real_stripes) return NULL; - return rbio_stripe_sector(rbio, rbio->nr_data + 1, sector_nr); + return rbio_stripe_block(rbio, rbio->nr_data + 1, block_nr); } /* @@ -914,7 +914,7 @@ static void rbio_orig_end_io(struct btrfs_raid_bio *rbio, blk_status_t err) * do this before before unlock_stripe() so there will be no new bio * for this bio. */ - bitmap_clear(&rbio->dbitmap, 0, rbio->stripe_nsectors); + bitmap_clear(&rbio->dbitmap, 0, rbio->stripe_nblocks); /* * At this moment, rbio->bio_list is empty, however since rbio does not @@ -934,44 +934,44 @@ static void rbio_orig_end_io(struct btrfs_raid_bio *rbio, blk_status_t err) } /* - * Get a sector pointer specified by its @stripe_nr and @sector_nr. + * Get a block pointer specified by its @stripe_nr and @block_nr. * * @rbio: The raid bio * @stripe_nr: Stripe number, valid range [0, real_stripe) - * @sector_nr: Sector number inside the stripe, - * valid range [0, stripe_nsectors) - * @bio_list_only: Whether to use sectors inside the bio list only. + * @block_nr: Block number inside the stripe, + * valid range [0, stripe_nblocks) + * @bio_list_only: Whether to use blocks inside the bio list only. * * The read/modify/write code wants to reuse the original bio page as much - * as possible, and only use stripe_sectors as fallback.
*/ -static struct sector_ptr *sector_in_rbio(struct btrfs_raid_bio *rbio, - int stripe_nr, int sector_nr, +static struct block_ptr *block_in_rbio(struct btrfs_raid_bio *rbio, + int stripe_nr, int block_nr, bool bio_list_only) { - struct sector_ptr *sector; + struct block_ptr *block; int index; ASSERT_RBIO_STRIPE(stripe_nr >= 0 && stripe_nr < rbio->real_stripes, rbio, stripe_nr); - ASSERT_RBIO_SECTOR(sector_nr >= 0 && sector_nr < rbio->stripe_nsectors, - rbio, sector_nr); + ASSERT_RBIO_BLOCK(block_nr >= 0 && block_nr < rbio->stripe_nblocks, + rbio, block_nr); - index = stripe_nr * rbio->stripe_nsectors + sector_nr; - ASSERT(index >= 0 && index < rbio->nr_sectors); + index = stripe_nr * rbio->stripe_nblocks + block_nr; + ASSERT(index >= 0 && index < rbio->nr_blocks); spin_lock(&rbio->bio_list_lock); - sector = &rbio->bio_sectors[index]; - if (sector->page || bio_list_only) { - /* Don't return sector without a valid page pointer */ - if (!sector->page) - sector = NULL; + block = &rbio->bio_blocks[index]; + if (block->page || bio_list_only) { + /* Don't return block without a valid page pointer */ + if (!block->page) + block = NULL; spin_unlock(&rbio->bio_list_lock); - return sector; + return block; } spin_unlock(&rbio->bio_list_lock); - return &rbio->stripe_sectors[index]; + return &rbio->stripe_blocks[index]; } /* @@ -984,18 +984,18 @@ static struct btrfs_raid_bio *alloc_rbio(struct btrfs_fs_info *fs_info, const unsigned int real_stripes = bioc->num_stripes - bioc->replace_nr_stripes; const unsigned int stripe_npages = BTRFS_STRIPE_LEN >> PAGE_SHIFT; const unsigned int num_pages = stripe_npages * real_stripes; - const unsigned int stripe_nsectors = - BTRFS_STRIPE_LEN >> fs_info->sectorsize_bits; - const unsigned int num_sectors = stripe_nsectors * real_stripes; + const unsigned int stripe_nblocks = + BTRFS_STRIPE_LEN >> fs_info->blocksize_bits; + const unsigned int num_blocks = stripe_nblocks * real_stripes; struct btrfs_raid_bio *rbio; - /* PAGE_SIZE must also be 
aligned to sectorsize for subpage support */ - ASSERT(IS_ALIGNED(PAGE_SIZE, fs_info->sectorsize)); + /* PAGE_SIZE must also be aligned to blocksize for subpage support */ + ASSERT(IS_ALIGNED(PAGE_SIZE, fs_info->blocksize)); /* - * Our current stripe len should be fixed to 64k thus stripe_nsectors + * Our current stripe len should be fixed to 64k thus stripe_nblocks * (at most 16) should be no larger than BITS_PER_LONG. */ - ASSERT(stripe_nsectors <= BITS_PER_LONG); + ASSERT(stripe_nblocks <= BITS_PER_LONG); /* * Real stripes must be between 2 (2 disks RAID5, aka RAID1) and 256 @@ -1009,14 +1009,14 @@ static struct btrfs_raid_bio *alloc_rbio(struct btrfs_fs_info *fs_info, return ERR_PTR(-ENOMEM); rbio->stripe_pages = kcalloc(num_pages, sizeof(struct page *), GFP_NOFS); - rbio->bio_sectors = kcalloc(num_sectors, sizeof(struct sector_ptr), + rbio->bio_blocks = kcalloc(num_blocks, sizeof(struct block_ptr), GFP_NOFS); - rbio->stripe_sectors = kcalloc(num_sectors, sizeof(struct sector_ptr), + rbio->stripe_blocks = kcalloc(num_blocks, sizeof(struct block_ptr), GFP_NOFS); rbio->finish_pointers = kcalloc(real_stripes, sizeof(void *), GFP_NOFS); - rbio->error_bitmap = bitmap_zalloc(num_sectors, GFP_NOFS); + rbio->error_bitmap = bitmap_zalloc(num_blocks, GFP_NOFS); - if (!rbio->stripe_pages || !rbio->bio_sectors || !rbio->stripe_sectors || + if (!rbio->stripe_pages || !rbio->bio_blocks || !rbio->stripe_blocks || !rbio->finish_pointers || !rbio->error_bitmap) { free_raid_bio_pointers(rbio); kfree(rbio); @@ -1032,10 +1032,10 @@ static struct btrfs_raid_bio *alloc_rbio(struct btrfs_fs_info *fs_info, btrfs_get_bioc(bioc); rbio->bioc = bioc; rbio->nr_pages = num_pages; - rbio->nr_sectors = num_sectors; + rbio->nr_blocks = num_blocks; rbio->real_stripes = real_stripes; rbio->stripe_npages = stripe_npages; - rbio->stripe_nsectors = stripe_nsectors; + rbio->stripe_nblocks = stripe_nblocks; refcount_set(&rbio->refs, 1); atomic_set(&rbio->stripes_pending, 0); @@ -1054,8 +1054,8 @@ 
static int alloc_rbio_pages(struct btrfs_raid_bio *rbio) ret = btrfs_alloc_page_array(rbio->nr_pages, rbio->stripe_pages, false); if (ret < 0) return ret; - /* Mapping all sectors */ - index_stripe_sectors(rbio); + /* Mapping all blocks */ + index_stripe_blocks(rbio); return 0; } @@ -1070,17 +1070,17 @@ static int alloc_rbio_parity_pages(struct btrfs_raid_bio *rbio) if (ret < 0) return ret; - index_stripe_sectors(rbio); + index_stripe_blocks(rbio); return 0; } /* - * Return the total number of errors found in the vertical stripe of @sector_nr. + * Return the total number of errors found in the vertical stripe of @block_nr. * * @faila and @failb will also be updated to the first and second stripe * number of the errors. */ -static int get_rbio_veritical_errors(struct btrfs_raid_bio *rbio, int sector_nr, +static int get_rbio_vertical_errors(struct btrfs_raid_bio *rbio, int block_nr, int *faila, int *failb) { int stripe_nr; @@ -1097,9 +1097,9 @@ static int get_rbio_veritical_errors(struct btrfs_raid_bio *rbio, int sector_nr, } for (stripe_nr = 0; stripe_nr < rbio->real_stripes; stripe_nr++) { - int total_sector_nr = stripe_nr * rbio->stripe_nsectors + sector_nr; + int total_block_nr = stripe_nr * rbio->stripe_nblocks + block_nr; - if (test_bit(total_sector_nr, rbio->error_bitmap)) { + if (test_bit(total_block_nr, rbio->error_bitmap)) { found_errors++; if (faila) { /* Update faila and failb. */ @@ -1114,19 +1114,19 @@ static int get_rbio_veritical_errors(struct btrfs_raid_bio *rbio, int sector_nr, } /* - * Add a single sector @sector into our list of bios for IO. + * Add a single block @block into our list of bios for IO. * * Return 0 if everything went well. * Return <0 for error. 
*/ -static int rbio_add_io_sector(struct btrfs_raid_bio *rbio, +static int rbio_add_io_block(struct btrfs_raid_bio *rbio, struct bio_list *bio_list, - struct sector_ptr *sector, + struct block_ptr *block, unsigned int stripe_nr, - unsigned int sector_nr, + unsigned int block_nr, enum req_op op) { - const u32 sectorsize = rbio->bioc->fs_info->sectorsize; + const u32 blocksize = rbio->bioc->fs_info->blocksize; struct bio *last = bio_list->tail; int ret; struct bio *bio; @@ -1140,22 +1140,22 @@ static int rbio_add_io_sector(struct btrfs_raid_bio *rbio, */ ASSERT_RBIO_STRIPE(stripe_nr >= 0 && stripe_nr < rbio->bioc->num_stripes, rbio, stripe_nr); - ASSERT_RBIO_SECTOR(sector_nr >= 0 && sector_nr < rbio->stripe_nsectors, - rbio, sector_nr); - ASSERT(sector->page); + ASSERT_RBIO_BLOCK(block_nr >= 0 && block_nr < rbio->stripe_nblocks, + rbio, block_nr); + ASSERT(block->page); stripe = &rbio->bioc->stripes[stripe_nr]; - disk_start = stripe->physical + sector_nr * sectorsize; + disk_start = stripe->physical + block_nr * blocksize; /* if the device is missing, just fail this stripe */ if (!stripe->dev->bdev) { int found_errors; - set_bit(stripe_nr * rbio->stripe_nsectors + sector_nr, + set_bit(stripe_nr * rbio->stripe_nblocks + block_nr, rbio->error_bitmap); /* Check if we have reached tolerance early. 
*/ - found_errors = get_rbio_veritical_errors(rbio, sector_nr, + found_errors = get_rbio_vertical_errors(rbio, block_nr, NULL, NULL); if (found_errors > rbio->bioc->max_errors) return -EIO; @@ -1173,9 +1173,9 @@ static int rbio_add_io_sector(struct btrfs_raid_bio *rbio, */ if (last_end == disk_start && !last->bi_status && last->bi_bdev == stripe->dev->bdev) { - ret = bio_add_page(last, sector->page, sectorsize, - sector->pgoff); - if (ret == sectorsize) + ret = bio_add_page(last, block->page, blocksize, + block->pgoff); + if (ret == blocksize) return 0; } } @@ -1187,14 +1187,14 @@ static int rbio_add_io_sector(struct btrfs_raid_bio *rbio, bio->bi_iter.bi_sector = disk_start >> SECTOR_SHIFT; bio->bi_private = rbio; - __bio_add_page(bio, sector->page, sectorsize, sector->pgoff); + __bio_add_page(bio, block->page, blocksize, block->pgoff); bio_list_add(bio_list, bio); return 0; } static void index_one_bio(struct btrfs_raid_bio *rbio, struct bio *bio) { - const u32 sectorsize = rbio->bioc->fs_info->sectorsize; + const u32 blocksize = rbio->bioc->fs_info->blocksize; struct bio_vec bvec; struct bvec_iter iter; u32 offset = (bio->bi_iter.bi_sector << SECTOR_SHIFT) - @@ -1204,13 +1204,13 @@ static void index_one_bio(struct btrfs_raid_bio *rbio, struct bio *bio) u32 bvec_offset; for (bvec_offset = 0; bvec_offset < bvec.bv_len; - bvec_offset += sectorsize, offset += sectorsize) { - int index = offset / sectorsize; - struct sector_ptr *sector = &rbio->bio_sectors[index]; + bvec_offset += blocksize, offset += blocksize) { + int index = offset / blocksize; + struct block_ptr *block = &rbio->bio_blocks[index]; - sector->page = bvec.bv_page; - sector->pgoff = bvec.bv_offset + bvec_offset; - ASSERT(sector->pgoff < PAGE_SIZE); + block->page = bvec.bv_page; + block->pgoff = bvec.bv_offset + bvec_offset; + ASSERT(block->pgoff < PAGE_SIZE); } } } @@ -1290,43 +1290,43 @@ static void assert_rbio(struct btrfs_raid_bio *rbio) } /* Generate PQ for one vertical stripe. 
*/ -static void generate_pq_vertical(struct btrfs_raid_bio *rbio, int sectornr) +static void generate_pq_vertical(struct btrfs_raid_bio *rbio, int blocknr) { void **pointers = rbio->finish_pointers; - const u32 sectorsize = rbio->bioc->fs_info->sectorsize; - struct sector_ptr *sector; + const u32 blocksize = rbio->bioc->fs_info->blocksize; + struct block_ptr *block; int stripe; const bool has_qstripe = rbio->bioc->map_type & BTRFS_BLOCK_GROUP_RAID6; - /* First collect one sector from each data stripe */ + /* First collect one block from each data stripe */ for (stripe = 0; stripe < rbio->nr_data; stripe++) { - sector = sector_in_rbio(rbio, stripe, sectornr, 0); - pointers[stripe] = kmap_local_page(sector->page) + - sector->pgoff; + block = block_in_rbio(rbio, stripe, blocknr, 0); + pointers[stripe] = kmap_local_page(block->page) + + block->pgoff; } /* Then add the parity stripe */ - sector = rbio_pstripe_sector(rbio, sectornr); - sector->uptodate = 1; - pointers[stripe++] = kmap_local_page(sector->page) + sector->pgoff; + block = rbio_pstripe_block(rbio, blocknr); + block->uptodate = 1; + pointers[stripe++] = kmap_local_page(block->page) + block->pgoff; if (has_qstripe) { /* * RAID6, add the qstripe and call the library function * to fill in our p/q */ - sector = rbio_qstripe_sector(rbio, sectornr); - sector->uptodate = 1; - pointers[stripe++] = kmap_local_page(sector->page) + - sector->pgoff; + block = rbio_qstripe_block(rbio, blocknr); + block->uptodate = 1; + pointers[stripe++] = kmap_local_page(block->page) + + block->pgoff; assert_rbio(rbio); - raid6_call.gen_syndrome(rbio->real_stripes, sectorsize, + raid6_call.gen_syndrome(rbio->real_stripes, blocksize, pointers); } else { /* raid5 */ - memcpy(pointers[rbio->nr_data], pointers[0], sectorsize); - run_xor(pointers + 1, rbio->nr_data - 1, sectorsize); + memcpy(pointers[rbio->nr_data], pointers[0], blocksize); + run_xor(pointers + 1, rbio->nr_data - 1, blocksize); } for (stripe = stripe - 1; stripe >= 0; 
stripe--) kunmap_local(pointers[stripe]); @@ -1335,48 +1335,48 @@ static void generate_pq_vertical(struct btrfs_raid_bio *rbio, int sectornr) static int rmw_assemble_write_bios(struct btrfs_raid_bio *rbio, struct bio_list *bio_list) { - /* The total sector number inside the full stripe. */ - int total_sector_nr; - int sectornr; + /* The total block number inside the full stripe. */ + int total_block_nr; + int blocknr; int stripe; int ret; ASSERT(bio_list_size(bio_list) == 0); - /* We should have at least one data sector. */ - ASSERT(bitmap_weight(&rbio->dbitmap, rbio->stripe_nsectors)); + /* We should have at least one data block. */ + ASSERT(bitmap_weight(&rbio->dbitmap, rbio->stripe_nblocks)); /* * Reset errors, as we may have errors inherited from from degraded * write. */ - bitmap_clear(rbio->error_bitmap, 0, rbio->nr_sectors); + bitmap_clear(rbio->error_bitmap, 0, rbio->nr_blocks); /* * Start assembly. Make bios for everything from the higher layers (the * bio_list in our rbio) and our P/Q. Ignore everything else. */ - for (total_sector_nr = 0; total_sector_nr < rbio->nr_sectors; - total_sector_nr++) { - struct sector_ptr *sector; + for (total_block_nr = 0; total_block_nr < rbio->nr_blocks; + total_block_nr++) { + struct block_ptr *block; - stripe = total_sector_nr / rbio->stripe_nsectors; - sectornr = total_sector_nr % rbio->stripe_nsectors; + stripe = total_block_nr / rbio->stripe_nblocks; + blocknr = total_block_nr % rbio->stripe_nblocks; /* This vertical stripe has no data, skip it. 
*/ - if (!test_bit(sectornr, &rbio->dbitmap)) + if (!test_bit(blocknr, &rbio->dbitmap)) continue; if (stripe < rbio->nr_data) { - sector = sector_in_rbio(rbio, stripe, sectornr, 1); - if (!sector) + block = block_in_rbio(rbio, stripe, blocknr, 1); + if (!block) continue; } else { - sector = rbio_stripe_sector(rbio, stripe, sectornr); + block = rbio_stripe_block(rbio, stripe, blocknr); } - ret = rbio_add_io_sector(rbio, bio_list, sector, stripe, - sectornr, REQ_OP_WRITE); + ret = rbio_add_io_block(rbio, bio_list, block, stripe, + blocknr, REQ_OP_WRITE); if (ret) goto error; } @@ -1391,12 +1391,12 @@ static int rmw_assemble_write_bios(struct btrfs_raid_bio *rbio, */ ASSERT(rbio->bioc->replace_stripe_src >= 0); - for (total_sector_nr = 0; total_sector_nr < rbio->nr_sectors; - total_sector_nr++) { - struct sector_ptr *sector; + for (total_block_nr = 0; total_block_nr < rbio->nr_blocks; + total_block_nr++) { + struct block_ptr *block; - stripe = total_sector_nr / rbio->stripe_nsectors; - sectornr = total_sector_nr % rbio->stripe_nsectors; + stripe = total_block_nr / rbio->stripe_nblocks; + blocknr = total_block_nr % rbio->stripe_nblocks; /* * For RAID56, there is only one device that can be replaced, @@ -1406,28 +1406,28 @@ static int rmw_assemble_write_bios(struct btrfs_raid_bio *rbio, if (stripe != rbio->bioc->replace_stripe_src) { /* * We can skip the whole stripe completely, note - * total_sector_nr will be increased by one anyway. + * total_block_nr will be increased by one anyway. */ - ASSERT(sectornr == 0); - total_sector_nr += rbio->stripe_nsectors - 1; + ASSERT(blocknr == 0); + total_block_nr += rbio->stripe_nblocks - 1; continue; } /* This vertical stripe has no data, skip it. 
*/ - if (!test_bit(sectornr, &rbio->dbitmap)) + if (!test_bit(blocknr, &rbio->dbitmap)) continue; if (stripe < rbio->nr_data) { - sector = sector_in_rbio(rbio, stripe, sectornr, 1); - if (!sector) + block = block_in_rbio(rbio, stripe, blocknr, 1); + if (!block) continue; } else { - sector = rbio_stripe_sector(rbio, stripe, sectornr); + block = rbio_stripe_block(rbio, stripe, blocknr); } - ret = rbio_add_io_sector(rbio, bio_list, sector, + ret = rbio_add_io_block(rbio, bio_list, block, rbio->real_stripes, - sectornr, REQ_OP_WRITE); + blocknr, REQ_OP_WRITE); if (ret) goto error; } @@ -1443,12 +1443,12 @@ static void set_rbio_range_error(struct btrfs_raid_bio *rbio, struct bio *bio) struct btrfs_fs_info *fs_info = rbio->bioc->fs_info; u32 offset = (bio->bi_iter.bi_sector << SECTOR_SHIFT) - rbio->bioc->full_stripe_logical; - int total_nr_sector = offset >> fs_info->sectorsize_bits; + int total_nr_block = offset >> fs_info->blocksize_bits; - ASSERT(total_nr_sector < rbio->nr_data * rbio->stripe_nsectors); + ASSERT(total_nr_block < rbio->nr_data * rbio->stripe_nblocks); - bitmap_set(rbio->error_bitmap, total_nr_sector, - bio->bi_iter.bi_size >> fs_info->sectorsize_bits); + bitmap_set(rbio->error_bitmap, total_nr_block, + bio->bi_iter.bi_size >> fs_info->blocksize_bits); /* * Special handling for raid56_alloc_missing_rbio() used by @@ -1464,8 +1464,8 @@ static void set_rbio_range_error(struct btrfs_raid_bio *rbio, struct bio *bio) if (!rbio->bioc->stripes[stripe_nr].dev->bdev) { found_missing = true; bitmap_set(rbio->error_bitmap, - stripe_nr * rbio->stripe_nsectors, - rbio->stripe_nsectors); + stripe_nr * rbio->stripe_nblocks, + rbio->stripe_nblocks); } } ASSERT(found_missing); @@ -1474,19 +1474,19 @@ static void set_rbio_range_error(struct btrfs_raid_bio *rbio, struct bio *bio) /* * For subpage case, we can no longer set page Up-to-date directly for - * stripe_pages[], thus we need to locate the sector. + * stripe_pages[], thus we need to locate the block. 
*/ -static struct sector_ptr *find_stripe_sector(struct btrfs_raid_bio *rbio, +static struct block_ptr *find_stripe_block(struct btrfs_raid_bio *rbio, struct page *page, unsigned int pgoff) { int i; - for (i = 0; i < rbio->nr_sectors; i++) { - struct sector_ptr *sector = &rbio->stripe_sectors[i]; + for (i = 0; i < rbio->nr_blocks; i++) { + struct block_ptr *block = &rbio->stripe_blocks[i]; - if (sector->page == page && sector->pgoff == pgoff) - return sector; + if (block->page == page && block->pgoff == pgoff) + return block; } return NULL; } @@ -1497,48 +1497,48 @@ static struct sector_ptr *find_stripe_sector(struct btrfs_raid_bio *rbio, */ static void set_bio_pages_uptodate(struct btrfs_raid_bio *rbio, struct bio *bio) { - const u32 sectorsize = rbio->bioc->fs_info->sectorsize; + const u32 blocksize = rbio->bioc->fs_info->blocksize; struct bio_vec *bvec; struct bvec_iter_all iter_all; ASSERT(!bio_flagged(bio, BIO_CLONED)); bio_for_each_segment_all(bvec, bio, iter_all) { - struct sector_ptr *sector; + struct block_ptr *block; int pgoff; for (pgoff = bvec->bv_offset; pgoff - bvec->bv_offset < bvec->bv_len; - pgoff += sectorsize) { - sector = find_stripe_sector(rbio, bvec->bv_page, pgoff); - ASSERT(sector); - if (sector) - sector->uptodate = 1; + pgoff += blocksize) { + block = find_stripe_block(rbio, bvec->bv_page, pgoff); + ASSERT(block); + if (block) + block->uptodate = 1; } } } -static int get_bio_sector_nr(struct btrfs_raid_bio *rbio, struct bio *bio) +static int get_bio_block_nr(struct btrfs_raid_bio *rbio, struct bio *bio) { struct bio_vec *bv = bio_first_bvec_all(bio); int i; - for (i = 0; i < rbio->nr_sectors; i++) { - struct sector_ptr *sector; + for (i = 0; i < rbio->nr_blocks; i++) { + struct block_ptr *block; - sector = &rbio->stripe_sectors[i]; - if (sector->page == bv->bv_page && sector->pgoff == bv->bv_offset) + block = &rbio->stripe_blocks[i]; + if (block->page == bv->bv_page && block->pgoff == bv->bv_offset) break; - sector = &rbio->bio_sectors[i]; 
- if (sector->page == bv->bv_page && sector->pgoff == bv->bv_offset) + block = &rbio->bio_blocks[i]; + if (block->page == bv->bv_page && block->pgoff == bv->bv_offset) break; } - ASSERT(i < rbio->nr_sectors); + ASSERT(i < rbio->nr_blocks); return i; } static void rbio_update_error_bitmap(struct btrfs_raid_bio *rbio, struct bio *bio) { - int total_sector_nr = get_bio_sector_nr(rbio, bio); + int total_block_nr = get_bio_block_nr(rbio, bio); u32 bio_size = 0; struct bio_vec *bvec; int i; @@ -1552,17 +1552,17 @@ static void rbio_update_error_bitmap(struct btrfs_raid_bio *rbio, struct bio *bi * * Instead use set_bit() for each bit, as set_bit() itself is atomic. */ - for (i = total_sector_nr; i < total_sector_nr + - (bio_size >> rbio->bioc->fs_info->sectorsize_bits); i++) + for (i = total_block_nr; i < total_block_nr + + (bio_size >> rbio->bioc->fs_info->blocksize_bits); i++) set_bit(i, rbio->error_bitmap); } -/* Verify the data sectors at read time. */ -static void verify_bio_data_sectors(struct btrfs_raid_bio *rbio, +/* Verify the data blocks at read time. */ +static void verify_bio_data_blocks(struct btrfs_raid_bio *rbio, struct bio *bio) { struct btrfs_fs_info *fs_info = rbio->bioc->fs_info; - int total_sector_nr = get_bio_sector_nr(rbio, bio); + int total_block_nr = get_bio_block_nr(rbio, bio); struct bio_vec *bvec; struct bvec_iter_all iter_all; @@ -1571,7 +1571,7 @@ static void verify_bio_data_sectors(struct btrfs_raid_bio *rbio, return; /* P/Q stripes, they have no data csum to verify against. 
*/ - if (total_sector_nr >= rbio->nr_data * rbio->stripe_nsectors) + if (total_block_nr >= rbio->nr_data * rbio->stripe_nblocks) return; bio_for_each_segment_all(bvec, bio, iter_all) { @@ -1579,20 +1579,20 @@ static void verify_bio_data_sectors(struct btrfs_raid_bio *rbio, for (bv_offset = bvec->bv_offset; bv_offset < bvec->bv_offset + bvec->bv_len; - bv_offset += fs_info->sectorsize, total_sector_nr++) { + bv_offset += fs_info->blocksize, total_block_nr++) { u8 csum_buf[BTRFS_CSUM_SIZE]; u8 *expected_csum = rbio->csum_buf + - total_sector_nr * fs_info->csum_size; + total_block_nr * fs_info->csum_size; int ret; - /* No csum for this sector, skip to the next sector. */ - if (!test_bit(total_sector_nr, rbio->csum_bitmap)) + /* No csum for this block, skip to the next block. */ + if (!test_bit(total_block_nr, rbio->csum_bitmap)) continue; ret = btrfs_check_block_csum(fs_info, bvec->bv_page, bv_offset, csum_buf, expected_csum); if (ret < 0) - set_bit(total_sector_nr, rbio->error_bitmap); + set_bit(total_block_nr, rbio->error_bitmap); } } } @@ -1605,7 +1605,7 @@ static void raid_wait_read_end_io(struct bio *bio) rbio_update_error_bitmap(rbio, bio); } else { set_bio_pages_uptodate(rbio, bio); - verify_bio_data_sectors(rbio, bio); + verify_bio_data_blocks(rbio, bio); } bio_put(bio); @@ -1643,7 +1643,7 @@ static int alloc_rbio_data_pages(struct btrfs_raid_bio *rbio) if (ret < 0) return ret; - index_stripe_sectors(rbio); + index_stripe_blocks(rbio); return 0; } @@ -1720,7 +1720,7 @@ static void rbio_add_bio(struct btrfs_raid_bio *rbio, struct bio *orig_bio) const u64 orig_logical = orig_bio->bi_iter.bi_sector << SECTOR_SHIFT; const u64 full_stripe_start = rbio->bioc->full_stripe_logical; const u32 orig_len = orig_bio->bi_iter.bi_size; - const u32 sectorsize = fs_info->sectorsize; + const u32 blocksize = fs_info->blocksize; u64 cur_logical; ASSERT_RBIO_LOGICAL(orig_logical >= full_stripe_start && @@ -1733,9 +1733,9 @@ static void rbio_add_bio(struct btrfs_raid_bio *rbio, 
struct bio *orig_bio) /* Update the dbitmap. */ for (cur_logical = orig_logical; cur_logical < orig_logical + orig_len; - cur_logical += sectorsize) { + cur_logical += blocksize) { int bit = ((u32)(cur_logical - full_stripe_start) >> - fs_info->sectorsize_bits) % rbio->stripe_nsectors; + fs_info->blocksize_bits) % rbio->stripe_nblocks; set_bit(bit, &rbio->dbitmap); } @@ -1784,11 +1784,11 @@ void raid56_parity_write(struct bio *bio, struct btrfs_io_context *bioc) start_async_work(rbio, rmw_rbio_work); } -static int verify_one_sector(struct btrfs_raid_bio *rbio, - int stripe_nr, int sector_nr) +static int verify_one_block(struct btrfs_raid_bio *rbio, + int stripe_nr, int block_nr) { struct btrfs_fs_info *fs_info = rbio->bioc->fs_info; - struct sector_ptr *sector; + struct block_ptr *block; u8 csum_buf[BTRFS_CSUM_SIZE]; u8 *csum_expected; int ret; @@ -1804,32 +1804,32 @@ static int verify_one_sector(struct btrfs_raid_bio *rbio, * bio list if possible. */ if (rbio->operation == BTRFS_RBIO_READ_REBUILD) { - sector = sector_in_rbio(rbio, stripe_nr, sector_nr, 0); + block = block_in_rbio(rbio, stripe_nr, block_nr, 0); } else { - sector = rbio_stripe_sector(rbio, stripe_nr, sector_nr); + block = rbio_stripe_block(rbio, stripe_nr, block_nr); } - ASSERT(sector->page); + ASSERT(block->page); csum_expected = rbio->csum_buf + - (stripe_nr * rbio->stripe_nsectors + sector_nr) * + (stripe_nr * rbio->stripe_nblocks + block_nr) * fs_info->csum_size; - ret = btrfs_check_block_csum(fs_info, sector->page, sector->pgoff, + ret = btrfs_check_block_csum(fs_info, block->page, block->pgoff, csum_buf, csum_expected); return ret; } /* - * Recover a vertical stripe specified by @sector_nr. + * Recover a vertical stripe specified by @block_nr. * @*pointers are the pre-allocated pointers by the caller, so we don't * need to allocate/free the pointers again and again. 
*/ -static int recover_vertical(struct btrfs_raid_bio *rbio, int sector_nr, +static int recover_vertical(struct btrfs_raid_bio *rbio, int block_nr, void **pointers, void **unmap_array) { struct btrfs_fs_info *fs_info = rbio->bioc->fs_info; - struct sector_ptr *sector; - const u32 sectorsize = fs_info->sectorsize; + struct block_ptr *block; + const u32 blocksize = fs_info->blocksize; int found_errors; int faila; int failb; @@ -1841,10 +1841,10 @@ static int recover_vertical(struct btrfs_raid_bio *rbio, int sector_nr, * which we have data when doing parity scrub. */ if (rbio->operation == BTRFS_RBIO_PARITY_SCRUB && - !test_bit(sector_nr, &rbio->dbitmap)) + !test_bit(block_nr, &rbio->dbitmap)) return 0; - found_errors = get_rbio_veritical_errors(rbio, sector_nr, &faila, + found_errors = get_rbio_vertical_errors(rbio, block_nr, &faila, &failb); /* * No errors in the vertical stripe, skip it. Can happen for recovery @@ -1857,7 +1857,7 @@ static int recover_vertical(struct btrfs_raid_bio *rbio, int sector_nr, return -EIO; /* - * Setup our array of pointers with sectors from each stripe + * Setup our array of pointers with blocks from each stripe * * NOTE: store a duplicate array of pointers to preserve the * pointer order. @@ -1868,13 +1868,13 @@ static int recover_vertical(struct btrfs_raid_bio *rbio, int sector_nr, * bio list if possible. 
*/ if (rbio->operation == BTRFS_RBIO_READ_REBUILD) { - sector = sector_in_rbio(rbio, stripe_nr, sector_nr, 0); + block = block_in_rbio(rbio, stripe_nr, block_nr, 0); } else { - sector = rbio_stripe_sector(rbio, stripe_nr, sector_nr); + block = rbio_stripe_block(rbio, stripe_nr, block_nr); } - ASSERT(sector->page); - pointers[stripe_nr] = kmap_local_page(sector->page) + - sector->pgoff; + ASSERT(block->page); + pointers[stripe_nr] = kmap_local_page(block->page) + + block->pgoff; unmap_array[stripe_nr] = pointers[stripe_nr]; } @@ -1920,10 +1920,10 @@ static int recover_vertical(struct btrfs_raid_bio *rbio, int sector_nr, } if (failb == rbio->real_stripes - 2) { - raid6_datap_recov(rbio->real_stripes, sectorsize, + raid6_datap_recov(rbio->real_stripes, blocksize, faila, pointers); } else { - raid6_2data_recov(rbio->real_stripes, sectorsize, + raid6_2data_recov(rbio->real_stripes, blocksize, faila, failb, pointers); } } else { @@ -1933,7 +1933,7 @@ static int recover_vertical(struct btrfs_raid_bio *rbio, int sector_nr, ASSERT(failb == -1); pstripe: /* Copy parity block into failed block to start with */ - memcpy(pointers[faila], pointers[rbio->nr_data], sectorsize); + memcpy(pointers[faila], pointers[rbio->nr_data], blocksize); /* Rearrange the pointer array */ p = pointers[faila]; @@ -1943,35 +1943,35 @@ static int recover_vertical(struct btrfs_raid_bio *rbio, int sector_nr, pointers[rbio->nr_data - 1] = p; /* Xor in the rest */ - run_xor(pointers, rbio->nr_data - 1, sectorsize); + run_xor(pointers, rbio->nr_data - 1, blocksize); } /* * No matter if this is a RMW or recovery, we should have all - * failed sectors repaired in the vertical stripe, thus they are now + * failed blocks repaired in the vertical stripe, thus they are now * uptodate. * Especially if we determine to cache the rbio, we need to - * have at least all data sectors uptodate. + * have at least all data blocks uptodate. 
* - * If possible, also check if the repaired sector matches its data + * If possible, also check if the repaired block matches its data * checksum. */ if (faila >= 0) { - ret = verify_one_sector(rbio, faila, sector_nr); + ret = verify_one_block(rbio, faila, block_nr); if (ret < 0) goto cleanup; - sector = rbio_stripe_sector(rbio, faila, sector_nr); - sector->uptodate = 1; + block = rbio_stripe_block(rbio, faila, block_nr); + block->uptodate = 1; } if (failb >= 0) { - ret = verify_one_sector(rbio, failb, sector_nr); + ret = verify_one_block(rbio, failb, block_nr); if (ret < 0) goto cleanup; - sector = rbio_stripe_sector(rbio, failb, sector_nr); - sector->uptodate = 1; + block = rbio_stripe_block(rbio, failb, block_nr); + block->uptodate = 1; } cleanup: @@ -1980,15 +1980,15 @@ static int recover_vertical(struct btrfs_raid_bio *rbio, int sector_nr, return ret; } -static int recover_sectors(struct btrfs_raid_bio *rbio) +static int recover_blocks(struct btrfs_raid_bio *rbio) { void **pointers = NULL; void **unmap_array = NULL; - int sectornr; + int blocknr; int ret = 0; /* - * @pointers array stores the pointer for each sector. + * @pointers array stores the pointer for each block. * * @unmap_array stores copy of pointers that does not get reordered * during reconstruction so that kunmap_local works. 
@@ -2008,8 +2008,8 @@ static int recover_sectors(struct btrfs_raid_bio *rbio) index_rbio_pages(rbio); - for (sectornr = 0; sectornr < rbio->stripe_nsectors; sectornr++) { - ret = recover_vertical(rbio, sectornr, pointers, unmap_array); + for (blocknr = 0; blocknr < rbio->stripe_nblocks; blocknr++) { + ret = recover_vertical(rbio, blocknr, pointers, unmap_array); if (ret < 0) break; } @@ -2023,16 +2023,16 @@ static int recover_sectors(struct btrfs_raid_bio *rbio) static void recover_rbio(struct btrfs_raid_bio *rbio) { struct bio_list bio_list = BIO_EMPTY_LIST; - int total_sector_nr; + int total_block_nr; int ret = 0; /* * Either we're doing recover for a read failure or degraded write, * caller should have set error bitmap correctly. */ - ASSERT(bitmap_weight(rbio->error_bitmap, rbio->nr_sectors)); + ASSERT(bitmap_weight(rbio->error_bitmap, rbio->nr_blocks)); - /* For recovery, we need to read all sectors including P/Q. */ + /* For recovery, we need to read all blocks including P/Q. */ ret = alloc_rbio_pages(rbio); if (ret < 0) goto out; @@ -2041,17 +2041,17 @@ static void recover_rbio(struct btrfs_raid_bio *rbio) /* * Read everything that hasn't failed. However this time we will - * not trust any cached sector. + * not trust any cached block. * As we may read out some stale data but higher layer is not reading * that stale part. * * So here we always re-read everything in recovery path. */ - for (total_sector_nr = 0; total_sector_nr < rbio->nr_sectors; - total_sector_nr++) { - int stripe = total_sector_nr / rbio->stripe_nsectors; - int sectornr = total_sector_nr % rbio->stripe_nsectors; - struct sector_ptr *sector; + for (total_block_nr = 0; total_block_nr < rbio->nr_blocks; + total_block_nr++) { + int stripe = total_block_nr / rbio->stripe_nblocks; + int blocknr = total_block_nr % rbio->stripe_nblocks; + struct block_ptr *block; /* * Skip the range which has error. 
It can be a range which is @@ -2059,18 +2059,18 @@ static void recover_rbio(struct btrfs_raid_bio *rbio) * device. */ if (!rbio->bioc->stripes[stripe].dev->bdev || - test_bit(total_sector_nr, rbio->error_bitmap)) { + test_bit(total_block_nr, rbio->error_bitmap)) { /* * Also set the error bit for missing device, which * may not yet have its error bit set. */ - set_bit(total_sector_nr, rbio->error_bitmap); + set_bit(total_block_nr, rbio->error_bitmap); continue; } - sector = rbio_stripe_sector(rbio, stripe, sectornr); - ret = rbio_add_io_sector(rbio, &bio_list, sector, stripe, - sectornr, REQ_OP_READ); + block = rbio_stripe_block(rbio, stripe, blocknr); + ret = rbio_add_io_block(rbio, &bio_list, block, stripe, + blocknr, REQ_OP_READ); if (ret < 0) { bio_list_put(&bio_list); goto out; @@ -2078,7 +2078,7 @@ static void recover_rbio(struct btrfs_raid_bio *rbio) } submit_read_wait_bio_list(rbio, &bio_list); - ret = recover_sectors(rbio); + ret = recover_blocks(rbio); out: rbio_orig_end_io(rbio, errno_to_blk_status(ret)); } @@ -2100,7 +2100,7 @@ static void recover_rbio_work_locked(struct work_struct *work) static void set_rbio_raid6_extra_error(struct btrfs_raid_bio *rbio, int mirror_num) { bool found = false; - int sector_nr; + int block_nr; /* * This is for RAID6 extra recovery tries, thus mirror number should @@ -2109,12 +2109,12 @@ static void set_rbio_raid6_extra_error(struct btrfs_raid_bio *rbio, int mirror_n * RAID5 methods. */ ASSERT(mirror_num > 2); - for (sector_nr = 0; sector_nr < rbio->stripe_nsectors; sector_nr++) { + for (block_nr = 0; block_nr < rbio->stripe_nblocks; block_nr++) { int found_errors; int faila; int failb; - found_errors = get_rbio_veritical_errors(rbio, sector_nr, + found_errors = get_rbio_vertical_errors(rbio, block_nr, &faila, &failb); /* This vertical stripe doesn't have errors. */ if (!found_errors) @@ -2134,7 +2134,7 @@ static void set_rbio_raid6_extra_error(struct btrfs_raid_bio *rbio, int mirror_n /* Set the extra bit in error bitmap. 
*/ if (failb >= 0) - set_bit(failb * rbio->stripe_nsectors + sector_nr, + set_bit(failb * rbio->stripe_nblocks + block_nr, rbio->error_bitmap); } @@ -2183,8 +2183,8 @@ static void fill_data_csums(struct btrfs_raid_bio *rbio) struct btrfs_root *csum_root = btrfs_csum_root(fs_info, rbio->bioc->full_stripe_logical); const u64 start = rbio->bioc->full_stripe_logical; - const u32 len = (rbio->nr_data * rbio->stripe_nsectors) << - fs_info->sectorsize_bits; + const u32 len = (rbio->nr_data * rbio->stripe_nblocks) << + fs_info->blocksize_bits; int ret; /* The rbio should not have its csum buffer initialized. */ @@ -2205,9 +2205,9 @@ static void fill_data_csums(struct btrfs_raid_bio *rbio) rbio->bioc->map_type & BTRFS_BLOCK_GROUP_METADATA) return; - rbio->csum_buf = kzalloc(rbio->nr_data * rbio->stripe_nsectors * + rbio->csum_buf = kzalloc(rbio->nr_data * rbio->stripe_nblocks * fs_info->csum_size, GFP_NOFS); - rbio->csum_bitmap = bitmap_zalloc(rbio->nr_data * rbio->stripe_nsectors, + rbio->csum_bitmap = bitmap_zalloc(rbio->nr_data * rbio->stripe_nblocks, GFP_NOFS); if (!rbio->csum_buf || !rbio->csum_bitmap) { ret = -ENOMEM; @@ -2218,7 +2218,7 @@ static void fill_data_csums(struct btrfs_raid_bio *rbio) rbio->csum_buf, rbio->csum_bitmap); if (ret < 0) goto error; - if (bitmap_empty(rbio->csum_bitmap, len >> fs_info->sectorsize_bits)) + if (bitmap_empty(rbio->csum_bitmap, len >> fs_info->blocksize_bits)) goto no_csum; return; @@ -2241,30 +2241,30 @@ static void fill_data_csums(struct btrfs_raid_bio *rbio) static int rmw_read_wait_recover(struct btrfs_raid_bio *rbio) { struct bio_list bio_list = BIO_EMPTY_LIST; - int total_sector_nr; + int total_block_nr; int ret = 0; /* * Fill the data csums we need for data verification. We need to fill * the csum_bitmap/csum_buf first, as our endio function will try to - * verify the data sectors. + * verify the data blocks. */ fill_data_csums(rbio); /* - * Build a list of bios to read all sectors (including data and P/Q). 
+ * Build a list of bios to read all blocks (including data and P/Q). * * This behavior is to compensate the later csum verification and recovery. */ - for (total_sector_nr = 0; total_sector_nr < rbio->nr_sectors; - total_sector_nr++) { - struct sector_ptr *sector; - int stripe = total_sector_nr / rbio->stripe_nsectors; - int sectornr = total_sector_nr % rbio->stripe_nsectors; + for (total_block_nr = 0; total_block_nr < rbio->nr_blocks; + total_block_nr++) { + struct block_ptr *block; + int stripe = total_block_nr / rbio->stripe_nblocks; + int blocknr = total_block_nr % rbio->stripe_nblocks; - sector = rbio_stripe_sector(rbio, stripe, sectornr); - ret = rbio_add_io_sector(rbio, &bio_list, sector, - stripe, sectornr, REQ_OP_READ); + block = rbio_stripe_block(rbio, stripe, blocknr); + ret = rbio_add_io_block(rbio, &bio_list, block, + stripe, blocknr, REQ_OP_READ); if (ret) { bio_list_put(&bio_list); return ret; @@ -2272,11 +2272,11 @@ static int rmw_read_wait_recover(struct btrfs_raid_bio *rbio) } /* - * We may or may not have any corrupted sectors (including missing dev - * and csum mismatch), just let recover_sectors() to handle them all. + * We may or may not have any corrupted blocks (including missing dev + * and csum mismatch), just let recover_blocks() to handle them all. */ submit_read_wait_bio_list(rbio, &bio_list); - return recover_sectors(rbio); + return recover_blocks(rbio); } static void raid_wait_write_end_io(struct bio *bio) @@ -2311,22 +2311,22 @@ static void submit_write_bios(struct btrfs_raid_bio *rbio, } /* - * To determine if we need to read any sector from the disk. + * To determine if we need to read any block from the disk. * Should only be utilized in RMW path, to skip cached rbio. 
*/ -static bool need_read_stripe_sectors(struct btrfs_raid_bio *rbio) +static bool need_read_stripe_blocks(struct btrfs_raid_bio *rbio) { int i; - for (i = 0; i < rbio->nr_data * rbio->stripe_nsectors; i++) { - struct sector_ptr *sector = &rbio->stripe_sectors[i]; + for (i = 0; i < rbio->nr_data * rbio->stripe_nblocks; i++) { + struct block_ptr *block = &rbio->stripe_blocks[i]; /* - * We have a sector which doesn't have page nor uptodate, + * We have a block which doesn't have page nor uptodate, * thus this rbio can not be cached one, as cached one must - * have all its data sectors present and uptodate. + * have all its data blocks present and uptodate. */ - if (!sector->page || !sector->uptodate) + if (!block->page || !block->uptodate) return true; } return false; @@ -2335,7 +2335,7 @@ static bool need_read_stripe_sectors(struct btrfs_raid_bio *rbio) static void rmw_rbio(struct btrfs_raid_bio *rbio) { struct bio_list bio_list; - int sectornr; + int blocknr; int ret = 0; /* @@ -2347,10 +2347,10 @@ static void rmw_rbio(struct btrfs_raid_bio *rbio) goto out; /* - * Either full stripe write, or we have every data sector already + * Either full stripe write, or we have every data block already * cached, can go to write path immediately. */ - if (!rbio_is_full(rbio) && need_read_stripe_sectors(rbio)) { + if (!rbio_is_full(rbio) && need_read_stripe_blocks(rbio)) { /* * Now we're doing sub-stripe write, also need all data stripes * to do the full RMW. 
@@ -2375,7 +2375,7 @@ static void rmw_rbio(struct btrfs_raid_bio *rbio) set_bit(RBIO_RMW_LOCKED_BIT, &rbio->flags); spin_unlock(&rbio->bio_list_lock); - bitmap_clear(rbio->error_bitmap, 0, rbio->nr_sectors); + bitmap_clear(rbio->error_bitmap, 0, rbio->nr_blocks); index_rbio_pages(rbio); @@ -2390,8 +2390,8 @@ static void rmw_rbio(struct btrfs_raid_bio *rbio) else clear_bit(RBIO_CACHE_READY_BIT, &rbio->flags); - for (sectornr = 0; sectornr < rbio->stripe_nsectors; sectornr++) - generate_pq_vertical(rbio, sectornr); + for (blocknr = 0; blocknr < rbio->stripe_nblocks; blocknr++) + generate_pq_vertical(rbio, blocknr); bio_list_init(&bio_list); ret = rmw_assemble_write_bios(rbio, &bio_list); @@ -2404,10 +2404,10 @@ static void rmw_rbio(struct btrfs_raid_bio *rbio) wait_event(rbio->io_wait, atomic_read(&rbio->stripes_pending) == 0); /* We may have more errors than our tolerance during the read. */ - for (sectornr = 0; sectornr < rbio->stripe_nsectors; sectornr++) { + for (blocknr = 0; blocknr < rbio->stripe_nblocks; blocknr++) { int found_errors; - found_errors = get_rbio_veritical_errors(rbio, sectornr, NULL, NULL); + found_errors = get_rbio_vertical_errors(rbio, blocknr, NULL, NULL); if (found_errors > rbio->bioc->max_errors) { ret = -EIO; break; @@ -2444,7 +2444,7 @@ static void rmw_rbio_work_locked(struct work_struct *work) struct btrfs_raid_bio *raid56_parity_alloc_scrub_rbio(struct bio *bio, struct btrfs_io_context *bioc, struct btrfs_device *scrub_dev, - unsigned long *dbitmap, int stripe_nsectors) + unsigned long *dbitmap, int stripe_nblocks) { struct btrfs_fs_info *fs_info = bioc->fs_info; struct btrfs_raid_bio *rbio; @@ -2474,7 +2474,7 @@ struct btrfs_raid_bio *raid56_parity_alloc_scrub_rbio(struct bio *bio, } ASSERT_RBIO_STRIPE(i < rbio->real_stripes, rbio, i); - bitmap_copy(&rbio->dbitmap, dbitmap, stripe_nsectors); + bitmap_copy(&rbio->dbitmap, dbitmap, stripe_nblocks); return rbio; } @@ -2484,16 +2484,16 @@ struct btrfs_raid_bio 
*raid56_parity_alloc_scrub_rbio(struct bio *bio, */ static int alloc_rbio_essential_pages(struct btrfs_raid_bio *rbio) { - const u32 sectorsize = rbio->bioc->fs_info->sectorsize; - int total_sector_nr; + const u32 blocksize = rbio->bioc->fs_info->blocksize; + int total_block_nr; - for (total_sector_nr = 0; total_sector_nr < rbio->nr_sectors; - total_sector_nr++) { + for (total_block_nr = 0; total_block_nr < rbio->nr_blocks; + total_block_nr++) { struct page *page; - int sectornr = total_sector_nr % rbio->stripe_nsectors; - int index = (total_sector_nr * sectorsize) >> PAGE_SHIFT; + int blocknr = total_block_nr % rbio->stripe_nblocks; + int index = (total_block_nr * blocksize) >> PAGE_SHIFT; - if (!test_bit(sectornr, &rbio->dbitmap)) + if (!test_bit(blocknr, &rbio->dbitmap)) continue; if (rbio->stripe_pages[index]) continue; @@ -2502,22 +2502,22 @@ static int alloc_rbio_essential_pages(struct btrfs_raid_bio *rbio) return -ENOMEM; rbio->stripe_pages[index] = page; } - index_stripe_sectors(rbio); + index_stripe_blocks(rbio); return 0; } static int finish_parity_scrub(struct btrfs_raid_bio *rbio) { struct btrfs_io_context *bioc = rbio->bioc; - const u32 sectorsize = bioc->fs_info->sectorsize; + const u32 blocksize = bioc->fs_info->blocksize; void **pointers = rbio->finish_pointers; unsigned long *pbitmap = &rbio->finish_pbitmap; int nr_data = rbio->nr_data; int stripe; - int sectornr; + int blocknr; bool has_qstripe; - struct sector_ptr p_sector = { 0 }; - struct sector_ptr q_sector = { 0 }; + struct block_ptr p_block = { 0 }; + struct block_ptr q_block = { 0 }; struct bio_list bio_list; int is_replace = 0; int ret; @@ -2537,7 +2537,7 @@ static int finish_parity_scrub(struct btrfs_raid_bio *rbio) */ if (bioc->replace_nr_stripes && bioc->replace_stripe_src == rbio->scrubp) { is_replace = 1; - bitmap_copy(pbitmap, &rbio->dbitmap, rbio->stripe_nsectors); + bitmap_copy(pbitmap, &rbio->dbitmap, rbio->stripe_nblocks); } /* @@ -2547,60 +2547,60 @@ static int 
finish_parity_scrub(struct btrfs_raid_bio *rbio) */ clear_bit(RBIO_CACHE_READY_BIT, &rbio->flags); - p_sector.page = alloc_page(GFP_NOFS); - if (!p_sector.page) + p_block.page = alloc_page(GFP_NOFS); + if (!p_block.page) return -ENOMEM; - p_sector.pgoff = 0; - p_sector.uptodate = 1; + p_block.pgoff = 0; + p_block.uptodate = 1; if (has_qstripe) { /* RAID6, allocate and map temp space for the Q stripe */ - q_sector.page = alloc_page(GFP_NOFS); - if (!q_sector.page) { - __free_page(p_sector.page); - p_sector.page = NULL; + q_block.page = alloc_page(GFP_NOFS); + if (!q_block.page) { + __free_page(p_block.page); + p_block.page = NULL; return -ENOMEM; } - q_sector.pgoff = 0; - q_sector.uptodate = 1; - pointers[rbio->real_stripes - 1] = kmap_local_page(q_sector.page); + q_block.pgoff = 0; + q_block.uptodate = 1; + pointers[rbio->real_stripes - 1] = kmap_local_page(q_block.page); } - bitmap_clear(rbio->error_bitmap, 0, rbio->nr_sectors); + bitmap_clear(rbio->error_bitmap, 0, rbio->nr_blocks); /* Map the parity stripe just once */ - pointers[nr_data] = kmap_local_page(p_sector.page); + pointers[nr_data] = kmap_local_page(p_block.page); - for_each_set_bit(sectornr, &rbio->dbitmap, rbio->stripe_nsectors) { - struct sector_ptr *sector; + for_each_set_bit(blocknr, &rbio->dbitmap, rbio->stripe_nblocks) { + struct block_ptr *block; void *parity; /* first collect one page from each data stripe */ for (stripe = 0; stripe < nr_data; stripe++) { - sector = sector_in_rbio(rbio, stripe, sectornr, 0); - pointers[stripe] = kmap_local_page(sector->page) + - sector->pgoff; + block = block_in_rbio(rbio, stripe, blocknr, 0); + pointers[stripe] = kmap_local_page(block->page) + + block->pgoff; } if (has_qstripe) { assert_rbio(rbio); /* RAID6, call the library function to fill in our P/Q */ - raid6_call.gen_syndrome(rbio->real_stripes, sectorsize, + raid6_call.gen_syndrome(rbio->real_stripes, blocksize, pointers); } else { /* raid5 */ - memcpy(pointers[nr_data], pointers[0], sectorsize); - 
run_xor(pointers + 1, nr_data - 1, sectorsize); + memcpy(pointers[nr_data], pointers[0], blocksize); + run_xor(pointers + 1, nr_data - 1, blocksize); } /* Check scrubbing parity and repair it */ - sector = rbio_stripe_sector(rbio, rbio->scrubp, sectornr); - parity = kmap_local_page(sector->page) + sector->pgoff; - if (memcmp(parity, pointers[rbio->scrubp], sectorsize) != 0) - memcpy(parity, pointers[rbio->scrubp], sectorsize); + block = rbio_stripe_block(rbio, rbio->scrubp, blocknr); + parity = kmap_local_page(block->page) + block->pgoff; + if (memcmp(parity, pointers[rbio->scrubp], blocksize) != 0) + memcpy(parity, pointers[rbio->scrubp], blocksize); else /* Parity is right, needn't writeback */ - bitmap_clear(&rbio->dbitmap, sectornr, 1); + bitmap_clear(&rbio->dbitmap, blocknr, 1); kunmap_local(parity); for (stripe = nr_data - 1; stripe >= 0; stripe--) @@ -2608,12 +2608,12 @@ static int finish_parity_scrub(struct btrfs_raid_bio *rbio) } kunmap_local(pointers[nr_data]); - __free_page(p_sector.page); - p_sector.page = NULL; - if (q_sector.page) { + __free_page(p_block.page); + p_block.page = NULL; + if (q_block.page) { kunmap_local(pointers[rbio->real_stripes - 1]); - __free_page(q_sector.page); - q_sector.page = NULL; + __free_page(q_block.page); + q_block.page = NULL; } /* @@ -2621,12 +2621,12 @@ static int finish_parity_scrub(struct btrfs_raid_bio *rbio) * higher layers (the bio_list in our rbio) and our p/q. Ignore * everything else. 
*/ - for_each_set_bit(sectornr, &rbio->dbitmap, rbio->stripe_nsectors) { - struct sector_ptr *sector; + for_each_set_bit(blocknr, &rbio->dbitmap, rbio->stripe_nblocks) { + struct block_ptr *block; - sector = rbio_stripe_sector(rbio, rbio->scrubp, sectornr); - ret = rbio_add_io_sector(rbio, &bio_list, sector, rbio->scrubp, - sectornr, REQ_OP_WRITE); + block = rbio_stripe_block(rbio, rbio->scrubp, blocknr); + ret = rbio_add_io_block(rbio, &bio_list, block, rbio->scrubp, + blocknr, REQ_OP_WRITE); if (ret) goto cleanup; } @@ -2639,13 +2639,13 @@ static int finish_parity_scrub(struct btrfs_raid_bio *rbio) * the target device. Check we have a valid source stripe number. */ ASSERT_RBIO(rbio->bioc->replace_stripe_src >= 0, rbio); - for_each_set_bit(sectornr, pbitmap, rbio->stripe_nsectors) { - struct sector_ptr *sector; + for_each_set_bit(blocknr, pbitmap, rbio->stripe_nblocks) { + struct block_ptr *block; - sector = rbio_stripe_sector(rbio, rbio->scrubp, sectornr); - ret = rbio_add_io_sector(rbio, &bio_list, sector, + block = rbio_stripe_block(rbio, rbio->scrubp, blocknr); + ret = rbio_add_io_block(rbio, &bio_list, block, rbio->real_stripes, - sectornr, REQ_OP_WRITE); + blocknr, REQ_OP_WRITE); if (ret) goto cleanup; } @@ -2670,11 +2670,11 @@ static int recover_scrub_rbio(struct btrfs_raid_bio *rbio) { void **pointers = NULL; void **unmap_array = NULL; - int sector_nr; + int block_nr; int ret = 0; /* - * @pointers array stores the pointer for each sector. + * @pointers array stores the pointer for each block. * * @unmap_array stores copy of pointers that does not get reordered * during reconstruction so that kunmap_local works. 
@@ -2686,13 +2686,13 @@ static int recover_scrub_rbio(struct btrfs_raid_bio *rbio) goto out; } - for (sector_nr = 0; sector_nr < rbio->stripe_nsectors; sector_nr++) { + for (block_nr = 0; block_nr < rbio->stripe_nblocks; block_nr++) { int dfail = 0, failp = -1; int faila; int failb; int found_errors; - found_errors = get_rbio_veritical_errors(rbio, sector_nr, + found_errors = get_rbio_vertical_errors(rbio, block_nr, &faila, &failb); if (found_errors > rbio->bioc->max_errors) { ret = -EIO; @@ -2740,7 +2740,7 @@ static int recover_scrub_rbio(struct btrfs_raid_bio *rbio) goto out; } - ret = recover_vertical(rbio, sector_nr, pointers, unmap_array); + ret = recover_vertical(rbio, block_nr, pointers, unmap_array); if (ret < 0) goto out; } @@ -2753,39 +2753,39 @@ static int recover_scrub_rbio(struct btrfs_raid_bio *rbio) static int scrub_assemble_read_bios(struct btrfs_raid_bio *rbio) { struct bio_list bio_list = BIO_EMPTY_LIST; - int total_sector_nr; + int total_block_nr; int ret = 0; /* Build a list of bios to read all the missing parts. */ - for (total_sector_nr = 0; total_sector_nr < rbio->nr_sectors; - total_sector_nr++) { - int sectornr = total_sector_nr % rbio->stripe_nsectors; - int stripe = total_sector_nr / rbio->stripe_nsectors; - struct sector_ptr *sector; + for (total_block_nr = 0; total_block_nr < rbio->nr_blocks; + total_block_nr++) { + int blocknr = total_block_nr % rbio->stripe_nblocks; + int stripe = total_block_nr / rbio->stripe_nblocks; + struct block_ptr *block; /* No data in the vertical stripe, no need to read. */ - if (!test_bit(sectornr, &rbio->dbitmap)) + if (!test_bit(blocknr, &rbio->dbitmap)) continue; /* - * We want to find all the sectors missing from the rbio and - * read them from the disk. If sector_in_rbio() finds a sector + * We want to find all the blocks missing from the rbio and + * read them from the disk. If block_in_rbio() finds a block * in the bio list we don't need to read it off the stripe. 
*/ - sector = sector_in_rbio(rbio, stripe, sectornr, 1); - if (sector) + block = block_in_rbio(rbio, stripe, blocknr, 1); + if (block) continue; - sector = rbio_stripe_sector(rbio, stripe, sectornr); + block = rbio_stripe_block(rbio, stripe, blocknr); /* - * The bio cache may have handed us an uptodate sector. If so, + * The bio cache may have handed us an uptodate block. If so, * use it. */ - if (sector->uptodate) + if (block->uptodate) continue; - ret = rbio_add_io_sector(rbio, &bio_list, sector, stripe, - sectornr, REQ_OP_READ); + ret = rbio_add_io_block(rbio, &bio_list, block, stripe, + blocknr, REQ_OP_READ); if (ret) { bio_list_put(&bio_list); return ret; @@ -2798,34 +2798,34 @@ static int scrub_assemble_read_bios(struct btrfs_raid_bio *rbio) static void scrub_rbio(struct btrfs_raid_bio *rbio) { - int sector_nr; + int block_nr; int ret; ret = alloc_rbio_essential_pages(rbio); if (ret) goto out; - bitmap_clear(rbio->error_bitmap, 0, rbio->nr_sectors); + bitmap_clear(rbio->error_bitmap, 0, rbio->nr_blocks); ret = scrub_assemble_read_bios(rbio); if (ret < 0) goto out; - /* We may have some failures, recover the failed sectors first. */ + /* We may have some failures, recover the failed blocks first. */ ret = recover_scrub_rbio(rbio); if (ret < 0) goto out; /* - * We have every sector properly prepared. Can finish the scrub + * We have every block properly prepared. Can finish the scrub * and writeback the good content. 
*/ ret = finish_parity_scrub(rbio); wait_event(rbio->io_wait, atomic_read(&rbio->stripes_pending) == 0); - for (sector_nr = 0; sector_nr < rbio->stripe_nsectors; sector_nr++) { + for (block_nr = 0; block_nr < rbio->stripe_nblocks; block_nr++) { int found_errors; - found_errors = get_rbio_veritical_errors(rbio, sector_nr, NULL, NULL); + found_errors = get_rbio_vertical_errors(rbio, block_nr, NULL, NULL); if (found_errors > rbio->bioc->max_errors) { ret = -EIO; break; @@ -2859,8 +2859,8 @@ void raid56_parity_cache_data_pages(struct btrfs_raid_bio *rbio, const u64 offset_in_full_stripe = data_logical - rbio->bioc->full_stripe_logical; const int page_index = offset_in_full_stripe >> PAGE_SHIFT; - const u32 sectorsize = rbio->bioc->fs_info->sectorsize; - const u32 sectors_per_page = PAGE_SIZE / sectorsize; + const u32 blocksize = rbio->bioc->fs_info->blocksize; + const u32 blocks_per_page = PAGE_SIZE / blocksize; int ret; /* @@ -2884,9 +2884,9 @@ void raid56_parity_cache_data_pages(struct btrfs_raid_bio *rbio, struct page *src = data_pages[page_nr]; memcpy_page(dst, 0, src, 0, PAGE_SIZE); - for (int sector_nr = sectors_per_page * page_index; - sector_nr < sectors_per_page * (page_index + 1); - sector_nr++) - rbio->stripe_sectors[sector_nr].uptodate = true; + for (int block_nr = blocks_per_page * page_index; + block_nr < blocks_per_page * (page_index + 1); + block_nr++) + rbio->stripe_blocks[block_nr].uptodate = true; } } diff --git a/fs/btrfs/raid56.h b/fs/btrfs/raid56.h index 0d7b4c2fb6ae..353db840ad17 100644 --- a/fs/btrfs/raid56.h +++ b/fs/btrfs/raid56.h @@ -16,7 +16,7 @@ #include "volumes.h" struct page; -struct sector_ptr; +struct block_ptr; struct btrfs_fs_info; enum btrfs_rbio_ops { @@ -67,8 +67,8 @@ struct btrfs_raid_bio { /* How many pages there are for the full stripe including P/Q */ u16 nr_pages; - /* How many sectors there are for the full stripe including P/Q */ - u16 nr_sectors; + /* How many blocks there are for the full stripe including P/Q */ + u16 
nr_blocks; /* Number of data stripes (no p/q) */ u8 nr_data; @@ -79,8 +79,8 @@ struct btrfs_raid_bio { /* How many pages there are for each stripe */ u8 stripe_npages; - /* How many sectors there are for each stripe */ - u8 stripe_nsectors; + /* How many blocks there are for each stripe */ + u8 stripe_nblocks; /* Stripe number that we're scrubbing */ u8 scrubp; @@ -100,7 +100,7 @@ struct btrfs_raid_bio { /* Bitmap to record which horizontal stripe has data */ unsigned long dbitmap; - /* Allocated with stripe_nsectors-many bits for finish_*() calls */ + /* Allocated with stripe_nblocks-many bits for finish_*() calls */ unsigned long finish_pbitmap; /* @@ -115,38 +115,38 @@ struct btrfs_raid_bio { */ struct page **stripe_pages; - /* Pointers to the sectors in the bio_list, for faster lookup */ - struct sector_ptr *bio_sectors; + /* Pointers to the blocks in the bio_list, for faster lookup */ + struct block_ptr *bio_blocks; /* - * For subpage support, we need to map each sector to above + * For subpage support, we need to map each block to above * stripe_pages. */ - struct sector_ptr *stripe_sectors; + struct block_ptr *stripe_blocks; /* Allocated with real_stripes-many pointers for finish_*() calls */ void **finish_pointers; /* * The bitmap recording where IO errors happened. - * Each bit is corresponding to one sector in either bio_sectors[] or - * stripe_sectors[] array. + * Each bit is corresponding to one block in either bio_blocks[] or + * stripe_blocks[] array. * - * The reason we don't use another bit in sector_ptr is, we have two - * arrays of sectors, and a lot of IO can use sectors in both arrays. + * The reason we don't use another bit in block_ptr is, we have two + * arrays of blocks, and a lot of IO can use blocks in both arrays. * Thus making it much harder to iterate. */ unsigned long *error_bitmap; /* * Checksum buffer if the rbio is for data. The buffer should cover - * all data sectors (excluding P/Q sectors). 
+ all data blocks (excluding P/Q blocks). */ u8 *csum_buf; /* - * Each bit represents if the corresponding sector has data csum found. - * Should only cover data sectors (excluding P/Q sectors). + * Each bit represents if the corresponding block has data csum found. + * Should only cover data blocks (excluding P/Q blocks). */ unsigned long *csum_bitmap; }; @@ -198,7 +198,7 @@ void raid56_parity_write(struct bio *bio, struct btrfs_io_context *bioc); struct btrfs_raid_bio *raid56_parity_alloc_scrub_rbio(struct bio *bio, struct btrfs_io_context *bioc, struct btrfs_device *scrub_dev, - unsigned long *dbitmap, int stripe_nsectors); + unsigned long *dbitmap, int stripe_nblocks); void raid56_parity_submit_scrub_rbio(struct btrfs_raid_bio *rbio); void raid56_parity_cache_data_pages(struct btrfs_raid_bio *rbio,
b=WGNFE8NAg+zHj4k8pZNle/OxbLzTlHDg2pZU1TD4200VUvjg4iBxxBaV9XZDkBC/rrUDjkOtpxE7u0Q0U4jpaV+yfdX9poJQna8vw3BYXcKKetn2flU70NKQeGgHRKlVk/YBIV0unEFHBHLPfguj5kaxAD9lBEPlhtRwnTIkpFE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=suse.com; spf=pass smtp.mailfrom=suse.com; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b=vQy0P7SQ; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b=vQy0P7SQ; arc=none smtp.client-ip=195.135.223.130 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=suse.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b="vQy0P7SQ"; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b="vQy0P7SQ" Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id 6C83D21167 for ; Wed, 18 Dec 2024 09:42:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1; t=1734514929; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=THURlZX5p7gb8h13WI0SVDxcmy2zvEtdWx2ITMhi7XM=; b=vQy0P7SQMj6Y6VkiE5trDOf42JJlY5JOqmu1hK/8gdE3O4Pi8zQguvjy03bOwTTSTykugM 5IsTRi5wmRVZ+qZwenA2rXJ01FzCAPxkDJS/X7njKpZWVPJMw2Iw3SVQ/hHC6S8BvDiF9s Ot6g0rm8EzMSwuaP9Dpi9z+4Wi4JjEY= Authentication-Results: smtp-out1.suse.de; none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1; t=1734514929; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc: 
mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=THURlZX5p7gb8h13WI0SVDxcmy2zvEtdWx2ITMhi7XM=; b=vQy0P7SQMj6Y6VkiE5trDOf42JJlY5JOqmu1hK/8gdE3O4Pi8zQguvjy03bOwTTSTykugM 5IsTRi5wmRVZ+qZwenA2rXJ01FzCAPxkDJS/X7njKpZWVPJMw2Iw3SVQ/hHC6S8BvDiF9s Ot6g0rm8EzMSwuaP9Dpi9z+4Wi4JjEY= Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 9A87B132EA for ; Wed, 18 Dec 2024 09:42:08 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id AELuFfCYYmdmSwAAD6G6ig (envelope-from ) for ; Wed, 18 Dec 2024 09:42:08 +0000 From: Qu Wenruo To: linux-btrfs@vger.kernel.org Subject: [PATCH 12/18] btrfs: migrate defrag.c to use block size terminology Date: Wed, 18 Dec 2024 20:11:28 +1030 Message-ID: X-Mailer: git-send-email 2.47.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-btrfs@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Score: -2.80 X-Spamd-Result: default: False [-2.80 / 50.00]; BAYES_HAM(-3.00)[100.00%]; MID_CONTAINS_FROM(1.00)[]; NEURAL_HAM_LONG(-1.00)[-1.000]; R_MISSING_CHARSET(0.50)[]; NEURAL_HAM_SHORT(-0.20)[-1.000]; MIME_GOOD(-0.10)[text/plain]; FUZZY_BLOCKED(0.00)[rspamd.com]; RCVD_VIA_SMTP_AUTH(0.00)[]; RCPT_COUNT_ONE(0.00)[1]; ARC_NA(0.00)[]; DKIM_SIGNED(0.00)[suse.com:s=susede1]; DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo,suse.com:mid,suse.com:email]; FROM_EQ_ENVFROM(0.00)[]; FROM_HAS_DN(0.00)[]; MIME_TRACE(0.00)[0:+]; RCVD_COUNT_TWO(0.00)[2]; TO_MATCH_ENVRCPT_ALL(0.00)[]; TO_DN_NONE(0.00)[]; PREVIOUSLY_DELIVERED(0.00)[linux-btrfs@vger.kernel.org]; RCVD_TLS_ALL(0.00)[] X-Spam-Flag: NO X-Spam-Level: Straightforward 
rename from "sector" to "block". Signed-off-by: Qu Wenruo --- fs/btrfs/defrag.c | 52 +++++++++++++++++++++++------------------------ 1 file changed, 26 insertions(+), 26 deletions(-) diff --git a/fs/btrfs/defrag.c b/fs/btrfs/defrag.c index 968dae953948..7a96505957b3 100644 --- a/fs/btrfs/defrag.c +++ b/fs/btrfs/defrag.c @@ -272,7 +272,7 @@ static int btrfs_run_defrag_inode(struct btrfs_fs_info *fs_info, if (ret < 0) goto cleanup; - cur = max(cur + fs_info->sectorsize, range.start); + cur = max(cur + fs_info->blocksize, range.start); goto again; cleanup: @@ -749,14 +749,14 @@ static struct extent_map *defrag_lookup_extent(struct inode *inode, u64 start, struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree; struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree; struct extent_map *em; - const u32 sectorsize = BTRFS_I(inode)->root->fs_info->sectorsize; + const u32 blocksize = BTRFS_I(inode)->root->fs_info->blocksize; /* * Hopefully we have this extent in the tree already, try without the * full extent lock. */ read_lock(&em_tree->lock); - em = lookup_extent_mapping(em_tree, start, sectorsize); + em = lookup_extent_mapping(em_tree, start, blocksize); read_unlock(&em_tree->lock); /* @@ -775,7 +775,7 @@ static struct extent_map *defrag_lookup_extent(struct inode *inode, u64 start, if (!em) { struct extent_state *cached = NULL; - u64 end = start + sectorsize - 1; + u64 end = start + blocksize - 1; /* Get the big lock and read metadata off disk. 
*/ if (!locked) @@ -1199,7 +1199,7 @@ static int defrag_one_range(struct btrfs_inode *inode, u64 start, u32 len, struct defrag_target_range *tmp; LIST_HEAD(target_list); struct folio **folios; - const u32 sectorsize = inode->root->fs_info->sectorsize; + const u32 blocksize = inode->root->fs_info->blocksize; u64 last_index = (start + len - 1) >> PAGE_SHIFT; u64 start_index = start >> PAGE_SHIFT; unsigned int nr_pages = last_index - start_index + 1; @@ -1207,7 +1207,7 @@ static int defrag_one_range(struct btrfs_inode *inode, u64 start, u32 len, int i; ASSERT(nr_pages <= CLUSTER_SIZE / PAGE_SIZE); - ASSERT(IS_ALIGNED(start, sectorsize) && IS_ALIGNED(len, sectorsize)); + ASSERT(IS_ALIGNED(start, blocksize) && IS_ALIGNED(len, blocksize)); folios = kcalloc(nr_pages, sizeof(struct folio *), GFP_NOFS); if (!folios) @@ -1270,11 +1270,11 @@ static int defrag_one_cluster(struct btrfs_inode *inode, struct file_ra_state *ra, u64 start, u32 len, u32 extent_thresh, u64 newer_than, bool do_compress, - unsigned long *sectors_defragged, - unsigned long max_sectors, + unsigned long *blocks_defragged, + unsigned long max_blocks, u64 *last_scanned_ret) { - const u32 sectorsize = inode->root->fs_info->sectorsize; + const u32 blocksize = inode->root->fs_info->blocksize; struct defrag_target_range *entry; struct defrag_target_range *tmp; LIST_HEAD(target_list); @@ -1290,14 +1290,14 @@ static int defrag_one_cluster(struct btrfs_inode *inode, u32 range_len = entry->len; /* Reached or beyond the limit */ - if (max_sectors && *sectors_defragged >= max_sectors) { + if (max_blocks && *blocks_defragged >= max_blocks) { ret = 1; break; } - if (max_sectors) + if (max_blocks) range_len = min_t(u32, range_len, - (max_sectors - *sectors_defragged) * sectorsize); + (max_blocks - *blocks_defragged) * blocksize); /* * If defrag_one_range() has updated last_scanned_ret, @@ -1315,7 +1315,7 @@ static int defrag_one_cluster(struct btrfs_inode *inode, /* * Here we may not defrag any range if holes are 
punched before * we locked the pages. - * But that's fine, it only affects the @sectors_defragged + * But that's fine, it only affects the @blocks_defragged * accounting. */ ret = defrag_one_range(inode, entry->start, range_len, @@ -1323,8 +1323,8 @@ static int defrag_one_cluster(struct btrfs_inode *inode, last_scanned_ret); if (ret < 0) break; - *sectors_defragged += range_len >> - inode->root->fs_info->sectorsize_bits; + *blocks_defragged += range_len >> + inode->root->fs_info->blocksize_bits; } out: list_for_each_entry_safe(entry, tmp, &target_list, list) { @@ -1343,11 +1343,11 @@ static int defrag_one_cluster(struct btrfs_inode *inode, * @ra: readahead state * @range: defrag options including range and flags * @newer_than: minimum transid to defrag - * @max_to_defrag: max number of sectors to be defragged, if 0, the whole inode + * @max_to_defrag: max number of blocks to be defragged, if 0, the whole inode * will be defragged. * * Return <0 for error. - * Return >=0 for the number of sectors defragged, and range->start will be updated + * Return >=0 for the number of blocks defragged, and range->start will be updated * to indicate the file offset where next defrag should be started at. * (Mostly for autodefrag, which sets @max_to_defrag thus we may exit early without * defragging all the range). 
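The accounting in defrag_one_cluster() above converts a defragged byte length into a block count with a right shift by blocksize_bits, and clamps each candidate range against the remaining block budget before calling defrag_one_range(). A minimal userspace sketch of that arithmetic follows; BLOCKSIZE, BLOCKSIZE_BITS, and the helper names are illustrative stand-ins for the fs_info fields, not kernel API:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for fs_info->blocksize / fs_info->blocksize_bits (4K blocks). */
#define BLOCKSIZE      4096u
#define BLOCKSIZE_BITS 12u	/* log2(4096) */

/* Byte length to block count, as in `range_len >> blocksize_bits`. */
static inline uint64_t bytes_to_blocks(uint64_t len)
{
	return len >> BLOCKSIZE_BITS;
}

/*
 * Clamp a candidate range length so no more than
 * `max_blocks - blocks_defragged` blocks are defragged,
 * mirroring the min_t() clamp in defrag_one_cluster().
 */
static inline uint32_t clamp_range_len(uint32_t range_len,
				       uint64_t blocks_defragged,
				       uint64_t max_blocks)
{
	uint64_t budget = (max_blocks - blocks_defragged) * BLOCKSIZE;

	return range_len < budget ? range_len : (uint32_t)budget;
}
```

With a 4K block size, a 64KiB range against a remaining budget of two blocks is clamped to 8KiB, and only whole blocks are counted toward the limit.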
@@ -1357,7 +1357,7 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra, u64 newer_than, unsigned long max_to_defrag) { struct btrfs_fs_info *fs_info = inode_to_fs_info(inode); - unsigned long sectors_defragged = 0; + unsigned long blocks_defragged = 0; u64 isize = i_size_read(inode); u64 cur; u64 last_byte; @@ -1394,8 +1394,8 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra, } /* Align the range */ - cur = round_down(range->start, fs_info->sectorsize); - last_byte = round_up(last_byte, fs_info->sectorsize) - 1; + cur = round_down(range->start, fs_info->blocksize); + last_byte = round_up(last_byte, fs_info->blocksize) - 1; /* * Make writeback start from the beginning of the range, so that the @@ -1406,7 +1406,7 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra, inode->i_mapping->writeback_index = start_index; while (cur < last_byte) { - const unsigned long prev_sectors_defragged = sectors_defragged; + const unsigned long prev_blocks_defragged = blocks_defragged; u64 last_scanned = cur; u64 cluster_end; @@ -1434,10 +1434,10 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra, BTRFS_I(inode)->defrag_compress = compress_type; ret = defrag_one_cluster(BTRFS_I(inode), ra, cur, cluster_end + 1 - cur, extent_thresh, - newer_than, do_compress, &sectors_defragged, + newer_than, do_compress, &blocks_defragged, max_to_defrag, &last_scanned); - if (sectors_defragged > prev_sectors_defragged) + if (blocks_defragged > prev_blocks_defragged) balance_dirty_pages_ratelimited(inode->i_mapping); btrfs_inode_unlock(BTRFS_I(inode), 0); @@ -1456,9 +1456,9 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra, * in next run. */ range->start = cur; - if (sectors_defragged) { + if (blocks_defragged) { /* - * We have defragged some sectors, for compression case they + * We have defragged some blocks, for compression case they * need to be written back immediately.
*/ if (range->flags & BTRFS_DEFRAG_RANGE_START_IO) { @@ -1471,7 +1471,7 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra, btrfs_set_fs_incompat(fs_info, COMPRESS_LZO); else if (range->compress_type == BTRFS_COMPRESS_ZSTD) btrfs_set_fs_incompat(fs_info, COMPRESS_ZSTD); - ret = sectors_defragged; + ret = blocks_defragged; } if (do_compress) { btrfs_inode_lock(BTRFS_I(inode), 0); From patchwork Wed Dec 18 09:41:29 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qu Wenruo X-Patchwork-Id: 13913313 Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9886619882B for ; Wed, 18 Dec 2024 09:42:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=195.135.223.130 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734514934; cv=none; b=UkCoaAQjDE25XfVhiJzP56olRxjFcfVsoZUY6A98jh2MDRj4v8W0n/b/e4MFlRjHB0N6lxjExmjVsJ6R+vtvhuM0t8/pCRgd/4/Dvw2Zve+GF+TwY7r7vYLZ8tRFxKpKzAnvtU6jSO8dkNQSFleEbMCMPbspfFB738d8tPsWCtw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734514934; c=relaxed/simple; bh=V58cabt4qzSK9LtAklmd7HS7mf4/5f7nZxhTvx/kDac=; h=From:To:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=XVdqi8J0Ij+Jjhl70Yn/owISgdE1LlLf15yCIxyKgvkthhm4+gbx/r1ngI0IfjF0yT40Vq+GFKz+7ylAPjfIunGt2amGcjw1SN3m6hnZGNLMrMIVhVSZinUGz8nRW/uGG6RYxZnkgXGPDNmxYUVX6wPPX32Ny0RHrR2yU1QMVOI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=suse.com; spf=pass smtp.mailfrom=suse.com; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b=goMgoou6; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b=goMgoou6; arc=none smtp.client-ip=195.135.223.130 
Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=suse.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b="goMgoou6"; dkim=pass (1024-bit key) header.d=suse.com header.i=@suse.com header.b="goMgoou6" Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org [IPv6:2a07:de40:b281:104:10:150:64:97]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id BC0F82116B for ; Wed, 18 Dec 2024 09:42:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1; t=1734514930; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=gampsxrfR9f7Q09hTb7MMFyRTVXUpnDisbF47NW+AWs=; b=goMgoou6yRagXYbv8S6lULqkeuqwe0CSGfVKCJFMRNCnXup4W/4scwXwN1jMtxr2DScHRi IVr1ChcwfAMLIhlD7+1jR4SiuhAiWNyXSIN79puObkcAvxBxBysR6HNENS+ztHWfk/eD6X OOO5Q8zp4n4jpFbkg99jjmdOiCHCK2U= Authentication-Results: smtp-out1.suse.de; dkim=pass header.d=suse.com header.s=susede1 header.b=goMgoou6 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1; t=1734514930; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=gampsxrfR9f7Q09hTb7MMFyRTVXUpnDisbF47NW+AWs=; b=goMgoou6yRagXYbv8S6lULqkeuqwe0CSGfVKCJFMRNCnXup4W/4scwXwN1jMtxr2DScHRi IVr1ChcwfAMLIhlD7+1jR4SiuhAiWNyXSIN79puObkcAvxBxBysR6HNENS+ztHWfk/eD6X OOO5Q8zp4n4jpFbkg99jjmdOiCHCK2U= Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1]) (using TLSv1.3 with cipher 
TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id E9F62132EA for ; Wed, 18 Dec 2024 09:42:09 +0000 (UTC) Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167]) by imap1.dmz-prg2.suse.org with ESMTPSA id iGheKfGYYmdmSwAAD6G6ig (envelope-from ) for ; Wed, 18 Dec 2024 09:42:09 +0000 From: Qu Wenruo To: linux-btrfs@vger.kernel.org Subject: [PATCH 13/18] btrfs: migrate bio.[ch] to use block size terminology Date: Wed, 18 Dec 2024 20:11:29 +1030 Message-ID: X-Mailer: git-send-email 2.47.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-btrfs@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Rspamd-Queue-Id: BC0F82116B X-Spam-Score: -3.01 X-Rspamd-Action: no action X-Spamd-Result: default: False [-3.01 / 50.00]; BAYES_HAM(-3.00)[100.00%]; NEURAL_HAM_LONG(-1.00)[-1.000]; MID_CONTAINS_FROM(1.00)[]; R_MISSING_CHARSET(0.50)[]; NEURAL_HAM_SHORT(-0.20)[-1.000]; R_DKIM_ALLOW(-0.20)[suse.com:s=susede1]; MIME_GOOD(-0.10)[text/plain]; MX_GOOD(-0.01)[]; TO_MATCH_ENVRCPT_ALL(0.00)[]; RCVD_VIA_SMTP_AUTH(0.00)[]; FROM_EQ_ENVFROM(0.00)[]; ARC_NA(0.00)[]; FROM_HAS_DN(0.00)[]; RCPT_COUNT_ONE(0.00)[1]; PREVIOUSLY_DELIVERED(0.00)[linux-btrfs@vger.kernel.org]; RCVD_TLS_ALL(0.00)[]; DBL_BLOCKED_OPENRESOLVER(0.00)[suse.com:dkim,suse.com:mid,suse.com:email,imap1.dmz-prg2.suse.org:rdns,imap1.dmz-prg2.suse.org:helo]; DKIM_SIGNED(0.00)[suse.com:s=susede1]; FUZZY_BLOCKED(0.00)[rspamd.com]; TO_DN_NONE(0.00)[]; RCVD_COUNT_TWO(0.00)[2]; MIME_TRACE(0.00)[0:+]; DKIM_TRACE(0.00)[suse.com:+] X-Rspamd-Server: rspamd1.dmz-prg2.suse.org X-Spam-Flag: NO X-Spam-Level: Despite the regular sectorsize rename, also rename BTRFS_MAX_BIO_SECTORS BTRFS_MAX_BIO_BLOCKS. 
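The renamed macro caps how many blocks a single bio may carry, so the per-bio checksum array stays bounded; the direct-IO path below clamps read lengths against that cap. A small illustrative sketch of the clamp (the macro value matches the patch; `clamp_dio_read_len` and the `blocksize` parameter are assumed names, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* Matches the renamed macro: 256 bio_vecs, hence 256 blocks per bio. */
#define BTRFS_MAX_BIO_BLOCKS 256u

/*
 * Clamp a read length the way btrfs_dio_iomap_begin() does, so the
 * contiguous checksum array covers at most BTRFS_MAX_BIO_BLOCKS blocks.
 */
static uint64_t clamp_dio_read_len(uint64_t len, uint32_t blocksize)
{
	uint64_t max = (uint64_t)blocksize * BTRFS_MAX_BIO_BLOCKS;

	return len < max ? len : max;
}
```

At a 4K block size this caps a single bio's data at 1MiB, so a 4MiB direct read is split into several bios.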
Signed-off-by: Qu Wenruo --- fs/btrfs/bio.c | 24 ++++++++++++------------ fs/btrfs/bio.h | 4 ++-- fs/btrfs/direct-io.c | 2 +- 3 files changed, 15 insertions(+), 15 deletions(-) diff --git a/fs/btrfs/bio.c b/fs/btrfs/bio.c index bc80ee4f95a5..ea327a67a2bc 100644 --- a/fs/btrfs/bio.c +++ b/fs/btrfs/bio.c @@ -198,7 +198,7 @@ static void btrfs_end_repair_bio(struct btrfs_bio *repair_bbio, do { mirror = prev_repair_mirror(fbio, mirror); btrfs_repair_io_failure(fs_info, btrfs_ino(inode), - repair_bbio->file_offset, fs_info->sectorsize, + repair_bbio->file_offset, fs_info->blocksize, repair_bbio->saved_iter.bi_sector << SECTOR_SHIFT, page_folio(bv->bv_page), bv->bv_offset, mirror); } while (mirror != fbio->bbio->mirror_num); @@ -209,20 +209,20 @@ static void btrfs_end_repair_bio(struct btrfs_bio *repair_bbio, } /* - * Try to kick off a repair read to the next available mirror for a bad sector. + * Try to kick off a repair read to the next available mirror for a bad block. * * This primarily tries to recover good data to serve the actual read request, * but also tries to write the good data back to the bad mirror(s) when a * read succeeded to restore the redundancy. 
*/ -static struct btrfs_failed_bio *repair_one_sector(struct btrfs_bio *failed_bbio, +static struct btrfs_failed_bio *repair_one_block(struct btrfs_bio *failed_bbio, u32 bio_offset, struct bio_vec *bv, struct btrfs_failed_bio *fbio) { struct btrfs_inode *inode = failed_bbio->inode; struct btrfs_fs_info *fs_info = inode->root->fs_info; - const u32 sectorsize = fs_info->sectorsize; + const u32 blocksize = fs_info->blocksize; const u64 logical = (failed_bbio->saved_iter.bi_sector << SECTOR_SHIFT); struct btrfs_bio *repair_bbio; struct bio *repair_bio; @@ -232,7 +232,7 @@ static struct btrfs_failed_bio *repair_one_sector(struct btrfs_bio *failed_bbio, btrfs_debug(fs_info, "repair read error: read error at %llu", failed_bbio->file_offset + bio_offset); - num_copies = btrfs_num_copies(fs_info, logical, sectorsize); + num_copies = btrfs_num_copies(fs_info, logical, blocksize); if (num_copies == 1) { btrfs_debug(fs_info, "no copy to repair from"); failed_bbio->bio.bi_status = BLK_STS_IOERR; @@ -268,7 +268,7 @@ static void btrfs_check_read_bio(struct btrfs_bio *bbio, struct btrfs_device *de { struct btrfs_inode *inode = bbio->inode; struct btrfs_fs_info *fs_info = inode->root->fs_info; - u32 sectorsize = fs_info->sectorsize; + u32 blocksize = fs_info->blocksize; struct bvec_iter *iter = &bbio->saved_iter; blk_status_t status = bbio->bio.bi_status; struct btrfs_failed_bio *fbio = NULL; @@ -292,12 +292,12 @@ static void btrfs_check_read_bio(struct btrfs_bio *bbio, struct btrfs_device *de while (iter->bi_size) { struct bio_vec bv = bio_iter_iovec(&bbio->bio, *iter); - bv.bv_len = min(bv.bv_len, sectorsize); + bv.bv_len = min(bv.bv_len, blocksize); if (status || !btrfs_data_csum_ok(bbio, dev, offset, &bv)) - fbio = repair_one_sector(bbio, offset, &bv, fbio); + fbio = repair_one_block(bbio, offset, &bv, fbio); - bio_advance_iter_single(&bbio->bio, iter, sectorsize); - offset += sectorsize; + bio_advance_iter_single(&bbio->bio, iter, blocksize); + offset += blocksize; } if 
(bbio->csum != bbio->csum_inline) @@ -655,10 +655,10 @@ static u64 btrfs_append_map_length(struct btrfs_bio *bbio, u64 map_length) if (sector_offset) { /* * bio_split_rw_at() could split at a size smaller than our - * sectorsize and thus cause unaligned I/Os. Fix that by + * blocksize and thus cause unaligned I/Os. Fix that by * always rounding down to the nearest boundary. */ - return ALIGN_DOWN(sector_offset << SECTOR_SHIFT, bbio->fs_info->sectorsize); + return ALIGN_DOWN(sector_offset << SECTOR_SHIFT, bbio->fs_info->blocksize); } return map_length; } diff --git a/fs/btrfs/bio.h b/fs/btrfs/bio.h index e2fe16074ad6..25a3ba7e0bfb 100644 --- a/fs/btrfs/bio.h +++ b/fs/btrfs/bio.h @@ -19,11 +19,11 @@ struct btrfs_inode; #define BTRFS_BIO_INLINE_CSUM_SIZE 64 /* - * Maximum number of sectors for a single bio to limit the size of the + * Maximum number of blocks for a single bio to limit the size of the * checksum array. This matches the number of bio_vecs per bio and thus the * I/O size for buffered I/O. */ -#define BTRFS_MAX_BIO_SECTORS (256) +#define BTRFS_MAX_BIO_BLOCKS (256) typedef void (*btrfs_bio_end_io_t)(struct btrfs_bio *bbio); diff --git a/fs/btrfs/direct-io.c b/fs/btrfs/direct-io.c index 8567af46e16f..3229f07f5d6d 100644 --- a/fs/btrfs/direct-io.c +++ b/fs/btrfs/direct-io.c @@ -385,7 +385,7 @@ static int btrfs_dio_iomap_begin(struct inode *inode, loff_t start, * to allocate a contiguous array for the checksums. 
*/ if (!write) - len = min_t(u64, len, fs_info->sectorsize * BTRFS_MAX_BIO_SECTORS); + len = min_t(u64, len, fs_info->sectorsize * BTRFS_MAX_BIO_BLOCKS); lockstart = start; lockend = start + len - 1;

From patchwork Wed Dec 18 09:41:30 2024
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 13913315
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 14/18] btrfs: migrate the remaining sector size users to use block size terminology
Date: Wed, 18 Dec 2024 20:11:30 +1030
Message-ID: <36297da5c8f2583ea449444e504262d9863f94fc.1734514696.git.wqu@suse.com>

Those files are minor users of the old sector size terminology, so just migrate them all in one go. Note that btrfs_device::sector_size is not renamed, as we keep the "sector" usage for block devices.
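The distinction the note draws is that the filesystem block size (4K, 16K, ...) is independent of the fixed 512-byte sector unit that block-device bios are addressed in. A tiny sketch of the conversion between the two units, assuming 512-byte device sectors as in the kernel's SECTOR_SHIFT; the helper name is hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* Block-device sectors stay 512 bytes regardless of fs block size. */
#define SECTOR_SHIFT 9u

/* Convert a count of filesystem blocks into 512-byte device sectors. */
static uint64_t fs_blocks_to_device_sectors(uint64_t nr_blocks,
					    uint32_t blocksize)
{
	return (nr_blocks * (uint64_t)blocksize) >> SECTOR_SHIFT;
}
```

One 4K filesystem block spans eight device sectors, which is why the device-facing field keeps its "sector" name while everything filesystem-facing moves to "block".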
Signed-off-by: Qu Wenruo --- fs/btrfs/accessors.h | 2 +- fs/btrfs/block-group.c | 4 ++-- fs/btrfs/delalloc-space.c | 26 +++++++++++++------------- fs/btrfs/delayed-inode.c | 2 +- fs/btrfs/delayed-ref.c | 12 ++++++------ fs/btrfs/delayed-ref.h | 4 ++-- fs/btrfs/dev-replace.c | 12 ++++++------ fs/btrfs/direct-io.c | 6 +++--- fs/btrfs/extent-tree.c | 14 +++++++------- fs/btrfs/extent_map.h | 2 +- fs/btrfs/fiemap.c | 6 +++--- fs/btrfs/inode-item.c | 8 ++++---- fs/btrfs/ioctl.c | 14 +++++++------- fs/btrfs/print-tree.c | 14 +++++++------- fs/btrfs/qgroup.c | 10 +++++----- fs/btrfs/qgroup.h | 2 +- fs/btrfs/reflink.c | 22 +++++++++++----------- fs/btrfs/relocation.c | 16 ++++++++-------- fs/btrfs/send.c | 36 ++++++++++++++++++------------------ fs/btrfs/super.c | 10 +++++----- fs/btrfs/sysfs.c | 4 ++-- fs/btrfs/tree-log.c | 6 +++--- fs/btrfs/volumes.c | 26 +++++++++++++------------- fs/btrfs/zoned.c | 6 +++--- 24 files changed, 132 insertions(+), 132 deletions(-) diff --git a/fs/btrfs/accessors.h b/fs/btrfs/accessors.h index 7a7e0ef69973..a796ec3fcb67 100644 --- a/fs/btrfs/accessors.h +++ b/fs/btrfs/accessors.h @@ -131,7 +131,7 @@ static inline void btrfs_set_device_total_bytes(const struct extent_buffer *eb, u64 val) { static_assert(sizeof(u64) == sizeof_field(struct btrfs_dev_item, total_bytes)); - WARN_ON(!IS_ALIGNED(val, eb->fs_info->sectorsize)); + WARN_ON(!IS_ALIGNED(val, eb->fs_info->blocksize)); btrfs_set_64(eb, s, offsetof(struct btrfs_dev_item, total_bytes), val); } diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c index 5be029734cfa..e1dc9345310f 100644 --- a/fs/btrfs/block-group.c +++ b/fs/btrfs/block-group.c @@ -489,7 +489,7 @@ static void fragment_free_space(struct btrfs_block_group *block_group) u64 start = block_group->start; u64 len = block_group->length; u64 chunk = block_group->flags & BTRFS_BLOCK_GROUP_METADATA ? 
-			  fs_info->nodesize : fs_info->sectorsize;
+			  fs_info->nodesize : fs_info->blocksize;
 	u64 step = chunk << 1;
 
 	while (len > chunk) {
@@ -3267,7 +3267,7 @@ static int cache_save_setup(struct btrfs_block_group *block_group,
 
 	cache_size = 1;
 	cache_size *= 16;
-	cache_size *= fs_info->sectorsize;
+	cache_size *= fs_info->blocksize;
 
 	ret = btrfs_check_data_free_space(BTRFS_I(inode), &data_reserved, 0,
					  cache_size, false);
diff --git a/fs/btrfs/delalloc-space.c b/fs/btrfs/delalloc-space.c
index 88e900e5a43d..c18de463c02a 100644
--- a/fs/btrfs/delalloc-space.c
+++ b/fs/btrfs/delalloc-space.c
@@ -117,8 +117,8 @@ int btrfs_alloc_data_chunk_ondemand(const struct btrfs_inode *inode, u64 bytes)
 	struct btrfs_fs_info *fs_info = root->fs_info;
 	enum btrfs_reserve_flush_enum flush = BTRFS_RESERVE_FLUSH_DATA;
 
-	/* Make sure bytes are sectorsize aligned */
-	bytes = ALIGN(bytes, fs_info->sectorsize);
+	/* Make sure bytes are blocksize aligned */
+	bytes = ALIGN(bytes, fs_info->blocksize);
 
 	if (btrfs_is_free_space_inode(inode))
 		flush = BTRFS_RESERVE_FLUSH_FREE_SPACE_INODE;
@@ -135,9 +135,9 @@ int btrfs_check_data_free_space(struct btrfs_inode *inode,
 	int ret;
 
 	/* align the range */
-	len = round_up(start + len, fs_info->sectorsize) -
-	      round_down(start, fs_info->sectorsize);
-	start = round_down(start, fs_info->sectorsize);
+	len = round_up(start + len, fs_info->blocksize) -
+	      round_down(start, fs_info->blocksize);
+	start = round_down(start, fs_info->blocksize);
 
 	if (noflush)
 		flush = BTRFS_RESERVE_NO_FLUSH;
@@ -173,7 +173,7 @@ void btrfs_free_reserved_data_space_noquota(struct btrfs_fs_info *fs_info,
 {
 	struct btrfs_space_info *data_sinfo;
 
-	ASSERT(IS_ALIGNED(len, fs_info->sectorsize));
+	ASSERT(IS_ALIGNED(len, fs_info->blocksize));
 
 	data_sinfo = fs_info->data_sinfo;
 	btrfs_space_info_free_bytes_may_use(data_sinfo, len);
@@ -191,10 +191,10 @@ void btrfs_free_reserved_data_space(struct btrfs_inode *inode,
 {
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 
-	/* Make sure the range is aligned to sectorsize */
-	len = round_up(start + len, fs_info->sectorsize) -
-	      round_down(start, fs_info->sectorsize);
-	start = round_down(start, fs_info->sectorsize);
+	/* Make sure the range is aligned to blocksize */
+	len = round_up(start + len, fs_info->blocksize) -
+	      round_down(start, fs_info->blocksize);
+	start = round_down(start, fs_info->blocksize);
 
 	btrfs_free_reserved_data_space_noquota(fs_info, len);
 	btrfs_qgroup_free_data(inode, reserved, start, len, NULL);
@@ -329,8 +329,8 @@ int btrfs_delalloc_reserve_metadata(struct btrfs_inode *inode, u64 num_bytes,
 		flush = BTRFS_RESERVE_FLUSH_LIMIT;
 	}
 
-	num_bytes = ALIGN(num_bytes, fs_info->sectorsize);
-	disk_num_bytes = ALIGN(disk_num_bytes, fs_info->sectorsize);
+	num_bytes = ALIGN(num_bytes, fs_info->blocksize);
+	disk_num_bytes = ALIGN(disk_num_bytes, fs_info->blocksize);
 
 	/*
 	 * We always want to do it this way, every other way is wrong and ends
@@ -397,7 +397,7 @@ void btrfs_delalloc_release_metadata(struct btrfs_inode *inode, u64 num_bytes,
 {
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 
-	num_bytes = ALIGN(num_bytes, fs_info->sectorsize);
+	num_bytes = ALIGN(num_bytes, fs_info->blocksize);
 
 	spin_lock(&inode->lock);
 	if (!(inode->flags & BTRFS_INODE_NODATASUM))
 		inode->csum_bytes -= num_bytes;
diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
index 508bdbae29a0..024254229dda 100644
--- a/fs/btrfs/delayed-inode.c
+++ b/fs/btrfs/delayed-inode.c
@@ -1887,7 +1887,7 @@ int btrfs_fill_inode(struct inode *inode, u32 *rdev)
 	i_gid_write(inode, btrfs_stack_inode_gid(inode_item));
 	btrfs_i_size_write(BTRFS_I(inode), btrfs_stack_inode_size(inode_item));
 	btrfs_inode_set_file_extent_range(BTRFS_I(inode), 0,
-			round_up(i_size_read(inode), fs_info->sectorsize));
+			round_up(i_size_read(inode), fs_info->blocksize));
 	inode->i_mode = btrfs_stack_inode_mode(inode_item);
 	set_nlink(inode, btrfs_stack_inode_nlink(inode_item));
 	inode_set_bytes(inode, btrfs_stack_inode_nbytes(inode_item));
diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
index 30f7079fa28e..1c88970b1cab 100644
--- a/fs/btrfs/delayed-ref.c
+++ b/fs/btrfs/delayed-ref.c
@@ -496,7 +496,7 @@ struct btrfs_delayed_ref_head *btrfs_select_ref_head(
 	spin_lock(&delayed_refs->lock);
again:
-	start_index = (delayed_refs->run_delayed_start >> fs_info->sectorsize_bits);
+	start_index = (delayed_refs->run_delayed_start >> fs_info->blocksize_bits);
 	xa_for_each_start(&delayed_refs->head_refs, found_index, head, start_index) {
 		if (!head->processing) {
 			found_head = true;
@@ -546,7 +546,7 @@ void btrfs_delete_ref_head(const struct btrfs_fs_info *fs_info,
 			   struct btrfs_delayed_ref_root *delayed_refs,
 			   struct btrfs_delayed_ref_head *head)
 {
-	const unsigned long index = (head->bytenr >> fs_info->sectorsize_bits);
+	const unsigned long index = (head->bytenr >> fs_info->blocksize_bits);
 
 	lockdep_assert_held(&delayed_refs->lock);
 	lockdep_assert_held(&head->lock);
@@ -825,7 +825,7 @@ add_delayed_ref_head(struct btrfs_trans_handle *trans,
 	struct btrfs_fs_info *fs_info = trans->fs_info;
 	struct btrfs_delayed_ref_head *existing;
 	struct btrfs_delayed_ref_root *delayed_refs;
-	const unsigned long index = (head_ref->bytenr >> fs_info->sectorsize_bits);
+	const unsigned long index = (head_ref->bytenr >> fs_info->blocksize_bits);
 	bool qrecord_inserted = false;
 
 	delayed_refs = &trans->transaction->delayed_refs;
@@ -1006,7 +1006,7 @@ static int add_delayed_ref(struct btrfs_trans_handle *trans,
 	struct btrfs_delayed_ref_head *new_head_ref;
 	struct btrfs_delayed_ref_root *delayed_refs;
 	struct btrfs_qgroup_extent_record *record = NULL;
-	const unsigned long index = (generic_ref->bytenr >> fs_info->sectorsize_bits);
+	const unsigned long index = (generic_ref->bytenr >> fs_info->blocksize_bits);
 	bool qrecord_reserved = false;
 	bool qrecord_inserted;
 	int action = generic_ref->action;
@@ -1121,7 +1121,7 @@ int btrfs_add_delayed_extent_op(struct btrfs_trans_handle *trans,
 				u64 bytenr, u64 num_bytes, u8 level,
 				struct btrfs_delayed_extent_op *extent_op)
 {
-	const unsigned long index = (bytenr >> trans->fs_info->sectorsize_bits);
+	const unsigned long index = (bytenr >> trans->fs_info->blocksize_bits);
 	struct btrfs_delayed_ref_head *head_ref;
 	struct btrfs_delayed_ref_head *head_ref_ret;
 	struct btrfs_delayed_ref_root *delayed_refs;
@@ -1185,7 +1185,7 @@ btrfs_find_delayed_ref_head(const struct btrfs_fs_info *fs_info,
 			    struct btrfs_delayed_ref_root *delayed_refs,
 			    u64 bytenr)
 {
-	const unsigned long index = (bytenr >> fs_info->sectorsize_bits);
+	const unsigned long index = (bytenr >> fs_info->blocksize_bits);
 
 	lockdep_assert_held(&delayed_refs->lock);
 
diff --git a/fs/btrfs/delayed-ref.h b/fs/btrfs/delayed-ref.h
index a35067cebb97..aa2616a08e1a 100644
--- a/fs/btrfs/delayed-ref.h
+++ b/fs/btrfs/delayed-ref.h
@@ -202,7 +202,7 @@ struct btrfs_delayed_ref_root {
 	/*
 	 * Track head references.
 	 * The keys correspond to the logical address of the extent ("bytenr")
-	 * right shifted by fs_info->sectorsize_bits. This is both to get a more
+	 * right shifted by fs_info->blocksize_bits. This is both to get a more
 	 * dense index space (optimizes xarray structure) and because indexes in
 	 * xarrays are of "unsigned long" type, meaning they are 32 bits wide on
 	 * 32 bits platforms, limiting the extent range to 4G which is too low
@@ -214,7 +214,7 @@ struct btrfs_delayed_ref_root {
 	/*
 	 * Track dirty extent records.
 	 * The keys correspond to the logical address of the extent ("bytenr")
-	 * right shifted by fs_info->sectorsize_bits, for same reasons as above.
+	 * right shifted by fs_info->blocksize_bits, for same reasons as above.
 	 */
 	struct xarray dirty_extents;
 
diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
index ac8e97ed13f7..727b619bf280 100644
--- a/fs/btrfs/dev-replace.c
+++ b/fs/btrfs/dev-replace.c
@@ -216,9 +216,9 @@ int btrfs_init_dev_replace(struct btrfs_fs_info *fs_info)
			&dev_replace->tgtdev->dev_state);
 
 		WARN_ON(fs_info->fs_devices->rw_devices == 0);
-		dev_replace->tgtdev->io_width = fs_info->sectorsize;
-		dev_replace->tgtdev->io_align = fs_info->sectorsize;
-		dev_replace->tgtdev->sector_size = fs_info->sectorsize;
+		dev_replace->tgtdev->io_width = fs_info->blocksize;
+		dev_replace->tgtdev->io_align = fs_info->blocksize;
+		dev_replace->tgtdev->sector_size = fs_info->blocksize;
 		dev_replace->tgtdev->fs_info = fs_info;
 		set_bit(BTRFS_DEV_STATE_IN_FS_METADATA,
			&dev_replace->tgtdev->dev_state);
@@ -302,9 +302,9 @@ static int btrfs_init_dev_replace_tgtdev(struct btrfs_fs_info *fs_info,
 	set_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state);
 	device->generation = 0;
-	device->io_width = fs_info->sectorsize;
-	device->io_align = fs_info->sectorsize;
-	device->sector_size = fs_info->sectorsize;
+	device->io_width = fs_info->blocksize;
+	device->io_align = fs_info->blocksize;
+	device->sector_size = fs_info->blocksize;
 	device->total_bytes = btrfs_device_get_total_bytes(srcdev);
 	device->disk_total_bytes = btrfs_device_get_disk_total_bytes(srcdev);
 	device->bytes_used = btrfs_device_get_bytes_used(srcdev);
diff --git a/fs/btrfs/direct-io.c b/fs/btrfs/direct-io.c
index 3229f07f5d6d..843bba0b995e 100644
--- a/fs/btrfs/direct-io.c
+++ b/fs/btrfs/direct-io.c
@@ -183,7 +183,7 @@ static struct extent_map *btrfs_new_extent_direct(struct btrfs_inode *inode,
 	alloc_hint = btrfs_get_extent_allocation_hint(inode, start, len);
again:
-	ret = btrfs_reserve_extent(root, len, len, fs_info->sectorsize,
+	ret = btrfs_reserve_extent(root, len, len, fs_info->blocksize,
				   0, alloc_hint, &ins, 1, 1);
 	if (ret == -EAGAIN) {
 		ASSERT(btrfs_is_zoned(fs_info));
@@ -385,7 +385,7 @@ static int btrfs_dio_iomap_begin(struct inode *inode, loff_t start,
 	 * to allocate a contiguous array for the checksums.
 	 */
 	if (!write)
-		len = min_t(u64, len, fs_info->sectorsize * BTRFS_MAX_BIO_BLOCKS);
+		len = min_t(u64, len, fs_info->blocksize * BTRFS_MAX_BIO_BLOCKS);
 
 	lockstart = start;
 	lockend = start + len - 1;
@@ -778,7 +778,7 @@ static struct iomap_dio *btrfs_dio_write(struct kiocb *iocb, struct iov_iter *it
 static ssize_t check_direct_IO(struct btrfs_fs_info *fs_info,
			       const struct iov_iter *iter, loff_t offset)
 {
-	const u32 blocksize_mask = fs_info->sectorsize - 1;
+	const u32 blocksize_mask = fs_info->blocksize - 1;
 
 	if (offset & blocksize_mask)
 		return -EINVAL;
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index e849fc34d8d9..bd282a760c51 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -356,9 +356,9 @@ int btrfs_get_extent_inline_ref_type(const struct extent_buffer *eb,
			ASSERT(fs_info);
			/*
			 * Every shared one has parent tree block,
-			 * which must be aligned to sector size.
+			 * which must be aligned to block size.
			 */
-			if (offset && IS_ALIGNED(offset, fs_info->sectorsize))
+			if (offset && IS_ALIGNED(offset, fs_info->blocksize))
				return type;
 		}
 	} else if (is_data == BTRFS_REF_TYPE_DATA) {
@@ -368,10 +368,10 @@ int btrfs_get_extent_inline_ref_type(const struct extent_buffer *eb,
			ASSERT(fs_info);
			/*
			 * Every shared one has parent tree block,
-			 * which must be aligned to sector size.
+			 * which must be aligned to block size.
			 */
			if (offset &&
-			    IS_ALIGNED(offset, fs_info->sectorsize))
+			    IS_ALIGNED(offset, fs_info->blocksize))
				return type;
 		}
 	} else {
@@ -4363,7 +4363,7 @@ static noinline int find_free_extent(struct btrfs_root *root,
 	struct btrfs_space_info *space_info;
 	bool full_search = false;
 
-	WARN_ON(ffe_ctl->num_bytes < fs_info->sectorsize);
+	WARN_ON(ffe_ctl->num_bytes < fs_info->blocksize);
 
 	ffe_ctl->search_start = 0;
 	/* For clustered allocation */
@@ -4666,7 +4666,7 @@ int btrfs_reserve_extent(struct btrfs_root *root, u64 ram_bytes,
 	flags = get_alloc_profile_by_root(root, is_data);
again:
-	WARN_ON(num_bytes < fs_info->sectorsize);
+	WARN_ON(num_bytes < fs_info->blocksize);
 
 	ffe_ctl.ram_bytes = ram_bytes;
 	ffe_ctl.num_bytes = num_bytes;
@@ -4685,7 +4685,7 @@ int btrfs_reserve_extent(struct btrfs_root *root, u64 ram_bytes,
 		if (!final_tried && ins->offset) {
 			num_bytes = min(num_bytes >> 1, ins->offset);
 			num_bytes = round_down(num_bytes,
-					       fs_info->sectorsize);
+					       fs_info->blocksize);
 			num_bytes = max(num_bytes, min_alloc_size);
 			ram_bytes = num_bytes;
 			if (num_bytes == min_alloc_size)
diff --git a/fs/btrfs/extent_map.h b/fs/btrfs/extent_map.h
index cd123b266b64..2a7412f294e1 100644
--- a/fs/btrfs/extent_map.h
+++ b/fs/btrfs/extent_map.h
@@ -54,7 +54,7 @@ struct extent_map {
 	 * Length of the file extent.
 	 *
 	 * For non-inlined file extents it's btrfs_file_extent_item::num_bytes.
-	 * For inline extents it's sectorsize, since inline data starts at
+	 * For inline extents it's blocksize, since inline data starts at
 	 * offsetof(struct btrfs_file_extent_item, disk_bytenr) thus
 	 * btrfs_file_extent_item::num_bytes is not valid.
 	 */
diff --git a/fs/btrfs/fiemap.c b/fs/btrfs/fiemap.c
index b80c07ad8c5e..37020fa980bf 100644
--- a/fs/btrfs/fiemap.c
+++ b/fs/btrfs/fiemap.c
@@ -641,7 +641,7 @@ static int extent_fiemap(struct btrfs_inode *inode,
 	u64 prev_extent_end;
 	u64 range_start;
 	u64 range_end;
-	const u64 sectorsize = inode->root->fs_info->sectorsize;
+	const u64 blocksize = inode->root->fs_info->blocksize;
 	bool stopped = false;
 	int ret;
 
@@ -657,8 +657,8 @@ static int extent_fiemap(struct btrfs_inode *inode,
 	}
 
restart:
-	range_start = round_down(start, sectorsize);
-	range_end = round_up(start + len, sectorsize);
+	range_start = round_down(start, blocksize);
+	range_end = round_up(start + len, blocksize);
 	prev_extent_end = range_start;
 
 	lock_extent(&inode->io_tree, range_start, range_end, &cached_state);
diff --git a/fs/btrfs/inode-item.c b/fs/btrfs/inode-item.c
index 29572dfaf878..7fd7bcdda7a7 100644
--- a/fs/btrfs/inode-item.c
+++ b/fs/btrfs/inode-item.c
@@ -582,8 +582,8 @@ int btrfs_truncate_inode_items(struct btrfs_trans_handle *trans,
					btrfs_file_extent_num_bytes(leaf, fi);
 
				extent_num_bytes = ALIGN(new_size - found_key.offset,
-							 fs_info->sectorsize);
-				clear_start = ALIGN(new_size, fs_info->sectorsize);
+							 fs_info->blocksize);
+				clear_start = ALIGN(new_size, fs_info->blocksize);
				btrfs_set_file_extent_num_bytes(leaf, fi,
								extent_num_bytes);
@@ -627,10 +627,10 @@ int btrfs_truncate_inode_items(struct btrfs_trans_handle *trans,
			} else {
				/*
				 * Inline extents are special, we just treat
-				 * them as a full sector worth in the file
+				 * them as a full block worth in the file
				 * extent tree just for simplicity sake.
				 */
-				clear_len = fs_info->sectorsize;
+				clear_len = fs_info->blocksize;
			}
 
			control->sub_bytes += item_end + 1 - new_size;
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 7872de140489..888f7b97434c 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -457,9 +457,9 @@ static noinline int btrfs_ioctl_fitrim(struct btrfs_fs_info *fs_info,
 	/*
	 * NOTE: Don't truncate the range using super->total_bytes.  Bytenr of
	 * block group is in the logical address space, which can be any
-	 * sectorsize aligned bytenr in the range [0, U64_MAX].
+	 * blocksize aligned bytenr in the range [0, U64_MAX].
	 */
-	if (range.len < fs_info->sectorsize)
+	if (range.len < fs_info->blocksize)
 		return -EINVAL;
 
 	range.minlen = max(range.minlen, minlen);
@@ -1155,7 +1155,7 @@ static noinline int btrfs_ioctl_resize(struct file *file,
 		goto out_finish;
 	}
 
-	new_size = round_down(new_size, fs_info->sectorsize);
+	new_size = round_down(new_size, fs_info->blocksize);
 
 	if (new_size > old_size) {
 		trans = btrfs_start_transaction(root, 0);
@@ -2781,8 +2781,8 @@ static long btrfs_ioctl_fs_info(struct btrfs_fs_info *fs_info,
 	memcpy(&fi_args->fsid, fs_devices->fsid, sizeof(fi_args->fsid));
 	fi_args->nodesize = fs_info->nodesize;
-	fi_args->sectorsize = fs_info->sectorsize;
-	fi_args->clone_alignment = fs_info->sectorsize;
+	fi_args->sectorsize = fs_info->blocksize;
+	fi_args->clone_alignment = fs_info->blocksize;
 
 	if (flags_in & BTRFS_FS_INFO_FLAG_CSUM_INFO) {
 		fi_args->csum_type = btrfs_super_csum_type(fs_info->super_copy);
@@ -4489,7 +4489,7 @@ static int btrfs_ioctl_encoded_read(struct file *file, void __user *argp,
 	bool unlocked = false;
 	u64 start, lockend, count;
 
-	start = ALIGN_DOWN(kiocb.ki_pos, fs_info->sectorsize);
+	start = ALIGN_DOWN(kiocb.ki_pos, fs_info->blocksize);
 	lockend = start + BTRFS_MAX_UNCOMPRESSED - 1;
 
 	if (args.compression)
@@ -4865,7 +4865,7 @@ static int btrfs_uring_encoded_read(struct io_uring_cmd *cmd, unsigned int issue
 	if (issue_flags & IO_URING_F_NONBLOCK)
 		kiocb.ki_flags |= IOCB_NOWAIT;
 
-	start = ALIGN_DOWN(pos, fs_info->sectorsize);
+	start = ALIGN_DOWN(pos, fs_info->blocksize);
 	lockend = start + BTRFS_MAX_UNCOMPRESSED - 1;
 
 	ret = btrfs_encoded_read(&kiocb, &iter, &args, &cached_state,
diff --git a/fs/btrfs/print-tree.c b/fs/btrfs/print-tree.c
index fc821aa446f0..5c5428329490 100644
--- a/fs/btrfs/print-tree.c
+++ b/fs/btrfs/print-tree.c
@@ -150,10 +150,10 @@ static void print_extent_item(const struct extent_buffer *eb, int slot, int type
			 * offset is supposed to be a tree block which
			 * must be aligned to nodesize.
			 */
-			if (!IS_ALIGNED(offset, eb->fs_info->sectorsize))
+			if (!IS_ALIGNED(offset, eb->fs_info->blocksize))
				pr_info(
-			"\t\t\t(parent %llu not aligned to sectorsize %u)\n",
-					offset, eb->fs_info->sectorsize);
+			"\t\t\t(parent %llu not aligned to blocksize %u)\n",
+					offset, eb->fs_info->blocksize);
			break;
 		case BTRFS_EXTENT_DATA_REF_KEY:
			dref = (struct btrfs_extent_data_ref *)(&iref->offset);
@@ -165,12 +165,12 @@ static void print_extent_item(const struct extent_buffer *eb, int slot, int type
				offset, btrfs_shared_data_ref_count(eb, sref));
			/*
			 * Offset is supposed to be a tree block which must be
-			 * aligned to sectorsize.
+			 * aligned to blocksize.
			 */
-			if (!IS_ALIGNED(offset, eb->fs_info->sectorsize))
+			if (!IS_ALIGNED(offset, eb->fs_info->blocksize))
				pr_info(
-			"\t\t\t(parent %llu not aligned to sectorsize %u)\n",
-					offset, eb->fs_info->sectorsize);
+			"\t\t\t(parent %llu not aligned to blocksize %u)\n",
+					offset, eb->fs_info->blocksize);
			break;
 		case BTRFS_EXTENT_OWNER_REF_KEY:
			oref = (struct btrfs_extent_owner_ref *)(&iref->offset);
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index 993b5e803699..b66f63bf7b02 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -2015,7 +2015,7 @@ int btrfs_qgroup_trace_extent_nolock(struct btrfs_fs_info *fs_info,
				     u64 bytenr)
 {
 	struct btrfs_qgroup_extent_record *existing, *ret;
-	const unsigned long index = (bytenr >> fs_info->sectorsize_bits);
+	const unsigned long index = (bytenr >> fs_info->blocksize_bits);
 
 	if (!btrfs_qgroup_full_accounting(fs_info))
 		return 1;
@@ -2150,7 +2150,7 @@ int btrfs_qgroup_trace_extent(struct btrfs_trans_handle *trans, u64 bytenr,
 	struct btrfs_fs_info *fs_info = trans->fs_info;
 	struct btrfs_qgroup_extent_record *record;
 	struct btrfs_delayed_ref_root *delayed_refs = &trans->transaction->delayed_refs;
-	const unsigned long index = (bytenr >> fs_info->sectorsize_bits);
+	const unsigned long index = (bytenr >> fs_info->blocksize_bits);
 	int ret;
 
 	if (!btrfs_qgroup_full_accounting(fs_info) || bytenr == 0 || num_bytes == 0)
@@ -3048,7 +3048,7 @@ int btrfs_qgroup_account_extents(struct btrfs_trans_handle *trans)
 	delayed_refs = &trans->transaction->delayed_refs;
 	qgroup_to_skip = delayed_refs->qgroup_to_skip;
 	xa_for_each(&delayed_refs->dirty_extents, index, record) {
-		const u64 bytenr = (((u64)index) << fs_info->sectorsize_bits);
+		const u64 bytenr = (((u64)index) << fs_info->blocksize_bits);
 
 		num_dirty_extents++;
 		trace_btrfs_qgroup_account_extents(fs_info, record, bytenr);
@@ -4317,8 +4317,8 @@ static int qgroup_free_reserved_data(struct btrfs_inode *inode,
 	int ret;
 
 	extent_changeset_init(&changeset);
-	len = round_up(start + len, root->fs_info->sectorsize);
-	start = round_down(start, root->fs_info->sectorsize);
+	len = round_up(start + len, root->fs_info->blocksize);
+	start = round_down(start, root->fs_info->blocksize);
 
 	ULIST_ITER_INIT(&uiter);
 	while ((unode = ulist_next(&reserved->range_changed, &uiter))) {
diff --git a/fs/btrfs/qgroup.h b/fs/btrfs/qgroup.h
index e233cc79af18..2b23df1b777b 100644
--- a/fs/btrfs/qgroup.h
+++ b/fs/btrfs/qgroup.h
@@ -130,7 +130,7 @@ struct btrfs_qgroup_extent_record {
 	/*
	 * The bytenr of the extent is given by its index in the dirty_extents
	 * xarray of struct btrfs_delayed_ref_root left shifted by
-	 * fs_info->sectorsize_bits.
+	 * fs_info->blocksize_bits.
	 */
 	u64 num_bytes;
 
diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c
index f0824c948cb7..a5804e403a5e 100644
--- a/fs/btrfs/reflink.c
+++ b/fs/btrfs/reflink.c
@@ -61,7 +61,7 @@ static int copy_inline_to_page(struct btrfs_inode *inode,
			       const u8 comp_type)
 {
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
-	const u32 block_size = fs_info->sectorsize;
+	const u32 block_size = fs_info->blocksize;
 	const u64 range_end = file_offset + block_size - 1;
 	const size_t inline_size = size - btrfs_file_extent_calc_inline_size(0);
 	char *data_start = inline_data + btrfs_file_extent_calc_inline_size(0);
@@ -178,7 +178,7 @@ static int clone_copy_inline_extent(struct inode *dst,
 	struct btrfs_fs_info *fs_info = inode_to_fs_info(dst);
 	struct btrfs_root *root = BTRFS_I(dst)->root;
 	const u64 aligned_end = ALIGN(new_key->offset + datal,
-				      fs_info->sectorsize);
+				      fs_info->blocksize);
 	struct btrfs_trans_handle *trans = NULL;
 	struct btrfs_drop_extents_args drop_args = { 0 };
 	int ret;
@@ -511,17 +511,17 @@ static int btrfs_clone(struct inode *src, struct inode *inode,
			ASSERT(type == BTRFS_FILE_EXTENT_INLINE);
			/*
			 * Inline extents always have to start at file offset 0
-			 * and can never be bigger then the sector size. We can
+			 * and can never be bigger then the block size. We can
			 * never clone only parts of an inline extent, since all
-			 * reflink operations must start at a sector size aligned
+			 * reflink operations must start at a block size aligned
			 * offset, and the length must be aligned too or end at
			 * the i_size (which implies the whole inlined data).
			 */
			ASSERT(key.offset == 0);
-			ASSERT(datal <= fs_info->sectorsize);
+			ASSERT(datal <= fs_info->blocksize);
			if (WARN_ON(type != BTRFS_FILE_EXTENT_INLINE) ||
			    WARN_ON(key.offset != 0) ||
-			    WARN_ON(datal > fs_info->sectorsize)) {
+			    WARN_ON(datal > fs_info->blocksize)) {
				ret = -EUCLEAN;
				goto out;
			}
@@ -554,7 +554,7 @@ static int btrfs_clone(struct inode *src, struct inode *inode,
			BTRFS_I(inode)->last_reflink_trans = trans->transid;
 
 		last_dest_end = ALIGN(new_key.offset + datal,
-				      fs_info->sectorsize);
+				      fs_info->blocksize);
 		ret = clone_finish_inode_update(trans, inode, last_dest_end,
						destoff, olen, no_time_update);
 		if (ret)
@@ -637,7 +637,7 @@ static int btrfs_extent_same_range(struct inode *src, u64 loff, u64 len,
 	const u64 end = dst_loff + len - 1;
 	struct extent_state *cached_state = NULL;
 	struct btrfs_fs_info *fs_info = BTRFS_I(src)->root->fs_info;
-	const u64 bs = fs_info->sectorsize;
+	const u64 bs = fs_info->blocksize;
 	int ret;
 
 	/*
@@ -707,7 +707,7 @@ static noinline int btrfs_clone_files(struct file *file, struct file *file_src,
 	int ret;
 	int wb_ret;
 	u64 len = olen;
-	u64 bs = fs_info->sectorsize;
+	u64 bs = fs_info->blocksize;
 	u64 end;
 
 	/*
@@ -727,7 +727,7 @@ static noinline int btrfs_clone_files(struct file *file, struct file *file_src,
 		return ret;
 	/*
	 * We may have truncated the last block if the inode's size is
-	 * not sector size aligned, so we need to wait for writeback to
+	 * not block size aligned, so we need to wait for writeback to
	 * complete before proceeding further, otherwise we can race
	 * with cloning and attempt to increment a reference to an
	 * extent that no longer exists (writeback completed right after
@@ -777,7 +777,7 @@ static int btrfs_remap_file_range_prep(struct file *file_in, loff_t pos_in,
 {
 	struct inode *inode_in = file_inode(file_in);
 	struct inode *inode_out = file_inode(file_out);
-	u64 bs = BTRFS_I(inode_out)->root->fs_info->sectorsize;
+	u64 bs = BTRFS_I(inode_out)->root->fs_info->blocksize;
 	u64 wb_len;
 	int ret;
 
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index cdd9a7b15a11..e43a351a98f2 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -905,8 +905,8 @@ int replace_file_extents(struct btrfs_trans_handle *trans,
 		end = key.offset +
 		      btrfs_file_extent_num_bytes(leaf, fi);
 		WARN_ON(!IS_ALIGNED(key.offset,
-				    fs_info->sectorsize));
-		WARN_ON(!IS_ALIGNED(end, fs_info->sectorsize));
+				    fs_info->blocksize));
+		WARN_ON(!IS_ALIGNED(end, fs_info->blocksize));
 		end--;
 		/* Take mmap lock to serialize with reflinks. */
 		if (!down_read_trylock(&inode->i_mmap_lock))
@@ -1361,7 +1361,7 @@ static int invalidate_extent_cache(struct btrfs_root *root,
				start = 0;
			else {
				start = min_key->offset;
-				WARN_ON(!IS_ALIGNED(start, fs_info->sectorsize));
+				WARN_ON(!IS_ALIGNED(start, fs_info->blocksize));
			}
 		} else {
			start = 0;
@@ -1376,7 +1376,7 @@ static int invalidate_extent_cache(struct btrfs_root *root,
				if (max_key->offset == 0)
					continue;
				end = max_key->offset;
-				WARN_ON(!IS_ALIGNED(end, fs_info->sectorsize));
+				WARN_ON(!IS_ALIGNED(end, fs_info->blocksize));
				end--;
			}
 		} else {
@@ -2683,11 +2683,11 @@ static noinline_for_stack int prealloc_file_extent_cluster(struct reloc_control
 	if (!PAGE_ALIGNED(i_size)) {
 		struct address_space *mapping = inode->vfs_inode.i_mapping;
 		struct btrfs_fs_info *fs_info = inode->root->fs_info;
-		const u32 sectorsize = fs_info->sectorsize;
+		const u32 blocksize = fs_info->blocksize;
 		struct folio *folio;
 
-		ASSERT(sectorsize < PAGE_SIZE);
-		ASSERT(IS_ALIGNED(i_size, sectorsize));
+		ASSERT(blocksize < PAGE_SIZE);
+		ASSERT(IS_ALIGNED(i_size, blocksize));
 
 		/*
		 * Subpage can't handle page with DIRTY but without UPTODATE
@@ -2936,7 +2936,7 @@ static int relocate_one_folio(struct reloc_control *rc,
 		u64 boundary_start = cluster->boundary[*cluster_nr] -
				     offset;
 		u64 boundary_end = boundary_start +
-				   fs_info->sectorsize - 1;
+				   fs_info->blocksize - 1;
 
 		set_extent_bit(&BTRFS_I(inode)->io_tree,
			       boundary_start, boundary_end,
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index f437138fefbc..d3c83653f4d7 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -1407,7 +1407,7 @@ static bool lookup_backref_cache(u64 leaf_bytenr, void *ctx,
 	struct backref_ctx *bctx = ctx;
 	struct send_ctx *sctx = bctx->sctx;
 	struct btrfs_fs_info *fs_info = sctx->send_root->fs_info;
-	const u64 key = leaf_bytenr >> fs_info->sectorsize_bits;
+	const u64 key = leaf_bytenr >> fs_info->blocksize_bits;
 	struct btrfs_lru_cache_entry *raw_entry;
 	struct backref_cache_entry *entry;
 
@@ -1462,7 +1462,7 @@ static void store_backref_cache(u64 leaf_bytenr, const struct ulist *root_ids,
 	if (!new_entry)
 		return;
 
-	new_entry->entry.key = leaf_bytenr >> fs_info->sectorsize_bits;
+	new_entry->entry.key = leaf_bytenr >> fs_info->blocksize_bits;
 	new_entry->entry.gen = 0;
 	new_entry->num_roots = 0;
 	ULIST_ITER_INIT(&uiter);
@@ -5790,7 +5790,7 @@ static int send_extent_data(struct send_ctx *sctx, struct btrfs_path *path,
 	/*
	 * Always operate only on ranges that are a multiple of the page
	 * size. This is not only to prevent zeroing parts of a page in
-	 * the case of subpage sector size, but also to guarantee we evict
+	 * the case of subpage block size, but also to guarantee we evict
	 * pages, as passing a range that is smaller than page size does
	 * not evict the respective page (only zeroes part of its content).
	 *
@@ -5888,11 +5888,11 @@ static int clone_range(struct send_ctx *sctx, struct btrfs_path *dst_path,
 	u64 clone_src_i_size = 0;
 
 	/*
-	 * Prevent cloning from a zero offset with a length matching the sector
+	 * Prevent cloning from a zero offset with a length matching the block
	 * size because in some scenarios this will make the receiver fail.
	 *
	 * For example, if in the source filesystem the extent at offset 0
-	 * has a length of sectorsize and it was written using direct IO, then
+	 * has a length of blocksize and it was written using direct IO, then
	 * it can never be an inline extent (even if compression is enabled).
	 * Then this extent can be cloned in the original filesystem to a non
	 * zero file offset, but it may not be possible to clone in the
@@ -5903,7 +5903,7 @@ static int clone_range(struct send_ctx *sctx, struct btrfs_path *dst_path,
	 * filesystem has.
	 */
 	if (clone_root->offset == 0 &&
-	    len == sctx->send_root->fs_info->sectorsize)
+	    len == sctx->send_root->fs_info->blocksize)
 		return send_extent_data(sctx, dst_path, offset, len);
 
 	path = alloc_path_for_send();
@@ -6045,11 +6045,11 @@ static int clone_range(struct send_ctx *sctx, struct btrfs_path *dst_path,
 		if (btrfs_file_extent_disk_bytenr(leaf, ei) == disk_byte &&
 		    clone_data_offset == data_offset) {
			const u64 src_end = clone_root->offset + clone_len;
-			const u64 sectorsize = SZ_64K;
+			const u64 blocksize = SZ_64K;
 
			/*
			 * We can't clone the last block, when its size is not
-			 * sector size aligned, into the middle of a file. If we
+			 * block size aligned, into the middle of a file. If we
			 * do so, the receiver will get a failure (-EINVAL) when
			 * trying to clone or will silently corrupt the data in
			 * the destination file if it's on a kernel without the
@@ -6060,18 +6060,18 @@ static int clone_range(struct send_ctx *sctx, struct btrfs_path *dst_path,
			 * So issue a clone of the aligned down range plus a
			 * regular write for the eof block, if we hit that case.
			 *
-			 * Also, we use the maximum possible sector size, 64K,
-			 * because we don't know what's the sector size of the
+			 * Also, we use the maximum possible block size, 64K,
+			 * because we don't know what's the block size of the
			 * filesystem that receives the stream, so we have to
-			 * assume the largest possible sector size.
+			 * assume the largest possible block size.
			 */
			if (src_end == clone_src_i_size &&
-			    !IS_ALIGNED(src_end, sectorsize) &&
+			    !IS_ALIGNED(src_end, blocksize) &&
			    offset + clone_len < sctx->cur_inode_size) {
				u64 slen;
 
				slen = ALIGN_DOWN(src_end - clone_root->offset,
-						  sectorsize);
+						  blocksize);
				if (slen > 0) {
					ret = send_clone(sctx, offset, slen,
							 clone_root);
@@ -6096,8 +6096,8 @@ static int clone_range(struct send_ctx *sctx, struct btrfs_path *dst_path,
			 * When using encoded writes (BTRFS_SEND_FLAG_COMPRESSED
			 * was passed to the send ioctl), this helps avoid
			 * sending an encoded write for an offset that is not
-			 * sector size aligned, in case the i_size of the source
-			 * inode is not sector size aligned. That will make the
+			 * block size aligned, in case the i_size of the source
+			 * inode is not block size aligned. That will make the
			 * receiver fallback to decompression of the data and
			 * writing it using regular buffered IO, therefore while
			 * not incorrect, it's not optimal due decompression and
@@ -6154,7 +6154,7 @@ static int send_write_or_clone(struct send_ctx *sctx,
 	int ret = 0;
 	u64 offset = key->offset;
 	u64 end;
-	u64 bs = sctx->send_root->fs_info->sectorsize;
+	u64 bs = sctx->send_root->fs_info->blocksize;
 	struct btrfs_file_extent_item *ei;
 	u64 disk_byte;
 	u64 data_offset;
@@ -6195,7 +6195,7 @@ static int send_write_or_clone(struct send_ctx *sctx,
	 * We do this truncate to the final i_size when we finish
	 * processing the inode, but it's too late by then. And here we
	 * truncate to the start offset of the range because it's always
-	 * sector size aligned while if it were the final i_size it
+	 * block size aligned while if it were the final i_size it
	 * would result in dirtying part of a page, filling part of a
	 * page with zeroes and then having the clone operation at the
	 * receiver trigger IO and wait for it due to the dirty page.
@@ -6347,7 +6347,7 @@ static int is_extent_unchanged(struct send_ctx *sctx,
	 * condition for inline extents too). This should normally not
	 * happen but it's possible for example when we have an inline
	 * compressed extent representing data with a size matching
-	 * the page size (currently the same as sector size).
+	 * the page size (currently the same as block size).
	 */
 	if (right_type == BTRFS_FILE_EXTENT_INLINE) {
 		ret = 0;
diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index f6eaaf20229d..4a056d7ef1ca 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -718,12 +718,12 @@ bool btrfs_check_options(const struct btrfs_fs_info *info,
  */
 void btrfs_set_free_space_cache_settings(struct btrfs_fs_info *fs_info)
 {
-	if (fs_info->sectorsize < PAGE_SIZE) {
+	if (fs_info->blocksize < PAGE_SIZE) {
 		btrfs_clear_opt(fs_info->mount_opt, SPACE_CACHE);
 		if (!btrfs_test_opt(fs_info, FREE_SPACE_TREE)) {
			btrfs_info(fs_info,
-				   "forcing free space tree for sector size %u with page size %lu",
-				   fs_info->sectorsize, PAGE_SIZE);
+				   "forcing free space tree for block size %u with page size %lu",
+				   fs_info->blocksize, PAGE_SIZE);
			btrfs_set_opt(fs_info->mount_opt, FREE_SPACE_TREE);
 		}
 	}
@@ -1719,7 +1719,7 @@ static int btrfs_statfs(struct dentry *dentry, struct kstatfs *buf)
 	u64 total_used = 0;
 	u64 total_free_data = 0;
 	u64 total_free_meta = 0;
-	u32 bits = fs_info->sectorsize_bits;
+	u32 bits = fs_info->blocksize_bits;
 	__be32 *fsid = (__be32 *)fs_info->fs_devices->fsid;
 	unsigned factor = 1;
 	struct btrfs_block_rsv *block_rsv = &fs_info->global_block_rsv;
@@ -1803,7 +1803,7 @@ static int btrfs_statfs(struct dentry *dentry, struct kstatfs *buf)
 		buf->f_bavail = 0;
 
 	buf->f_type = BTRFS_SUPER_MAGIC;
-	buf->f_bsize = fs_info->sectorsize;
+	buf->f_bsize = fs_info->blocksize;
 	buf->f_namelen = BTRFS_NAME_LEN;
 
 	/* We treat it as constant endianness (it doesn't matter _which_)
diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
index 7f09b6c9cc2d..23696a842ff9 100644
--- a/fs/btrfs/sysfs.c
+++ b/fs/btrfs/sysfs.c
@@ -1128,7 +1128,7 @@ static ssize_t btrfs_sectorsize_show(struct kobject *kobj,
 {
 	struct btrfs_fs_info *fs_info = to_fs_info(kobj);
 
-	return sysfs_emit(buf, "%u\n", fs_info->sectorsize);
+	return sysfs_emit(buf, "%u\n", fs_info->blocksize);
 }
 
 BTRFS_ATTR(, sectorsize, btrfs_sectorsize_show);
@@ -1180,7 +1180,7 @@ static ssize_t btrfs_clone_alignment_show(struct kobject *kobj,
 {
 	struct btrfs_fs_info *fs_info = to_fs_info(kobj);
 
-	return sysfs_emit(buf, "%u\n", fs_info->sectorsize);
+	return sysfs_emit(buf, "%u\n", fs_info->blocksize);
 }
 
 BTRFS_ATTR(, clone_alignment, btrfs_clone_alignment_show);
diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
index c8d6587688b3..995aec0e8ce3 100644
--- a/fs/btrfs/tree-log.c
+++ b/fs/btrfs/tree-log.c
@@ -672,7 +672,7 @@ static noinline int replay_one_extent(struct btrfs_trans_handle *trans,
 		size = btrfs_file_extent_ram_bytes(eb, item);
 		nbytes = btrfs_file_extent_ram_bytes(eb, item);
 		extent_end = ALIGN(start + size,
-				   fs_info->sectorsize);
+				   fs_info->blocksize);
 	} else {
 		ret = 0;
 		goto out;
@@ -2489,7 +2489,7 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
				break;
			}
 
			from = ALIGN(i_size_read(inode),
-				     root->fs_info->sectorsize);
+				     root->fs_info->blocksize);
			drop_args.start = from;
			drop_args.end = (u64)-1;
			drop_args.drop_cache = true;
@@ -5232,7 +5232,7 @@ static int btrfs_log_holes(struct btrfs_trans_handle *trans,
			u64 hole_len;
 
			btrfs_release_path(path);
-			hole_len = ALIGN(i_size - prev_extent_end, fs_info->sectorsize);
+			hole_len = ALIGN(i_size - prev_extent_end, fs_info->blocksize);
			ret = btrfs_insert_hole_extent(trans, root->log_root, ino,
						       prev_extent_end, hole_len);
			if (ret < 0)
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index d32913c51d69..7e472382d44e 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -2830,11 +2830,11 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
 	set_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state);
 	device->generation = trans->transid;
-	device->io_width = fs_info->sectorsize;
-	device->io_align = fs_info->sectorsize;
-	device->sector_size = fs_info->sectorsize;
+	device->io_width = fs_info->blocksize;
+	device->io_align = fs_info->blocksize;
+	device->sector_size = fs_info->blocksize;
 	device->total_bytes =
-		round_down(bdev_nr_bytes(device->bdev), fs_info->sectorsize);
+		round_down(bdev_nr_bytes(device->bdev), fs_info->blocksize);
 	device->disk_total_bytes = device->total_bytes;
 	device->commit_total_bytes = device->total_bytes;
 	set_bit(BTRFS_DEV_STATE_IN_FS_METADATA, &device->dev_state);
@@ -2878,7 +2878,7 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
 	orig_super_total_bytes = btrfs_super_total_bytes(fs_info->super_copy);
 	btrfs_set_super_total_bytes(fs_info->super_copy,
 		round_down(orig_super_total_bytes + device->total_bytes,
-			   fs_info->sectorsize));
+			   fs_info->blocksize));
 
 	orig_super_num_devices = btrfs_super_num_devices(fs_info->super_copy);
 	btrfs_set_super_num_devices(fs_info->super_copy,
@@ -3058,11 +3058,11 @@ int btrfs_grow_device(struct btrfs_trans_handle *trans,
 	if (!test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state))
 		return -EACCES;
 
-	new_size = round_down(new_size, fs_info->sectorsize);
+	new_size = round_down(new_size, fs_info->blocksize);
 
 	mutex_lock(&fs_info->chunk_mutex);
 	old_total = btrfs_super_total_bytes(super_copy);
-	diff = round_down(new_size - device->total_bytes, fs_info->sectorsize);
+	diff = round_down(new_size - device->total_bytes, fs_info->blocksize);
 
 	if (new_size <= device->total_bytes ||
 	    test_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state)) {
@@ -3071,7 +3071,7 @@ int btrfs_grow_device(struct btrfs_trans_handle *trans,
 	}
 
 	btrfs_set_super_total_bytes(super_copy,
-			round_down(old_total + diff, fs_info->sectorsize));
+			round_down(old_total + diff, fs_info->blocksize));
 	device->fs_devices->total_rw_bytes += diff;
 	atomic64_add(diff, &fs_info->free_chunk_space);
@@ -4932,9 +4932,9 @@ int btrfs_shrink_device(struct btrfs_device *device, u64 new_size)
 	u64 start;
 	u64 free_diff = 0;
 
-	new_size = round_down(new_size, fs_info->sectorsize);
+	new_size = round_down(new_size, fs_info->blocksize);
 	start = new_size;
-	diff = round_down(old_size - new_size, fs_info->sectorsize);
+	diff = round_down(old_size - new_size, fs_info->blocksize);
 
 	if (test_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state))
 		return -EINVAL;
@@ -5085,7 +5085,7 @@ int btrfs_shrink_device(struct btrfs_device *device, u64 new_size)
 	WARN_ON(diff > old_total);
 	btrfs_set_super_total_bytes(super_copy,
-			round_down(old_total - diff, fs_info->sectorsize));
+			round_down(old_total - diff, fs_info->blocksize));
 	mutex_unlock(&fs_info->chunk_mutex);
 
 	btrfs_reserve_chunk_metadata(trans, false);
@@ -5773,7 +5773,7 @@ int btrfs_chunk_alloc_add_chunk_item(struct btrfs_trans_handle *trans,
 	btrfs_set_stack_chunk_num_stripes(chunk, map->num_stripes);
 	btrfs_set_stack_chunk_io_align(chunk, BTRFS_STRIPE_LEN);
 	btrfs_set_stack_chunk_io_width(chunk, BTRFS_STRIPE_LEN);
-	btrfs_set_stack_chunk_sector_size(chunk, fs_info->sectorsize);
+	btrfs_set_stack_chunk_sector_size(chunk, fs_info->blocksize);
 	btrfs_set_stack_chunk_sub_stripes(chunk, map->sub_stripes);
 
 	key.objectid = BTRFS_FIRST_CHUNK_TREE_OBJECTID;
@@ -5945,7 +5945,7 @@ unsigned long btrfs_full_stripe_len(struct btrfs_fs_info *fs_info,
				    u64 logical)
 {
 	struct btrfs_chunk_map *map;
-	unsigned long len = fs_info->sectorsize;
+	unsigned long len = fs_info->blocksize;
 
 	if (!btrfs_fs_incompat(fs_info, RAID56))
 		return len;
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index abea8f2f497e..1cfde2bb7b74 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -746,7 +746,7 @@ int btrfs_check_zoned_mode(struct btrfs_fs_info *fs_info)
			min3((u64)lim->max_zone_append_sectors << SECTOR_SHIFT,
			     (u64)lim->max_sectors << SECTOR_SHIFT,
			     (u64)lim->max_segments << PAGE_SHIFT),
-			fs_info->sectorsize);
+			fs_info->blocksize);
 	fs_info->fs_devices->chunk_alloc_policy = BTRFS_CHUNK_ALLOC_ZONED;
 	if (fs_info->max_zone_append_size < fs_info->max_extent_size)
 		fs_info->max_extent_size = fs_info->max_zone_append_size;
@@ -2160,7 +2160,7 @@ static void wait_eb_writebacks(struct btrfs_block_group *block_group) rcu_read_lock(); radix_tree_for_each_slot(slot, &fs_info->buffer_radix, &iter, - block_group->start >> fs_info->sectorsize_bits) { + block_group->start >> fs_info->blocksize_bits) { eb = radix_tree_deref_slot(slot); if (!eb) continue; @@ -2375,7 +2375,7 @@ void btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info, u64 logical, u64 len /* No MIXED_BG on zoned btrfs. */ if (block_group->flags & BTRFS_BLOCK_GROUP_DATA) - min_alloc_bytes = fs_info->sectorsize; + min_alloc_bytes = fs_info->blocksize; else min_alloc_bytes = fs_info->nodesize;

From patchwork Wed Dec 18 09:41:31 2024
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 15/18] btrfs: migrate selftests to use block size terminology
Date: Wed, 18 Dec 2024 20:11:31 +1030
X-Mailer: git-send-email 2.47.1

Straightforward rename from "sector" to "block".
Signed-off-by: Qu Wenruo
---
 fs/btrfs/tests/btrfs-tests.c            |  38 ++--
 fs/btrfs/tests/btrfs-tests.h            |  18 +-
 fs/btrfs/tests/delayed-refs-tests.c     |   4 +-
 fs/btrfs/tests/extent-buffer-tests.c    |   8 +-
 fs/btrfs/tests/extent-io-tests.c        |  34 +--
 fs/btrfs/tests/free-space-tests.c       | 104 ++++-----
 fs/btrfs/tests/free-space-tree-tests.c  |  28 +--
 fs/btrfs/tests/inode-tests.c            | 266 ++++++++++++------------
 fs/btrfs/tests/qgroup-tests.c           |  12 +-
 fs/btrfs/tests/raid-stripe-tree-tests.c |   8 +-
 10 files changed, 260 insertions(+), 260 deletions(-)

diff --git a/fs/btrfs/tests/btrfs-tests.c b/fs/btrfs/tests/btrfs-tests.c index 5eff8d7d2360..8ade0d610e63 --- a/fs/btrfs/tests/btrfs-tests.c +++ b/fs/btrfs/tests/btrfs-tests.c @@ -115,7 +115,7 @@ static void btrfs_free_dummy_device(struct btrfs_device *dev) kfree(dev); } -struct btrfs_fs_info *btrfs_alloc_dummy_fs_info(u32 nodesize, u32 sectorsize) +struct btrfs_fs_info *btrfs_alloc_dummy_fs_info(u32 nodesize, u32 blocksize) { struct btrfs_fs_info *fs_info = kzalloc(sizeof(struct btrfs_fs_info), GFP_KERNEL); @@ -141,8 +141,8 @@ struct btrfs_fs_info *btrfs_alloc_dummy_fs_info(u32 nodesize, u32 sectorsize) btrfs_init_fs_info(fs_info); fs_info->nodesize = nodesize; - fs_info->sectorsize = sectorsize; - fs_info->sectorsize_bits = ilog2(sectorsize); + fs_info->blocksize = blocksize; + fs_info->blocksize_bits = ilog2(blocksize); /* CRC32C csum size.
*/ fs_info->csum_size = 4; @@ -232,7 +232,7 @@ btrfs_alloc_dummy_block_group(struct btrfs_fs_info *fs_info, cache->start = 0; cache->length = length; - cache->full_stripe_len = fs_info->sectorsize; + cache->full_stripe_len = fs_info->blocksize; cache->fs_info = fs_info; INIT_LIST_HEAD(&cache->list); @@ -274,43 +274,43 @@ void btrfs_init_dummy_trans(struct btrfs_trans_handle *trans, int btrfs_run_sanity_tests(void) { int ret, i; - u32 sectorsize, nodesize; - u32 test_sectorsize[] = { + u32 blocksize, nodesize; + u32 test_blocksize[] = { PAGE_SIZE, }; ret = btrfs_init_test_fs(); if (ret) return ret; - for (i = 0; i < ARRAY_SIZE(test_sectorsize); i++) { - sectorsize = test_sectorsize[i]; - for (nodesize = sectorsize; + for (i = 0; i < ARRAY_SIZE(test_blocksize); i++) { + blocksize = test_blocksize[i]; + for (nodesize = blocksize; nodesize <= BTRFS_MAX_METADATA_BLOCKSIZE; nodesize <<= 1) { - pr_info("BTRFS: selftest: sectorsize: %u nodesize: %u\n", - sectorsize, nodesize); - ret = btrfs_test_free_space_cache(sectorsize, nodesize); + pr_info("BTRFS: selftest: blocksize: %u nodesize: %u\n", + blocksize, nodesize); + ret = btrfs_test_free_space_cache(blocksize, nodesize); if (ret) goto out; - ret = btrfs_test_extent_buffer_operations(sectorsize, + ret = btrfs_test_extent_buffer_operations(blocksize, nodesize); if (ret) goto out; - ret = btrfs_test_extent_io(sectorsize, nodesize); + ret = btrfs_test_extent_io(blocksize, nodesize); if (ret) goto out; - ret = btrfs_test_inodes(sectorsize, nodesize); + ret = btrfs_test_inodes(blocksize, nodesize); if (ret) goto out; - ret = btrfs_test_qgroups(sectorsize, nodesize); + ret = btrfs_test_qgroups(blocksize, nodesize); if (ret) goto out; - ret = btrfs_test_free_space_tree(sectorsize, nodesize); + ret = btrfs_test_free_space_tree(blocksize, nodesize); if (ret) goto out; - ret = btrfs_test_raid_stripe_tree(sectorsize, nodesize); + ret = btrfs_test_raid_stripe_tree(blocksize, nodesize); if (ret) goto out; - ret = 
btrfs_test_delayed_refs(sectorsize, nodesize); + ret = btrfs_test_delayed_refs(blocksize, nodesize); if (ret) goto out; } diff --git a/fs/btrfs/tests/btrfs-tests.h b/fs/btrfs/tests/btrfs-tests.h index 4307bdaa6749..a3d3d806211a 100644 --- a/fs/btrfs/tests/btrfs-tests.h +++ b/fs/btrfs/tests/btrfs-tests.h @@ -36,17 +36,17 @@ struct btrfs_root; struct btrfs_trans_handle; struct btrfs_transaction; -int btrfs_test_extent_buffer_operations(u32 sectorsize, u32 nodesize); -int btrfs_test_free_space_cache(u32 sectorsize, u32 nodesize); -int btrfs_test_extent_io(u32 sectorsize, u32 nodesize); -int btrfs_test_inodes(u32 sectorsize, u32 nodesize); -int btrfs_test_qgroups(u32 sectorsize, u32 nodesize); -int btrfs_test_free_space_tree(u32 sectorsize, u32 nodesize); -int btrfs_test_raid_stripe_tree(u32 sectorsize, u32 nodesize); +int btrfs_test_extent_buffer_operations(u32 blocksize, u32 nodesize); +int btrfs_test_free_space_cache(u32 blocksize, u32 nodesize); +int btrfs_test_extent_io(u32 blocksize, u32 nodesize); +int btrfs_test_inodes(u32 blocksize, u32 nodesize); +int btrfs_test_qgroups(u32 blocksize, u32 nodesize); +int btrfs_test_free_space_tree(u32 blocksize, u32 nodesize); +int btrfs_test_raid_stripe_tree(u32 blocksize, u32 nodesize); int btrfs_test_extent_map(void); -int btrfs_test_delayed_refs(u32 sectorsize, u32 nodesize); +int btrfs_test_delayed_refs(u32 blocksize, u32 nodesize); struct inode *btrfs_new_test_inode(void); -struct btrfs_fs_info *btrfs_alloc_dummy_fs_info(u32 nodesize, u32 sectorsize); +struct btrfs_fs_info *btrfs_alloc_dummy_fs_info(u32 nodesize, u32 blocksize); void btrfs_free_dummy_fs_info(struct btrfs_fs_info *fs_info); void btrfs_free_dummy_root(struct btrfs_root *root); struct btrfs_block_group * diff --git a/fs/btrfs/tests/delayed-refs-tests.c b/fs/btrfs/tests/delayed-refs-tests.c index 6558508c2ddf..908b5eeabb01 100644 --- a/fs/btrfs/tests/delayed-refs-tests.c +++ b/fs/btrfs/tests/delayed-refs-tests.c @@ -971,7 +971,7 @@ static int 
select_delayed_refs_test(struct btrfs_trans_handle *trans) return ret; } -int btrfs_test_delayed_refs(u32 sectorsize, u32 nodesize) +int btrfs_test_delayed_refs(u32 blocksize, u32 nodesize) { struct btrfs_transaction *transaction; struct btrfs_trans_handle trans; @@ -980,7 +980,7 @@ int btrfs_test_delayed_refs(u32 sectorsize, u32 nodesize) test_msg("running delayed refs tests"); - fs_info = btrfs_alloc_dummy_fs_info(nodesize, sectorsize); + fs_info = btrfs_alloc_dummy_fs_info(nodesize, blocksize); if (!fs_info) { test_std_err(TEST_ALLOC_FS_INFO); return -ENOMEM; diff --git a/fs/btrfs/tests/extent-buffer-tests.c b/fs/btrfs/tests/extent-buffer-tests.c index 6a43a64ba55a..b0c30a2740e8 100644 --- a/fs/btrfs/tests/extent-buffer-tests.c +++ b/fs/btrfs/tests/extent-buffer-tests.c @@ -10,7 +10,7 @@ #include "../disk-io.h" #include "../accessors.h" -static int test_btrfs_split_item(u32 sectorsize, u32 nodesize) +static int test_btrfs_split_item(u32 blocksize, u32 nodesize) { struct btrfs_fs_info *fs_info; struct btrfs_path *path = NULL; @@ -28,7 +28,7 @@ static int test_btrfs_split_item(u32 sectorsize, u32 nodesize) test_msg("running btrfs_split_item tests"); - fs_info = btrfs_alloc_dummy_fs_info(nodesize, sectorsize); + fs_info = btrfs_alloc_dummy_fs_info(nodesize, blocksize); if (!fs_info) { test_std_err(TEST_ALLOC_FS_INFO); return -ENOMEM; @@ -216,8 +216,8 @@ static int test_btrfs_split_item(u32 sectorsize, u32 nodesize) return ret; } -int btrfs_test_extent_buffer_operations(u32 sectorsize, u32 nodesize) +int btrfs_test_extent_buffer_operations(u32 blocksize, u32 nodesize) { test_msg("running extent buffer operation tests"); - return test_btrfs_split_item(sectorsize, nodesize); + return test_btrfs_split_item(blocksize, nodesize); } diff --git a/fs/btrfs/tests/extent-io-tests.c b/fs/btrfs/tests/extent-io-tests.c index 0a2dbfaaf49e..0b98291167b4 100644 --- a/fs/btrfs/tests/extent-io-tests.c +++ b/fs/btrfs/tests/extent-io-tests.c @@ -106,7 +106,7 @@ static void 
dump_extent_io_tree(const struct extent_io_tree *tree) } } -static int test_find_delalloc(u32 sectorsize, u32 nodesize) +static int test_find_delalloc(u32 blocksize, u32 nodesize) { struct btrfs_fs_info *fs_info; struct btrfs_root *root = NULL; @@ -124,7 +124,7 @@ static int test_find_delalloc(u32 sectorsize, u32 nodesize) test_msg("running find delalloc tests"); - fs_info = btrfs_alloc_dummy_fs_info(nodesize, sectorsize); + fs_info = btrfs_alloc_dummy_fs_info(nodesize, blocksize); if (!fs_info) { test_std_err(TEST_ALLOC_FS_INFO); return -ENOMEM; @@ -177,7 +177,7 @@ static int test_find_delalloc(u32 sectorsize, u32 nodesize) * |--- delalloc ---| * |--- search ---| */ - set_extent_bit(tmp, 0, sectorsize - 1, EXTENT_DELALLOC, NULL); + set_extent_bit(tmp, 0, blocksize - 1, EXTENT_DELALLOC, NULL); start = 0; end = start + PAGE_SIZE - 1; found = find_lock_delalloc_range(inode, page_folio(locked_page), &start, @@ -186,9 +186,9 @@ static int test_find_delalloc(u32 sectorsize, u32 nodesize) test_err("should have found at least one delalloc"); goto out_bits; } - if (start != 0 || end != (sectorsize - 1)) { + if (start != 0 || end != (blocksize - 1)) { test_err("expected start 0 end %u, got start %llu end %llu", - sectorsize - 1, start, end); + blocksize - 1, start, end); goto out_bits; } unlock_extent(tmp, start, end, NULL); @@ -208,7 +208,7 @@ static int test_find_delalloc(u32 sectorsize, u32 nodesize) test_err("couldn't find the locked page"); goto out_bits; } - set_extent_bit(tmp, sectorsize, max_bytes - 1, EXTENT_DELALLOC, NULL); + set_extent_bit(tmp, blocksize, max_bytes - 1, EXTENT_DELALLOC, NULL); start = test_start; end = start + PAGE_SIZE - 1; found = find_lock_delalloc_range(inode, page_folio(locked_page), &start, @@ -236,7 +236,7 @@ static int test_find_delalloc(u32 sectorsize, u32 nodesize) * |--- delalloc ---| * |--- search ---| */ - test_start = max_bytes + sectorsize; + test_start = max_bytes + blocksize; locked_page = find_lock_page(inode->i_mapping, 
test_start >> PAGE_SHIFT); if (!locked_page) { @@ -503,7 +503,7 @@ static int __test_eb_bitmaps(unsigned long *bitmap, struct extent_buffer *eb) return 0; } -static int test_eb_bitmaps(u32 sectorsize, u32 nodesize) +static int test_eb_bitmaps(u32 blocksize, u32 nodesize) { struct btrfs_fs_info *fs_info; unsigned long *bitmap = NULL; @@ -512,7 +512,7 @@ static int test_eb_bitmaps(u32 sectorsize, u32 nodesize) test_msg("running extent buffer bitmap tests"); - fs_info = btrfs_alloc_dummy_fs_info(nodesize, sectorsize); + fs_info = btrfs_alloc_dummy_fs_info(nodesize, blocksize); if (!fs_info) { test_std_err(TEST_ALLOC_FS_INFO); return -ENOMEM; @@ -539,10 +539,10 @@ static int test_eb_bitmaps(u32 sectorsize, u32 nodesize) free_extent_buffer(eb); /* - * Test again for case where the tree block is sectorsize aligned but + * Test again for case where the tree block is blocksize aligned but * not nodesize aligned. */ - eb = __alloc_dummy_extent_buffer(fs_info, sectorsize, nodesize); + eb = __alloc_dummy_extent_buffer(fs_info, blocksize, nodesize); if (!eb) { test_std_err(TEST_ALLOC_ROOT); ret = -ENOMEM; @@ -708,7 +708,7 @@ static void init_eb_and_memory(struct extent_buffer *eb, void *memory) write_extent_buffer(eb, memory, 0, eb->len); } -static int test_eb_mem_ops(u32 sectorsize, u32 nodesize) +static int test_eb_mem_ops(u32 blocksize, u32 nodesize) { struct btrfs_fs_info *fs_info; struct extent_buffer *eb = NULL; @@ -717,7 +717,7 @@ static int test_eb_mem_ops(u32 sectorsize, u32 nodesize) test_msg("running extent buffer memory operation tests"); - fs_info = btrfs_alloc_dummy_fs_info(nodesize, sectorsize); + fs_info = btrfs_alloc_dummy_fs_info(nodesize, blocksize); if (!fs_info) { test_std_err(TEST_ALLOC_FS_INFO); return -ENOMEM; @@ -808,13 +808,13 @@ static int test_eb_mem_ops(u32 sectorsize, u32 nodesize) return ret; } -int btrfs_test_extent_io(u32 sectorsize, u32 nodesize) +int btrfs_test_extent_io(u32 blocksize, u32 nodesize) { int ret; test_msg("running extent I/O 
tests"); - ret = test_find_delalloc(sectorsize, nodesize); + ret = test_find_delalloc(blocksize, nodesize); if (ret) goto out; @@ -822,11 +822,11 @@ int btrfs_test_extent_io(u32 sectorsize, u32 nodesize) if (ret) goto out; - ret = test_eb_bitmaps(sectorsize, nodesize); + ret = test_eb_bitmaps(blocksize, nodesize); if (ret) goto out; - ret = test_eb_mem_ops(sectorsize, nodesize); + ret = test_eb_mem_ops(blocksize, nodesize); out: return ret; } diff --git a/fs/btrfs/tests/free-space-tests.c b/fs/btrfs/tests/free-space-tests.c index ebf68fcd2149..a5b27fd53b53 100644 --- a/fs/btrfs/tests/free-space-tests.c +++ b/fs/btrfs/tests/free-space-tests.c @@ -87,7 +87,7 @@ static int test_extents(struct btrfs_block_group *cache) return 0; } -static int test_bitmaps(struct btrfs_block_group *cache, u32 sectorsize) +static int test_bitmaps(struct btrfs_block_group *cache, u32 blocksize) { u64 next_bitmap_offset; int ret; @@ -127,7 +127,7 @@ static int test_bitmaps(struct btrfs_block_group *cache, u32 sectorsize) * The first bitmap we have starts at offset 0 so the next one is just * at the end of the first bitmap. 
*/ - next_bitmap_offset = (u64)(BITS_PER_BITMAP * sectorsize); + next_bitmap_offset = (u64)(BITS_PER_BITMAP * blocksize); /* Test a bit straddling two bitmaps */ ret = test_add_free_space_entry(cache, next_bitmap_offset - SZ_2M, @@ -156,9 +156,9 @@ static int test_bitmaps(struct btrfs_block_group *cache, u32 sectorsize) /* This is the high grade jackassery */ static int test_bitmaps_and_extents(struct btrfs_block_group *cache, - u32 sectorsize) + u32 blocksize) { - u64 bitmap_offset = (u64)(BITS_PER_BITMAP * sectorsize); + u64 bitmap_offset = (u64)(BITS_PER_BITMAP * blocksize); int ret; test_msg("running bitmap and extent tests"); @@ -393,7 +393,7 @@ static int check_cache_empty(struct btrfs_block_group *cache) */ static int test_steal_space_from_bitmap_to_extent(struct btrfs_block_group *cache, - u32 sectorsize) + u32 blocksize) { int ret; u64 offset; @@ -530,7 +530,7 @@ test_steal_space_from_bitmap_to_extent(struct btrfs_block_group *cache, * The goal is to test that the bitmap entry space stealing doesn't * steal this space region. */ - ret = btrfs_add_free_space(cache, SZ_128M + SZ_16M, sectorsize); + ret = btrfs_add_free_space(cache, SZ_128M + SZ_16M, blocksize); if (ret) { test_err("error adding free space: %d", ret); return ret; @@ -588,8 +588,8 @@ test_steal_space_from_bitmap_to_extent(struct btrfs_block_group *cache, return -ENOENT; } - if (cache->free_space_ctl->free_space != (SZ_1M + sectorsize)) { - test_err("cache free space is not 1Mb + %u", sectorsize); + if (cache->free_space_ctl->free_space != (SZ_1M + blocksize)) { + test_err("cache free space is not 1Mb + %u", blocksize); return -EINVAL; } @@ -604,24 +604,24 @@ test_steal_space_from_bitmap_to_extent(struct btrfs_block_group *cache, } /* - * All that remains is a sectorsize free space region in a bitmap. + * All that remains is a blocksize free space region in a bitmap. * Confirm. 
*/ ret = check_num_extents_and_bitmaps(cache, 1, 1); if (ret) return ret; - if (cache->free_space_ctl->free_space != sectorsize) { - test_err("cache free space is not %u", sectorsize); + if (cache->free_space_ctl->free_space != blocksize) { + test_err("cache free space is not %u", blocksize); return -EINVAL; } offset = btrfs_find_space_for_alloc(cache, - 0, sectorsize, 0, + 0, blocksize, 0, &max_extent_size); if (offset != (SZ_128M + SZ_16M)) { test_err("failed to allocate %u, returned offset : %llu", - sectorsize, offset); + blocksize, offset); return -EINVAL; } @@ -728,7 +728,7 @@ test_steal_space_from_bitmap_to_extent(struct btrfs_block_group *cache, * The goal is to test that the bitmap entry space stealing doesn't * steal this space region. */ - ret = btrfs_add_free_space(cache, SZ_32M, 2 * sectorsize); + ret = btrfs_add_free_space(cache, SZ_32M, 2 * blocksize); if (ret) { test_err("error adding free space: %d", ret); return ret; @@ -752,7 +752,7 @@ test_steal_space_from_bitmap_to_extent(struct btrfs_block_group *cache, /* * Confirm that our extent entry didn't stole all free space from the - * bitmap, because of the small 2 * sectorsize free space region. + * bitmap, because of the small 2 * blocksize free space region. */ ret = check_num_extents_and_bitmaps(cache, 2, 1); if (ret) @@ -778,8 +778,8 @@ test_steal_space_from_bitmap_to_extent(struct btrfs_block_group *cache, return -ENOENT; } - if (cache->free_space_ctl->free_space != (SZ_1M + 2 * sectorsize)) { - test_err("cache free space is not 1Mb + %u", 2 * sectorsize); + if (cache->free_space_ctl->free_space != (SZ_1M + 2 * blocksize)) { + test_err("cache free space is not 1Mb + %u", 2 * blocksize); return -EINVAL; } @@ -793,24 +793,24 @@ test_steal_space_from_bitmap_to_extent(struct btrfs_block_group *cache, } /* - * All that remains is 2 * sectorsize free space region + * All that remains is 2 * blocksize free space region * in a bitmap. Confirm. 
*/ ret = check_num_extents_and_bitmaps(cache, 1, 1); if (ret) return ret; - if (cache->free_space_ctl->free_space != 2 * sectorsize) { - test_err("cache free space is not %u", 2 * sectorsize); + if (cache->free_space_ctl->free_space != 2 * blocksize) { + test_err("cache free space is not %u", 2 * blocksize); return -EINVAL; } offset = btrfs_find_space_for_alloc(cache, - 0, 2 * sectorsize, 0, + 0, 2 * blocksize, 0, &max_extent_size); if (offset != SZ_32M) { test_err("failed to allocate %u, offset: %llu", - 2 * sectorsize, offset); + 2 * blocksize, offset); return -EINVAL; } @@ -830,7 +830,7 @@ static bool bytes_index_use_bitmap(struct btrfs_free_space_ctl *ctl, return true; } -static int test_bytes_index(struct btrfs_block_group *cache, u32 sectorsize) +static int test_bytes_index(struct btrfs_block_group *cache, u32 blocksize) { const struct btrfs_free_space_op test_free_space_ops = { .use_bitmap = bytes_index_use_bitmap, @@ -853,7 +853,7 @@ static int test_bytes_index(struct btrfs_block_group *cache, u32 sectorsize) test_err("couldn't add extent entry %d\n", ret); return ret; } - offset += bytes + sectorsize; + offset += bytes + blocksize; } for (node = rb_first_cached(&ctl->free_space_bytes), i = 9; node; @@ -870,7 +870,7 @@ static int test_bytes_index(struct btrfs_block_group *cache, u32 sectorsize) /* Now validate bitmaps do the correct thing. 
*/ btrfs_remove_free_space_cache(cache); for (i = 0; i < 2; i++) { - offset = i * BITS_PER_BITMAP * sectorsize; + offset = i * BITS_PER_BITMAP * blocksize; bytes = (i + 1) * SZ_1M; ret = test_add_free_space_entry(cache, offset, bytes, 1); if (ret) { @@ -895,26 +895,26 @@ static int test_bytes_index(struct btrfs_block_group *cache, u32 sectorsize) orig_free_space_ops = cache->free_space_ctl->op; cache->free_space_ctl->op = &test_free_space_ops; - ret = test_add_free_space_entry(cache, 0, sectorsize, 1); + ret = test_add_free_space_entry(cache, 0, blocksize, 1); if (ret) { test_err("couldn't add bitmap entry"); return ret; } - offset = BITS_PER_BITMAP * sectorsize; - ret = test_add_free_space_entry(cache, offset, sectorsize, 1); + offset = BITS_PER_BITMAP * blocksize; + ret = test_add_free_space_entry(cache, offset, blocksize, 1); if (ret) { test_err("couldn't add bitmap_entry"); return ret; } /* - * Now set a bunch of sectorsize extents in the first entry so it's + * Now set a bunch of blocksize extents in the first entry so it's * ->bytes is large. */ for (i = 2; i < 20; i += 2) { - offset = sectorsize * i; - ret = btrfs_add_free_space(cache, offset, sectorsize); + offset = blocksize * i; + ret = btrfs_add_free_space(cache, offset, blocksize); if (ret) { test_err("error populating sparse bitmap %d", ret); return ret; @@ -925,8 +925,8 @@ static int test_bytes_index(struct btrfs_block_group *cache, u32 sectorsize) * Now set a contiguous extent in the second bitmap so its * ->max_extent_size is larger than the first bitmaps. 
*/ - offset = (BITS_PER_BITMAP * sectorsize) + sectorsize; - ret = btrfs_add_free_space(cache, offset, sectorsize); + offset = (BITS_PER_BITMAP * blocksize) + blocksize; + ret = btrfs_add_free_space(cache, offset, blocksize); if (ret) { test_err("error adding contiguous extent %d", ret); return ret; @@ -938,22 +938,22 @@ static int test_bytes_index(struct btrfs_block_group *cache, u32 sectorsize) */ entry = rb_entry(rb_first_cached(&ctl->free_space_bytes), struct btrfs_free_space, bytes_index); - if (entry->bytes != (10 * sectorsize)) { + if (entry->bytes != (10 * blocksize)) { test_err("error, wrong entry in the first slot in bytes_index"); return -EINVAL; } max_extent_size = 0; - offset = btrfs_find_space_for_alloc(cache, cache->start, sectorsize * 3, + offset = btrfs_find_space_for_alloc(cache, cache->start, blocksize * 3, 0, &max_extent_size); if (offset != 0) { test_err("found space to alloc even though we don't have enough space"); return -EINVAL; } - if (max_extent_size != (2 * sectorsize)) { + if (max_extent_size != (2 * blocksize)) { test_err("got the wrong max_extent size %llu expected %llu", - max_extent_size, (unsigned long long)(2 * sectorsize)); + max_extent_size, (unsigned long long)(2 * blocksize)); return -EINVAL; } @@ -963,14 +963,14 @@ static int test_bytes_index(struct btrfs_block_group *cache, u32 sectorsize) */ entry = rb_entry(rb_first_cached(&ctl->free_space_bytes), struct btrfs_free_space, bytes_index); - if (entry->bytes != (2 * sectorsize)) { + if (entry->bytes != (2 * blocksize)) { test_err("error, the bytes index wasn't recalculated properly"); return -EINVAL; } - /* Add another sectorsize to re-arrange the tree back to ->bytes. */ - offset = (BITS_PER_BITMAP * sectorsize) - sectorsize; - ret = btrfs_add_free_space(cache, offset, sectorsize); + /* Add another blocksize to re-arrange the tree back to ->bytes. 
*/ + offset = (BITS_PER_BITMAP * blocksize) - blocksize; + ret = btrfs_add_free_space(cache, offset, blocksize); if (ret) { test_err("error adding extent to the sparse entry %d", ret); return ret; @@ -978,7 +978,7 @@ static int test_bytes_index(struct btrfs_block_group *cache, u32 sectorsize) entry = rb_entry(rb_first_cached(&ctl->free_space_bytes), struct btrfs_free_space, bytes_index); - if (entry->bytes != (11 * sectorsize)) { + if (entry->bytes != (11 * blocksize)) { test_err("error, wrong entry in the first slot in bytes_index"); return -EINVAL; } @@ -988,12 +988,12 @@ static int test_bytes_index(struct btrfs_block_group *cache, u32 sectorsize) * result in a re-arranging of the tree. */ max_extent_size = 0; - offset = btrfs_find_space_for_alloc(cache, cache->start, sectorsize * 2, + offset = btrfs_find_space_for_alloc(cache, cache->start, blocksize * 2, 0, &max_extent_size); - if (offset != (BITS_PER_BITMAP * sectorsize)) { + if (offset != (BITS_PER_BITMAP * blocksize)) { test_err("error, found %llu instead of %llu for our alloc", offset, - (unsigned long long)(BITS_PER_BITMAP * sectorsize)); + (unsigned long long)(BITS_PER_BITMAP * blocksize)); return -EINVAL; } @@ -1002,7 +1002,7 @@ static int test_bytes_index(struct btrfs_block_group *cache, u32 sectorsize) return 0; } -int btrfs_test_free_space_cache(u32 sectorsize, u32 nodesize) +int btrfs_test_free_space_cache(u32 blocksize, u32 nodesize) { struct btrfs_fs_info *fs_info; struct btrfs_block_group *cache; @@ -1010,7 +1010,7 @@ int btrfs_test_free_space_cache(u32 sectorsize, u32 nodesize) int ret = -ENOMEM; test_msg("running btrfs free space cache tests"); - fs_info = btrfs_alloc_dummy_fs_info(nodesize, sectorsize); + fs_info = btrfs_alloc_dummy_fs_info(nodesize, blocksize); if (!fs_info) { test_std_err(TEST_ALLOC_FS_INFO); return -ENOMEM; @@ -1022,7 +1022,7 @@ int btrfs_test_free_space_cache(u32 sectorsize, u32 nodesize) * alloc dummy block group whose size cross bitmaps. 
*/ cache = btrfs_alloc_dummy_block_group(fs_info, - BITS_PER_BITMAP * sectorsize + PAGE_SIZE); + BITS_PER_BITMAP * blocksize + PAGE_SIZE); if (!cache) { test_std_err(TEST_ALLOC_BLOCK_GROUP); btrfs_free_dummy_fs_info(fs_info); @@ -1044,17 +1044,17 @@ int btrfs_test_free_space_cache(u32 sectorsize, u32 nodesize) ret = test_extents(cache); if (ret) goto out; - ret = test_bitmaps(cache, sectorsize); + ret = test_bitmaps(cache, blocksize); if (ret) goto out; - ret = test_bitmaps_and_extents(cache, sectorsize); + ret = test_bitmaps_and_extents(cache, blocksize); if (ret) goto out; - ret = test_steal_space_from_bitmap_to_extent(cache, sectorsize); + ret = test_steal_space_from_bitmap_to_extent(cache, blocksize); if (ret) goto out; - ret = test_bytes_index(cache, sectorsize); + ret = test_bytes_index(cache, blocksize); out: btrfs_free_dummy_block_group(cache); btrfs_free_dummy_root(root); diff --git a/fs/btrfs/tests/free-space-tree-tests.c b/fs/btrfs/tests/free-space-tree-tests.c index b61972046feb..e804bcbb9a96 100644 --- a/fs/btrfs/tests/free-space-tree-tests.c +++ b/fs/btrfs/tests/free-space-tree-tests.c @@ -68,7 +68,7 @@ static int __check_free_space_extents(struct btrfs_trans_handle *trans, i++; } prev_bit = bit; - offset += fs_info->sectorsize; + offset += fs_info->blocksize; } } if (prev_bit == 1) { @@ -421,7 +421,7 @@ typedef int (*test_func_t)(struct btrfs_trans_handle *, struct btrfs_path *, u32 alignment); -static int run_test(test_func_t test_func, int bitmaps, u32 sectorsize, +static int run_test(test_func_t test_func, int bitmaps, u32 blocksize, u32 nodesize, u32 alignment) { struct btrfs_fs_info *fs_info; @@ -431,7 +431,7 @@ static int run_test(test_func_t test_func, int bitmaps, u32 sectorsize, struct btrfs_path *path = NULL; int ret; - fs_info = btrfs_alloc_dummy_fs_info(nodesize, sectorsize); + fs_info = btrfs_alloc_dummy_fs_info(nodesize, blocksize); if (!fs_info) { test_std_err(TEST_ALLOC_FS_INFO); ret = -ENOMEM; @@ -522,32 +522,32 @@ static int 
run_test(test_func_t test_func, int bitmaps, u32 sectorsize, return ret; } -static int run_test_both_formats(test_func_t test_func, u32 sectorsize, +static int run_test_both_formats(test_func_t test_func, u32 blocksize, u32 nodesize, u32 alignment) { int test_ret = 0; int ret; - ret = run_test(test_func, 0, sectorsize, nodesize, alignment); + ret = run_test(test_func, 0, blocksize, nodesize, alignment); if (ret) { test_err( - "%ps failed with extents, sectorsize=%u, nodesize=%u, alignment=%u", - test_func, sectorsize, nodesize, alignment); + "%ps failed with extents, blocksize=%u, nodesize=%u, alignment=%u", + test_func, blocksize, nodesize, alignment); test_ret = ret; } - ret = run_test(test_func, 1, sectorsize, nodesize, alignment); + ret = run_test(test_func, 1, blocksize, nodesize, alignment); if (ret) { test_err( - "%ps failed with bitmaps, sectorsize=%u, nodesize=%u, alignment=%u", - test_func, sectorsize, nodesize, alignment); + "%ps failed with bitmaps, blocksize=%u, nodesize=%u, alignment=%u", + test_func, blocksize, nodesize, alignment); test_ret = ret; } return test_ret; } -int btrfs_test_free_space_tree(u32 sectorsize, u32 nodesize) +int btrfs_test_free_space_tree(u32 blocksize, u32 nodesize) { test_func_t tests[] = { test_empty_block_group, @@ -574,12 +574,12 @@ int btrfs_test_free_space_tree(u32 sectorsize, u32 nodesize) for (i = 0; i < ARRAY_SIZE(tests); i++) { int ret; - ret = run_test_both_formats(tests[i], sectorsize, nodesize, - sectorsize); + ret = run_test_both_formats(tests[i], blocksize, nodesize, + blocksize); if (ret) test_ret = ret; - ret = run_test_both_formats(tests[i], sectorsize, nodesize, + ret = run_test_both_formats(tests[i], blocksize, nodesize, bitmap_alignment); if (ret) test_ret = ret; diff --git a/fs/btrfs/tests/inode-tests.c b/fs/btrfs/tests/inode-tests.c index 3ea3bc2225fe..e6f3a3241c5b 100644 --- a/fs/btrfs/tests/inode-tests.c +++ b/fs/btrfs/tests/inode-tests.c @@ -93,7 +93,7 @@ static void insert_inode_item_key(struct 
btrfs_root *root) * [69635-73731][ 73731 - 86019 ][86019-90115] * [ regular ][ hole but no extent][ regular ] */ -static void setup_file_extents(struct btrfs_root *root, u32 sectorsize) +static void setup_file_extents(struct btrfs_root *root, u32 blocksize) { int slot = 0; u64 disk_bytenr = SZ_1M; @@ -107,7 +107,7 @@ static void setup_file_extents(struct btrfs_root *root, u32 sectorsize) insert_extent(root, offset, 6, 6, 0, 0, 0, BTRFS_FILE_EXTENT_INLINE, 0, slot); slot++; - offset = sectorsize; + offset = blocksize; /* Now another hole */ insert_extent(root, offset, 4, 4, 0, 0, 0, BTRFS_FILE_EXTENT_REG, 0, @@ -116,106 +116,106 @@ static void setup_file_extents(struct btrfs_root *root, u32 sectorsize) offset += 4; /* Now for a regular extent */ - insert_extent(root, offset, sectorsize - 1, sectorsize - 1, 0, - disk_bytenr, sectorsize - 1, BTRFS_FILE_EXTENT_REG, 0, slot); + insert_extent(root, offset, blocksize - 1, blocksize - 1, 0, + disk_bytenr, blocksize - 1, BTRFS_FILE_EXTENT_REG, 0, slot); slot++; - disk_bytenr += sectorsize; - offset += sectorsize - 1; + disk_bytenr += blocksize; + offset += blocksize - 1; /* * Now for 3 extents that were split from a hole punch so we test * offsets properly. 
*/ - insert_extent(root, offset, sectorsize, 4 * sectorsize, 0, disk_bytenr, - 4 * sectorsize, BTRFS_FILE_EXTENT_REG, 0, slot); + insert_extent(root, offset, blocksize, 4 * blocksize, 0, disk_bytenr, + 4 * blocksize, BTRFS_FILE_EXTENT_REG, 0, slot); slot++; - offset += sectorsize; - insert_extent(root, offset, sectorsize, sectorsize, 0, 0, 0, + offset += blocksize; + insert_extent(root, offset, blocksize, blocksize, 0, 0, 0, BTRFS_FILE_EXTENT_REG, 0, slot); slot++; - offset += sectorsize; - insert_extent(root, offset, 2 * sectorsize, 4 * sectorsize, - 2 * sectorsize, disk_bytenr, 4 * sectorsize, + offset += blocksize; + insert_extent(root, offset, 2 * blocksize, 4 * blocksize, + 2 * blocksize, disk_bytenr, 4 * blocksize, BTRFS_FILE_EXTENT_REG, 0, slot); slot++; - offset += 2 * sectorsize; - disk_bytenr += 4 * sectorsize; + offset += 2 * blocksize; + disk_bytenr += 4 * blocksize; /* Now for a unwritten prealloc extent */ - insert_extent(root, offset, sectorsize, sectorsize, 0, disk_bytenr, - sectorsize, BTRFS_FILE_EXTENT_PREALLOC, 0, slot); + insert_extent(root, offset, blocksize, blocksize, 0, disk_bytenr, + blocksize, BTRFS_FILE_EXTENT_PREALLOC, 0, slot); slot++; - offset += sectorsize; + offset += blocksize; /* * We want to jack up disk_bytenr a little more so the em stuff doesn't * merge our records. */ - disk_bytenr += 2 * sectorsize; + disk_bytenr += 2 * blocksize; /* * Now for a partially written prealloc extent, basically the same as * the hole punch example above. Ram_bytes never changes when you mark * extents written btw. 
*/ - insert_extent(root, offset, sectorsize, 4 * sectorsize, 0, disk_bytenr, - 4 * sectorsize, BTRFS_FILE_EXTENT_PREALLOC, 0, slot); + insert_extent(root, offset, blocksize, 4 * blocksize, 0, disk_bytenr, + 4 * blocksize, BTRFS_FILE_EXTENT_PREALLOC, 0, slot); slot++; - offset += sectorsize; - insert_extent(root, offset, sectorsize, 4 * sectorsize, sectorsize, - disk_bytenr, 4 * sectorsize, BTRFS_FILE_EXTENT_REG, 0, + offset += blocksize; + insert_extent(root, offset, blocksize, 4 * blocksize, blocksize, + disk_bytenr, 4 * blocksize, BTRFS_FILE_EXTENT_REG, 0, slot); slot++; - offset += sectorsize; - insert_extent(root, offset, 2 * sectorsize, 4 * sectorsize, - 2 * sectorsize, disk_bytenr, 4 * sectorsize, + offset += blocksize; + insert_extent(root, offset, 2 * blocksize, 4 * blocksize, + 2 * blocksize, disk_bytenr, 4 * blocksize, BTRFS_FILE_EXTENT_PREALLOC, 0, slot); slot++; - offset += 2 * sectorsize; - disk_bytenr += 4 * sectorsize; + offset += 2 * blocksize; + disk_bytenr += 4 * blocksize; /* Now a normal compressed extent */ - insert_extent(root, offset, 2 * sectorsize, 2 * sectorsize, 0, - disk_bytenr, sectorsize, BTRFS_FILE_EXTENT_REG, + insert_extent(root, offset, 2 * blocksize, 2 * blocksize, 0, + disk_bytenr, blocksize, BTRFS_FILE_EXTENT_REG, BTRFS_COMPRESS_ZLIB, slot); slot++; - offset += 2 * sectorsize; + offset += 2 * blocksize; /* No merges */ - disk_bytenr += 2 * sectorsize; + disk_bytenr += 2 * blocksize; /* Now a split compressed extent */ - insert_extent(root, offset, sectorsize, 4 * sectorsize, 0, disk_bytenr, - sectorsize, BTRFS_FILE_EXTENT_REG, + insert_extent(root, offset, blocksize, 4 * blocksize, 0, disk_bytenr, + blocksize, BTRFS_FILE_EXTENT_REG, BTRFS_COMPRESS_ZLIB, slot); slot++; - offset += sectorsize; - insert_extent(root, offset, sectorsize, sectorsize, 0, - disk_bytenr + sectorsize, sectorsize, + offset += blocksize; + insert_extent(root, offset, blocksize, blocksize, 0, + disk_bytenr + blocksize, blocksize, BTRFS_FILE_EXTENT_REG, 0, 
slot); slot++; - offset += sectorsize; - insert_extent(root, offset, 2 * sectorsize, 4 * sectorsize, - 2 * sectorsize, disk_bytenr, sectorsize, + offset += blocksize; + insert_extent(root, offset, 2 * blocksize, 4 * blocksize, + 2 * blocksize, disk_bytenr, blocksize, BTRFS_FILE_EXTENT_REG, BTRFS_COMPRESS_ZLIB, slot); slot++; - offset += 2 * sectorsize; - disk_bytenr += 2 * sectorsize; + offset += 2 * blocksize; + disk_bytenr += 2 * blocksize; /* Now extents that have a hole but no hole extent */ - insert_extent(root, offset, sectorsize, sectorsize, 0, disk_bytenr, - sectorsize, BTRFS_FILE_EXTENT_REG, 0, slot); + insert_extent(root, offset, blocksize, blocksize, 0, disk_bytenr, + blocksize, BTRFS_FILE_EXTENT_REG, 0, slot); slot++; - offset += 4 * sectorsize; - disk_bytenr += sectorsize; - insert_extent(root, offset, sectorsize, sectorsize, 0, disk_bytenr, - sectorsize, BTRFS_FILE_EXTENT_REG, 0, slot); + offset += 4 * blocksize; + disk_bytenr += blocksize; + insert_extent(root, offset, blocksize, blocksize, 0, disk_bytenr, + blocksize, BTRFS_FILE_EXTENT_REG, 0, slot); } static u32 prealloc_only = 0; static u32 compressed_only = 0; static u32 vacancy_only = 0; -static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) +static noinline int test_btrfs_get_extent(u32 blocksize, u32 nodesize) { struct btrfs_fs_info *fs_info = NULL; struct inode *inode = NULL; @@ -234,7 +234,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) return ret; } - fs_info = btrfs_alloc_dummy_fs_info(nodesize, sectorsize); + fs_info = btrfs_alloc_dummy_fs_info(nodesize, blocksize); if (!fs_info) { test_std_err(TEST_ALLOC_FS_INFO); goto out; @@ -258,7 +258,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) /* First with no extents */ BTRFS_I(inode)->root = root; - em = btrfs_get_extent(BTRFS_I(inode), NULL, 0, sectorsize); + em = btrfs_get_extent(BTRFS_I(inode), NULL, 0, blocksize); if (IS_ERR(em)) { em = NULL; test_err("got an error 
when we shouldn't have"); @@ -276,7 +276,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) * setup_file_extents, so if you change anything there you need to * update the comment and update the expected values below. */ - setup_file_extents(root, sectorsize); + setup_file_extents(root, blocksize); em = btrfs_get_extent(BTRFS_I(inode), NULL, 0, (u64)-1); if (IS_ERR(em)) { @@ -289,7 +289,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) } /* - * For inline extent, we always round up the em to sectorsize, as + * For inline extent, we always round up the em to blocksize, as * they are either: * * a) a hidden hole @@ -298,10 +298,10 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) * b) a file extent with unaligned bytenr * Tree checker will reject it. */ - if (em->start != 0 || em->len != sectorsize) { + if (em->start != 0 || em->len != blocksize) { test_err( "unexpected extent wanted start 0 len %u, got start %llu len %llu", - sectorsize, em->start, em->len); + blocksize, em->start, em->len); goto out; } if (em->flags != 0) { @@ -316,7 +316,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) offset = em->start + em->len; free_extent_map(em); - em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, sectorsize); + em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, blocksize); if (IS_ERR(em)) { test_err("got an error when we shouldn't have"); goto out; @@ -339,7 +339,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) free_extent_map(em); /* Regular extent */ - em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, sectorsize); + em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, blocksize); if (IS_ERR(em)) { test_err("got an error when we shouldn't have"); goto out; @@ -348,7 +348,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) test_err("expected a real extent, got %llu", em->disk_bytenr); goto out; } - if (em->start != 
offset || em->len != sectorsize - 1) { + if (em->start != offset || em->len != blocksize - 1) { test_err( "unexpected extent wanted start %llu len 4095, got start %llu len %llu", offset, em->start, em->len); @@ -366,7 +366,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) free_extent_map(em); /* The next 3 are split extents */ - em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, sectorsize); + em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, blocksize); if (IS_ERR(em)) { test_err("got an error when we shouldn't have"); goto out; @@ -375,10 +375,10 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) test_err("expected a real extent, got %llu", em->disk_bytenr); goto out; } - if (em->start != offset || em->len != sectorsize) { + if (em->start != offset || em->len != blocksize) { test_err( "unexpected extent start %llu len %u, got start %llu len %llu", - offset, sectorsize, em->start, em->len); + offset, blocksize, em->start, em->len); goto out; } if (em->flags != 0) { @@ -394,7 +394,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) offset = em->start + em->len; free_extent_map(em); - em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, sectorsize); + em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, blocksize); if (IS_ERR(em)) { test_err("got an error when we shouldn't have"); goto out; @@ -403,10 +403,10 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) test_err("expected a hole, got %llu", em->disk_bytenr); goto out; } - if (em->start != offset || em->len != sectorsize) { + if (em->start != offset || em->len != blocksize) { test_err( "unexpected extent wanted start %llu len %u, got start %llu len %llu", - offset, sectorsize, em->start, em->len); + offset, blocksize, em->start, em->len); goto out; } if (em->flags != 0) { @@ -416,7 +416,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) offset = em->start + em->len; free_extent_map(em); 
- em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, sectorsize); + em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, blocksize); if (IS_ERR(em)) { test_err("got an error when we shouldn't have"); goto out; @@ -425,10 +425,10 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) test_err("expected a real extent, got %llu", em->disk_bytenr); goto out; } - if (em->start != offset || em->len != 2 * sectorsize) { + if (em->start != offset || em->len != 2 * blocksize) { test_err( "unexpected extent wanted start %llu len %u, got start %llu len %llu", - offset, 2 * sectorsize, em->start, em->len); + offset, 2 * blocksize, em->start, em->len); goto out; } if (em->flags != 0) { @@ -450,7 +450,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) free_extent_map(em); /* Prealloc extent */ - em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, sectorsize); + em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, blocksize); if (IS_ERR(em)) { test_err("got an error when we shouldn't have"); goto out; @@ -459,10 +459,10 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) test_err("expected a real extent, got %llu", em->disk_bytenr); goto out; } - if (em->start != offset || em->len != sectorsize) { + if (em->start != offset || em->len != blocksize) { test_err( "unexpected extent wanted start %llu len %u, got start %llu len %llu", - offset, sectorsize, em->start, em->len); + offset, blocksize, em->start, em->len); goto out; } if (em->flags != prealloc_only) { @@ -478,7 +478,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) free_extent_map(em); /* The next 3 are a half written prealloc extent */ - em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, sectorsize); + em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, blocksize); if (IS_ERR(em)) { test_err("got an error when we shouldn't have"); goto out; @@ -487,10 +487,10 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 
nodesize) test_err("expected a real extent, got %llu", em->disk_bytenr); goto out; } - if (em->start != offset || em->len != sectorsize) { + if (em->start != offset || em->len != blocksize) { test_err( "unexpected extent wanted start %llu len %u, got start %llu len %llu", - offset, sectorsize, em->start, em->len); + offset, blocksize, em->start, em->len); goto out; } if (em->flags != prealloc_only) { @@ -507,7 +507,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) offset = em->start + em->len; free_extent_map(em); - em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, sectorsize); + em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, blocksize); if (IS_ERR(em)) { test_err("got an error when we shouldn't have"); goto out; @@ -516,10 +516,10 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) test_err("expected a real extent, got %llu", em->disk_bytenr); goto out; } - if (em->start != offset || em->len != sectorsize) { + if (em->start != offset || em->len != blocksize) { test_err( "unexpected extent wanted start %llu len %u, got start %llu len %llu", - offset, sectorsize, em->start, em->len); + offset, blocksize, em->start, em->len); goto out; } if (em->flags != 0) { @@ -539,7 +539,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) offset = em->start + em->len; free_extent_map(em); - em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, sectorsize); + em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, blocksize); if (IS_ERR(em)) { test_err("got an error when we shouldn't have"); goto out; @@ -548,10 +548,10 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) test_err("expected a real extent, got %llu", em->disk_bytenr); goto out; } - if (em->start != offset || em->len != 2 * sectorsize) { + if (em->start != offset || em->len != 2 * blocksize) { test_err( "unexpected extent wanted start %llu len %u, got start %llu len %llu", - offset, 2 * sectorsize, em->start, em->len); + 
offset, 2 * blocksize, em->start, em->len); goto out; } if (em->flags != prealloc_only) { @@ -573,7 +573,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) free_extent_map(em); /* Now for the compressed extent */ - em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, sectorsize); + em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, blocksize); if (IS_ERR(em)) { test_err("got an error when we shouldn't have"); goto out; @@ -582,10 +582,10 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) test_err("expected a real extent, got %llu", em->disk_bytenr); goto out; } - if (em->start != offset || em->len != 2 * sectorsize) { + if (em->start != offset || em->len != 2 * blocksize) { test_err( "unexpected extent wanted start %llu len %u, got start %llu len %llu", - offset, 2 * sectorsize, em->start, em->len); + offset, 2 * blocksize, em->start, em->len); goto out; } if (em->flags != compressed_only) { @@ -606,7 +606,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) free_extent_map(em); /* Split compressed extent */ - em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, sectorsize); + em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, blocksize); if (IS_ERR(em)) { test_err("got an error when we shouldn't have"); goto out; @@ -615,10 +615,10 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) test_err("expected a real extent, got %llu", em->disk_bytenr); goto out; } - if (em->start != offset || em->len != sectorsize) { + if (em->start != offset || em->len != blocksize) { test_err( "unexpected extent wanted start %llu len %u, got start %llu len %llu", - offset, sectorsize, em->start, em->len); + offset, blocksize, em->start, em->len); goto out; } if (em->flags != compressed_only) { @@ -640,7 +640,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) offset = em->start + em->len; free_extent_map(em); - em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, 
sectorsize); + em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, blocksize); if (IS_ERR(em)) { test_err("got an error when we shouldn't have"); goto out; @@ -649,10 +649,10 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) test_err("expected a real extent, got %llu", em->disk_bytenr); goto out; } - if (em->start != offset || em->len != sectorsize) { + if (em->start != offset || em->len != blocksize) { test_err( "unexpected extent wanted start %llu len %u, got start %llu len %llu", - offset, sectorsize, em->start, em->len); + offset, blocksize, em->start, em->len); goto out; } if (em->flags != 0) { @@ -666,7 +666,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) offset = em->start + em->len; free_extent_map(em); - em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, sectorsize); + em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, blocksize); if (IS_ERR(em)) { test_err("got an error when we shouldn't have"); goto out; @@ -676,10 +676,10 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) disk_bytenr, extent_map_block_start(em)); goto out; } - if (em->start != offset || em->len != 2 * sectorsize) { + if (em->start != offset || em->len != 2 * blocksize) { test_err( "unexpected extent wanted start %llu len %u, got start %llu len %llu", - offset, 2 * sectorsize, em->start, em->len); + offset, 2 * blocksize, em->start, em->len); goto out; } if (em->flags != compressed_only) { @@ -701,7 +701,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) free_extent_map(em); /* A hole between regular extents but no hole extent */ - em = btrfs_get_extent(BTRFS_I(inode), NULL, offset + 6, sectorsize); + em = btrfs_get_extent(BTRFS_I(inode), NULL, offset + 6, blocksize); if (IS_ERR(em)) { test_err("got an error when we shouldn't have"); goto out; @@ -710,10 +710,10 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) test_err("expected a real extent, got %llu", 
em->disk_bytenr); goto out; } - if (em->start != offset || em->len != sectorsize) { + if (em->start != offset || em->len != blocksize) { test_err( "unexpected extent wanted start %llu len %u, got start %llu len %llu", - offset, sectorsize, em->start, em->len); + offset, blocksize, em->start, em->len); goto out; } if (em->flags != 0) { @@ -741,10 +741,10 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) * length of the actual hole, if this changes we'll have to change this * test. */ - if (em->start != offset || em->len != 3 * sectorsize) { + if (em->start != offset || em->len != 3 * blocksize) { test_err( "unexpected extent wanted start %llu len %u, got start %llu len %llu", - offset, 3 * sectorsize, em->start, em->len); + offset, 3 * blocksize, em->start, em->len); goto out; } if (em->flags != vacancy_only) { @@ -759,7 +759,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) offset = em->start + em->len; free_extent_map(em); - em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, sectorsize); + em = btrfs_get_extent(BTRFS_I(inode), NULL, offset, blocksize); if (IS_ERR(em)) { test_err("got an error when we shouldn't have"); goto out; @@ -768,10 +768,10 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) test_err("expected a real extent, got %llu", em->disk_bytenr); goto out; } - if (em->start != offset || em->len != sectorsize) { + if (em->start != offset || em->len != blocksize) { test_err( "unexpected extent wanted start %llu len %u, got start %llu len %llu", - offset, sectorsize, em->start, em->len); + offset, blocksize, em->start, em->len); goto out; } if (em->flags != 0) { @@ -792,7 +792,7 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) return ret; } -static int test_hole_first(u32 sectorsize, u32 nodesize) +static int test_hole_first(u32 blocksize, u32 nodesize) { struct btrfs_fs_info *fs_info = NULL; struct inode *inode = NULL; @@ -808,7 +808,7 @@ static int 
test_hole_first(u32 sectorsize, u32 nodesize) return ret; } - fs_info = btrfs_alloc_dummy_fs_info(nodesize, sectorsize); + fs_info = btrfs_alloc_dummy_fs_info(nodesize, blocksize); if (!fs_info) { test_std_err(TEST_ALLOC_FS_INFO); goto out; @@ -836,9 +836,9 @@ static int test_hole_first(u32 sectorsize, u32 nodesize) * btrfs_get_extent. */ insert_inode_item_key(root); - insert_extent(root, sectorsize, sectorsize, sectorsize, 0, sectorsize, - sectorsize, BTRFS_FILE_EXTENT_REG, 0, 1); - em = btrfs_get_extent(BTRFS_I(inode), NULL, 0, 2 * sectorsize); + insert_extent(root, blocksize, blocksize, blocksize, 0, blocksize, + blocksize, BTRFS_FILE_EXTENT_REG, 0, 1); + em = btrfs_get_extent(BTRFS_I(inode), NULL, 0, 2 * blocksize); if (IS_ERR(em)) { test_err("got an error when we shouldn't have"); goto out; @@ -847,10 +847,10 @@ static int test_hole_first(u32 sectorsize, u32 nodesize) test_err("expected a hole, got %llu", em->disk_bytenr); goto out; } - if (em->start != 0 || em->len != sectorsize) { + if (em->start != 0 || em->len != blocksize) { test_err( "unexpected extent wanted start 0 len %u, got start %llu len %llu", - sectorsize, em->start, em->len); + blocksize, em->start, em->len); goto out; } if (em->flags != vacancy_only) { @@ -860,19 +860,19 @@ static int test_hole_first(u32 sectorsize, u32 nodesize) } free_extent_map(em); - em = btrfs_get_extent(BTRFS_I(inode), NULL, sectorsize, 2 * sectorsize); + em = btrfs_get_extent(BTRFS_I(inode), NULL, blocksize, 2 * blocksize); if (IS_ERR(em)) { test_err("got an error when we shouldn't have"); goto out; } - if (extent_map_block_start(em) != sectorsize) { + if (extent_map_block_start(em) != blocksize) { test_err("expected a real extent, got %llu", extent_map_block_start(em)); goto out; } - if (em->start != sectorsize || em->len != sectorsize) { + if (em->start != blocksize || em->len != blocksize) { test_err( "unexpected extent wanted start %u len %u, got start %llu len %llu", - sectorsize, sectorsize, em->start, em->len); + 
blocksize, blocksize, em->start, em->len); goto out; } if (em->flags != 0) { @@ -890,7 +890,7 @@ static int test_hole_first(u32 sectorsize, u32 nodesize) return ret; } -static int test_extent_accounting(u32 sectorsize, u32 nodesize) +static int test_extent_accounting(u32 blocksize, u32 nodesize) { struct btrfs_fs_info *fs_info = NULL; struct inode *inode = NULL; @@ -905,7 +905,7 @@ static int test_extent_accounting(u32 sectorsize, u32 nodesize) return ret; } - fs_info = btrfs_alloc_dummy_fs_info(nodesize, sectorsize); + fs_info = btrfs_alloc_dummy_fs_info(nodesize, blocksize); if (!fs_info) { test_std_err(TEST_ALLOC_FS_INFO); goto out; @@ -933,9 +933,9 @@ static int test_extent_accounting(u32 sectorsize, u32 nodesize) goto out; } - /* [BTRFS_MAX_EXTENT_SIZE][sectorsize] */ + /* [BTRFS_MAX_EXTENT_SIZE][blocksize] */ ret = btrfs_set_extent_delalloc(BTRFS_I(inode), BTRFS_MAX_EXTENT_SIZE, - BTRFS_MAX_EXTENT_SIZE + sectorsize - 1, + BTRFS_MAX_EXTENT_SIZE + blocksize - 1, 0, NULL); if (ret) { test_err("btrfs_set_extent_delalloc returned %d", ret); @@ -948,10 +948,10 @@ static int test_extent_accounting(u32 sectorsize, u32 nodesize) goto out; } - /* [BTRFS_MAX_EXTENT_SIZE/2][sectorsize HOLE][the rest] */ + /* [BTRFS_MAX_EXTENT_SIZE/2][blocksize HOLE][the rest] */ ret = clear_extent_bit(&BTRFS_I(inode)->io_tree, BTRFS_MAX_EXTENT_SIZE >> 1, - (BTRFS_MAX_EXTENT_SIZE >> 1) + sectorsize - 1, + (BTRFS_MAX_EXTENT_SIZE >> 1) + blocksize - 1, EXTENT_DELALLOC | EXTENT_DELALLOC_NEW | EXTENT_UPTODATE, NULL); if (ret) { @@ -965,10 +965,10 @@ static int test_extent_accounting(u32 sectorsize, u32 nodesize) goto out; } - /* [BTRFS_MAX_EXTENT_SIZE][sectorsize] */ + /* [BTRFS_MAX_EXTENT_SIZE][blocksize] */ ret = btrfs_set_extent_delalloc(BTRFS_I(inode), BTRFS_MAX_EXTENT_SIZE >> 1, (BTRFS_MAX_EXTENT_SIZE >> 1) - + sectorsize - 1, + + blocksize - 1, 0, NULL); if (ret) { test_err("btrfs_set_extent_delalloc returned %d", ret); @@ -982,11 +982,11 @@ static int test_extent_accounting(u32 
sectorsize, u32 nodesize) } /* - * [BTRFS_MAX_EXTENT_SIZE+sectorsize][sectorsize HOLE][BTRFS_MAX_EXTENT_SIZE+sectorsize] + * [BTRFS_MAX_EXTENT_SIZE+blocksize][blocksize HOLE][BTRFS_MAX_EXTENT_SIZE+blocksize] */ ret = btrfs_set_extent_delalloc(BTRFS_I(inode), - BTRFS_MAX_EXTENT_SIZE + 2 * sectorsize, - (BTRFS_MAX_EXTENT_SIZE << 1) + 3 * sectorsize - 1, + BTRFS_MAX_EXTENT_SIZE + 2 * blocksize, + (BTRFS_MAX_EXTENT_SIZE << 1) + 3 * blocksize - 1, 0, NULL); if (ret) { test_err("btrfs_set_extent_delalloc returned %d", ret); @@ -1000,11 +1000,11 @@ static int test_extent_accounting(u32 sectorsize, u32 nodesize) } /* - * [BTRFS_MAX_EXTENT_SIZE+sectorsize][sectorsize][BTRFS_MAX_EXTENT_SIZE+sectorsize] + * [BTRFS_MAX_EXTENT_SIZE+blocksize][blocksize][BTRFS_MAX_EXTENT_SIZE+blocksize] */ ret = btrfs_set_extent_delalloc(BTRFS_I(inode), - BTRFS_MAX_EXTENT_SIZE + sectorsize, - BTRFS_MAX_EXTENT_SIZE + 2 * sectorsize - 1, 0, NULL); + BTRFS_MAX_EXTENT_SIZE + blocksize, + BTRFS_MAX_EXTENT_SIZE + 2 * blocksize - 1, 0, NULL); if (ret) { test_err("btrfs_set_extent_delalloc returned %d", ret); goto out; @@ -1018,8 +1018,8 @@ static int test_extent_accounting(u32 sectorsize, u32 nodesize) /* [BTRFS_MAX_EXTENT_SIZE+4k][4K HOLE][BTRFS_MAX_EXTENT_SIZE+4k] */ ret = clear_extent_bit(&BTRFS_I(inode)->io_tree, - BTRFS_MAX_EXTENT_SIZE + sectorsize, - BTRFS_MAX_EXTENT_SIZE + 2 * sectorsize - 1, + BTRFS_MAX_EXTENT_SIZE + blocksize, + BTRFS_MAX_EXTENT_SIZE + 2 * blocksize - 1, EXTENT_DELALLOC | EXTENT_DELALLOC_NEW | EXTENT_UPTODATE, NULL); if (ret) { @@ -1038,8 +1038,8 @@ static int test_extent_accounting(u32 sectorsize, u32 nodesize) * might fail and I'd rather satisfy my paranoia at this point. 
 */
	ret = btrfs_set_extent_delalloc(BTRFS_I(inode),
-			BTRFS_MAX_EXTENT_SIZE + sectorsize,
-			BTRFS_MAX_EXTENT_SIZE + 2 * sectorsize - 1, 0, NULL);
+			BTRFS_MAX_EXTENT_SIZE + blocksize,
+			BTRFS_MAX_EXTENT_SIZE + 2 * blocksize - 1, 0, NULL);
	if (ret) {
		test_err("btrfs_set_extent_delalloc returned %d", ret);
		goto out;
@@ -1077,7 +1077,7 @@ static int test_extent_accounting(u32 sectorsize, u32 nodesize)
	return ret;
 }

-int btrfs_test_inodes(u32 sectorsize, u32 nodesize)
+int btrfs_test_inodes(u32 blocksize, u32 nodesize)
 {
	int ret;

@@ -1086,11 +1086,11 @@ int btrfs_test_inodes(u32 sectorsize, u32 nodesize)
	compressed_only |= EXTENT_FLAG_COMPRESS_ZLIB;
	prealloc_only |= EXTENT_FLAG_PREALLOC;

-	ret = test_btrfs_get_extent(sectorsize, nodesize);
+	ret = test_btrfs_get_extent(blocksize, nodesize);
	if (ret)
		return ret;

-	ret = test_hole_first(sectorsize, nodesize);
+	ret = test_hole_first(blocksize, nodesize);
	if (ret)
		return ret;

-	return test_extent_accounting(sectorsize, nodesize);
+	return test_extent_accounting(blocksize, nodesize);
 }
diff --git a/fs/btrfs/tests/qgroup-tests.c b/fs/btrfs/tests/qgroup-tests.c
index 3fc8dc3fd980..533fd318d848 100644
--- a/fs/btrfs/tests/qgroup-tests.c
+++ b/fs/btrfs/tests/qgroup-tests.c
@@ -203,7 +203,7 @@ static int remove_extent_ref(struct btrfs_root *root, u64 bytenr,
 }

 static int test_no_shared_qgroup(struct btrfs_root *root,
-				 u32 sectorsize, u32 nodesize)
+				 u32 blocksize, u32 nodesize)
 {
	struct btrfs_backref_walk_ctx ctx = { 0 };
	struct btrfs_trans_handle trans;
@@ -315,7 +315,7 @@ static int test_no_shared_qgroup(struct btrfs_root *root,
 * adjusted properly.
 */
 static int test_multiple_refs(struct btrfs_root *root,
-			      u32 sectorsize, u32 nodesize)
+			      u32 blocksize, u32 nodesize)
 {
	struct btrfs_backref_walk_ctx ctx = { 0 };
	struct btrfs_trans_handle trans;
@@ -468,14 +468,14 @@ static int test_multiple_refs(struct btrfs_root *root,
	return 0;
 }

-int btrfs_test_qgroups(u32 sectorsize, u32 nodesize)
+int btrfs_test_qgroups(u32 blocksize, u32 nodesize)
 {
	struct btrfs_fs_info *fs_info = NULL;
	struct btrfs_root *root;
	struct btrfs_root *tmp_root;
	int ret = 0;

-	fs_info = btrfs_alloc_dummy_fs_info(nodesize, sectorsize);
+	fs_info = btrfs_alloc_dummy_fs_info(nodesize, blocksize);
	if (!fs_info) {
		test_std_err(TEST_ALLOC_FS_INFO);
		return -ENOMEM;
@@ -548,10 +548,10 @@ int btrfs_test_qgroups(u32 sectorsize, u32 nodesize)
	btrfs_put_root(tmp_root);

	test_msg("running qgroup tests");
-	ret = test_no_shared_qgroup(root, sectorsize, nodesize);
+	ret = test_no_shared_qgroup(root, blocksize, nodesize);
	if (ret)
		goto out;
-	ret = test_multiple_refs(root, sectorsize, nodesize);
+	ret = test_multiple_refs(root, blocksize, nodesize);
out:
	btrfs_free_dummy_root(root);
	btrfs_free_dummy_fs_info(fs_info);
diff --git a/fs/btrfs/tests/raid-stripe-tree-tests.c b/fs/btrfs/tests/raid-stripe-tree-tests.c
index 30f17eb7b6a8..825cc356e204 100644
--- a/fs/btrfs/tests/raid-stripe-tree-tests.c
+++ b/fs/btrfs/tests/raid-stripe-tree-tests.c
@@ -458,14 +458,14 @@ static const test_func_t tests[] = {
	test_front_delete,
 };

-static int run_test(test_func_t test, u32 sectorsize, u32 nodesize)
+static int run_test(test_func_t test, u32 blocksize, u32 nodesize)
 {
	struct btrfs_trans_handle trans;
	struct btrfs_fs_info *fs_info;
	struct btrfs_root *root = NULL;
	int ret;

-	fs_info = btrfs_alloc_dummy_fs_info(sectorsize, nodesize);
+	fs_info = btrfs_alloc_dummy_fs_info(blocksize, nodesize);
	if (!fs_info) {
		test_std_err(TEST_ALLOC_FS_INFO);
		ret = -ENOMEM;
@@ -520,13 +520,13 @@ static int run_test(test_func_t test, u32 sectorsize, u32 nodesize)
	return ret;
 }

-int btrfs_test_raid_stripe_tree(u32 sectorsize, u32 nodesize)
+int btrfs_test_raid_stripe_tree(u32 blocksize, u32 nodesize)
 {
	int ret = 0;

	test_msg("running raid-stripe-tree tests");
	for (int i = 0; i < ARRAY_SIZE(tests); i++) {
-		ret = run_test(tests[i], sectorsize, nodesize);
+		ret = run_test(tests[i], blocksize, nodesize);
		if (ret) {
			test_err("test-case %ps failed with %d\n", tests[i], ret);
			goto out;

From patchwork Wed Dec 18 09:41:32 2024
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 16/18] btrfs: finish the rename of btrfs_fs_info::sectorsize
Date: Wed, 18 Dec 2024 20:11:32 +1030
Message-ID: <11e2dc8b9f5745bec215d5c0fc38e35b00d65a74.1734514696.git.wqu@suse.com>
X-Mailer: git-send-email 2.47.1

Now that every user of btrfs_fs_info::sectorsize has been migrated to
btrfs_fs_info::blocksize, we can finish the rename by removing the
@sectorsize/@sectorsize_bits/@sectors_per_page aliases.

Signed-off-by: Qu Wenruo
---
 fs/btrfs/fs.h | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/fs/btrfs/fs.h b/fs/btrfs/fs.h
index 9f8324ae3800..e2aafdc50498 100644
--- a/fs/btrfs/fs.h
+++ b/fs/btrfs/fs.h
@@ -797,15 +797,9 @@ struct btrfs_fs_info {
	/* Cached block sizes */
	u32 nodesize;
-	union {
-		u32 sectorsize;
-		u32 blocksize;
-	};
+	u32 blocksize;
	/* ilog2 of blocksize, use to avoid 64bit division */
-	union {
-		u32 sectorsize_bits;
-		u32 blocksize_bits;
-	};
+	u32 blocksize_bits;
	u32 csum_size;
	u32 csums_per_leaf;
	u32 stripesize;

From patchwork Wed Dec 18 09:41:33 2024
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 17/18] btrfs: migrate btrfs_super_block::sectorsize to blocksize
Date: Wed, 18 Dec 2024 20:11:33 +1030
X-Mailer: git-send-email 2.47.1

This is the rename of the on-disk format btrfs_super_block, which also
affects the accessors and a few callers.
To keep compatibility for old programs which may still access
btrfs_super_block::sectorsize, use a union so @blocksize and @sectorsize
both refer to the same value.

Signed-off-by: Qu Wenruo
---
 fs/btrfs/accessors.h            |  4 ++--
 fs/btrfs/disk-io.c              |  4 ++--
 include/uapi/linux/btrfs_tree.h | 13 +++++++++++--
 3 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/fs/btrfs/accessors.h b/fs/btrfs/accessors.h
index a796ec3fcb67..ecafbd6262cc 100644
--- a/fs/btrfs/accessors.h
+++ b/fs/btrfs/accessors.h
@@ -873,8 +873,8 @@ BTRFS_SETGET_STACK_FUNCS(super_total_bytes, struct btrfs_super_block,
			 total_bytes, 64);
 BTRFS_SETGET_STACK_FUNCS(super_bytes_used, struct btrfs_super_block,
			 bytes_used, 64);
-BTRFS_SETGET_STACK_FUNCS(super_sectorsize, struct btrfs_super_block,
-			 sectorsize, 32);
+BTRFS_SETGET_STACK_FUNCS(super_blocksize, struct btrfs_super_block,
+			 blocksize, 32);
 BTRFS_SETGET_STACK_FUNCS(super_nodesize, struct btrfs_super_block,
			 nodesize, 32);
 BTRFS_SETGET_STACK_FUNCS(super_stripesize, struct btrfs_super_block,
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index d3d2c9e2356a..9e6a1ea507d7 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2341,7 +2341,7 @@ int btrfs_validate_super(const struct btrfs_fs_info *fs_info,
			 const struct btrfs_super_block *sb, int mirror_num)
 {
	u64 nodesize = btrfs_super_nodesize(sb);
-	u64 blocksize = btrfs_super_sectorsize(sb);
+	u64 blocksize = btrfs_super_blocksize(sb);
	int ret = 0;
	const bool ignore_flags = btrfs_test_opt(fs_info, IGNORESUPERFLAGS);
@@ -3310,7 +3310,7 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
	/* Set up fs_info before parsing mount options */
	nodesize = btrfs_super_nodesize(disk_super);
-	blocksize = btrfs_super_sectorsize(disk_super);
+	blocksize = btrfs_super_blocksize(disk_super);
	stripesize = blocksize;
	fs_info->dirty_metadata_batch = nodesize * (1 + ilog2(nr_cpu_ids));
	fs_info->delalloc_batch = blocksize * 512 * (1 + ilog2(nr_cpu_ids));
diff --git a/include/uapi/linux/btrfs_tree.h b/include/uapi/linux/btrfs_tree.h
index fc29d273845d..3fbefe00be4c 100644
--- a/include/uapi/linux/btrfs_tree.h
+++ b/include/uapi/linux/btrfs_tree.h
@@ -272,7 +272,7 @@
 * When a block group becomes very fragmented, we convert it to use bitmaps
 * instead of extents. A free space bitmap is keyed on
 * (start, FREE_SPACE_BITMAP, length); the corresponding item is a bitmap with
- * (length / sectorsize) bits.
+ * (length / blocksize) bits.
 */
 #define BTRFS_FREE_SPACE_BITMAP_KEY 200
@@ -690,7 +690,16 @@ struct btrfs_super_block {
	__le64 bytes_used;
	__le64 root_dir_objectid;
	__le64 num_devices;
-	__le32 sectorsize;
+	union {
+		/*
+		 * The minimum data block size.
+		 *
+		 * Used to be called "sectorsize", but not recommended now.
+		 * Keep the old "sectorsize" just for old programs.
+		 */
+		__le32 blocksize;
+		__le32 sectorsize;
+	};
	__le32 nodesize;
	__le32 __unused_leafsize;
	__le32 stripesize;

From patchwork Wed Dec 18 09:41:34 2024
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 18/18] btrfs: migrate the ioctl interfaces to use block size terminology
Date: Wed, 18 Dec 2024 20:11:34 +1030
X-Mailer: git-send-email 2.47.1

This rename really only affects the btrfs_ioctl_fs_info_args structure,
but since we're here, also update the comments in the ioctl header.

To keep compatibility for old programs which may still access
btrfs_ioctl_fs_info_args::sectorsize, use a union so @blocksize and
@sectorsize both refer to the same value.

Signed-off-by: Qu Wenruo
---
 fs/btrfs/ioctl.c           |  2 +-
 include/uapi/linux/btrfs.h | 21 +++++++++++++++------
 2 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 888f7b97434c..bbaac3d8a36d 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -2781,7 +2781,7 @@ static long btrfs_ioctl_fs_info(struct btrfs_fs_info *fs_info,
	memcpy(&fi_args->fsid, fs_devices->fsid, sizeof(fi_args->fsid));
	fi_args->nodesize = fs_info->nodesize;
-	fi_args->sectorsize = fs_info->blocksize;
+	fi_args->blocksize = fs_info->blocksize;
	fi_args->clone_alignment = fs_info->blocksize;

	if (flags_in & BTRFS_FS_INFO_FLAG_CSUM_INFO) {
diff --git a/include/uapi/linux/btrfs.h b/include/uapi/linux/btrfs.h
index d3b222d7af24..16ea8266b26d 100644
--- a/include/uapi/linux/btrfs.h
+++ b/include/uapi/linux/btrfs.h
@@ -278,7 +278,16 @@ struct btrfs_ioctl_fs_info_args {
	__u64 num_devices;		/* out */
	__u8 fsid[BTRFS_FSID_SIZE];	/* out */
	__u32 nodesize;			/* out */
-	__u32 sectorsize;		/* out */
+	union {				/* out */
+		/*
+		 * The minimum data block size.
+		 *
+		 * The old name "sectorsize" is no longer recommended,
+		 * only for compatibility usage.
+		 */
+		__u32 blocksize;
+		__u32 sectorsize;
+	};
	__u32 clone_alignment;		/* out */

	/* See BTRFS_FS_INFO_FLAG_* */
	__u16 csum_type;		/* out */
@@ -965,7 +974,7 @@ struct btrfs_ioctl_encoded_io_args {
	/*
	 * Offset in file.
	 *
-	 * For writes, must be aligned to the sector size of the filesystem.
+	 * For writes, must be aligned to the block size of the filesystem.
	 */
	__s64 offset;
	/* Currently must be zero. */
@@ -982,7 +991,7 @@ struct btrfs_ioctl_encoded_io_args {
	 * Length of the data in the file.
	 *
	 * Must be less than or equal to unencoded_len - unencoded_offset. For
-	 * writes, must be aligned to the sector size of the filesystem unless
+	 * writes, must be aligned to the block size of the filesystem unless
	 * the data ends at or beyond the current end of the file.
	 */
	__u64 len;
@@ -1033,10 +1042,10 @@ struct btrfs_ioctl_encoded_io_args {
 */
 #define BTRFS_ENCODED_IO_COMPRESSION_ZSTD 2
 /*
- * Data is compressed sector by sector (using the sector size indicated by the
+ * Data is compressed block by block (using the block size indicated by the
 * name of the constant) with LZO1X and wrapped in the format documented in
- * fs/btrfs/lzo.c. For writes, the compression sector size must match the
- * filesystem sector size.
+ * fs/btrfs/lzo.c. For writes, the compression block size must match the
+ * filesystem block size.
 */
 #define BTRFS_ENCODED_IO_COMPRESSION_LZO_4K 3
 #define BTRFS_ENCODED_IO_COMPRESSION_LZO_8K 4