From patchwork Thu Aug 22 01:37:01 2024
X-Patchwork-Submitter: lizetao
X-Patchwork-Id: 13772445
From: Li Zetao
Subject: [PATCH 01/14] btrfs: convert clear_page_extent_mapped() to take a folio
Date: Thu, 22 Aug 2024 09:37:01 +0800
Message-ID: <20240822013714.3278193-2-lizetao1@huawei.com>

The old page API is being gradually replaced and converted to use folio
to improve code readability and avoid repeated conversion between page
and folio.
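For readers tracking the conversion, a minimal sketch of what a remaining
page-based caller looks like after this change (illustrative only, not part
of the patch; the wrapper below is hypothetical):

	/* Hypothetical wrapper, for illustration only: a caller that still
	 * holds a struct page reaches the folio-taking API via page_folio(). */
	static void example_clear_mapped(struct page *page)
	{
		clear_page_extent_mapped(page_folio(page));
	}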
Signed-off-by: Li Zetao --- fs/btrfs/extent_io.c | 9 ++++----- fs/btrfs/extent_io.h | 2 +- fs/btrfs/inode.c | 4 ++-- 3 files changed, 7 insertions(+), 8 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 822e2bf8bc99..3c2ad5c9990d 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -951,18 +951,17 @@ int set_folio_extent_mapped(struct folio *folio) return 0; } -void clear_page_extent_mapped(struct page *page) +void clear_page_extent_mapped(struct folio *folio) { - struct folio *folio = page_folio(page); struct btrfs_fs_info *fs_info; - ASSERT(page->mapping); + ASSERT(folio->mapping); if (!folio_test_private(folio)) return; - fs_info = page_to_fs_info(page); - if (btrfs_is_subpage(fs_info, page->mapping)) + fs_info = folio_to_fs_info(folio); + if (btrfs_is_subpage(fs_info, folio->mapping)) return btrfs_detach_subpage(fs_info, folio); folio_detach_private(folio); diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h index b38460279b99..236da2231a0e 100644 --- a/fs/btrfs/extent_io.h +++ b/fs/btrfs/extent_io.h @@ -249,7 +249,7 @@ int btree_write_cache_pages(struct address_space *mapping, void btrfs_readahead(struct readahead_control *rac); int set_folio_extent_mapped(struct folio *folio); int set_page_extent_mapped(struct page *page); -void clear_page_extent_mapped(struct page *page); +void clear_page_extent_mapped(struct folio *folio); struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info, u64 start, u64 owner_root, int level); diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index a8ad540d6de2..5e3b834cc72b 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -7240,7 +7240,7 @@ static bool __btrfs_release_folio(struct folio *folio, gfp_t gfp_flags) { if (try_release_extent_mapping(&folio->page, gfp_flags)) { wait_subpage_spinlock(folio); - clear_page_extent_mapped(&folio->page); + clear_page_extent_mapped(folio); return true; } return false; @@ -7438,7 +7438,7 @@ static void btrfs_invalidate_folio(struct folio *folio, size_t offset, btrfs_folio_clear_checked(fs_info, folio, folio_pos(folio), folio_size(folio)); if (!inode_evicting) __btrfs_release_folio(folio, GFP_NOFS); - clear_page_extent_mapped(&folio->page); + clear_page_extent_mapped(folio); } static int btrfs_truncate(struct btrfs_inode *inode, bool skip_writeback) From patchwork Thu Aug 22 01:37:02 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: lizetao X-Patchwork-Id: 13772442 Received: from szxga03-in.huawei.com (szxga03-in.huawei.com [45.249.212.189]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AF502200AE for ; Thu, 22 Aug 2024 01:29:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.189 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724290183; cv=none; b=NNd8zhsB7AjBxpMP7ZGXH87stXudb8WSRfVfbAhmt1nrOctvMArJHeNqMzecIjYPKGyG0L/l8+rWpZNJhn5GRM7D/KLkw+73U/bZyJpOe2p6PB36BKONAwhkPtvKOlRzFsoy794nXp/JFGxWEOjBDlFm4KpQi/12qO1S24VDe7w= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724290183; c=relaxed/simple; bh=FpqNFXC4g+z64DX9CJFXjSZBjX4YZuywJJv+qf7teaE=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; 
From: Li Zetao
Subject: [PATCH 02/14] btrfs: convert get_next_extent_buffer() to take a folio
Date: Thu, 22 Aug 2024 09:37:02 +0800
Message-ID: <20240822013714.3278193-3-lizetao1@huawei.com>

The old page API is being gradually replaced and converted to use folio
to improve code readability and avoid repeated conversion between page
and folio. Also use folio_pos() instead of page_offset(), which is more
consistent with folio usage.

Signed-off-by: Li Zetao
---
 fs/btrfs/extent_io.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 3c2ad5c9990d..b9d159fcbbc5 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -4135,17 +4135,17 @@ void memmove_extent_buffer(const struct extent_buffer *dst,
 
 #define GANG_LOOKUP_SIZE 16
 static struct extent_buffer *get_next_extent_buffer(
-		const struct btrfs_fs_info *fs_info, struct page *page, u64 bytenr)
+		const struct btrfs_fs_info *fs_info, struct folio *folio, u64 bytenr)
 {
 	struct extent_buffer *gang[GANG_LOOKUP_SIZE];
 	struct extent_buffer *found = NULL;
-	u64 page_start = page_offset(page);
-	u64 cur = page_start;
+	u64 folio_start = folio_pos(folio);
+	u64 cur = folio_start;
 
-	ASSERT(in_range(bytenr, page_start, PAGE_SIZE));
+	ASSERT(in_range(bytenr, folio_start, PAGE_SIZE));
 	lockdep_assert_held(&fs_info->buffer_lock);
 
-	while (cur < page_start + PAGE_SIZE) {
+	while (cur < folio_start + PAGE_SIZE) {
 		int ret;
 		int i;
 
@@ -4157,7 +4157,7 @@ static struct extent_buffer *get_next_extent_buffer(
 			goto out;
 		for (i = 0; i < ret; i++) {
 			/* Already beyond page end */
-			if (gang[i]->start >= page_start + PAGE_SIZE)
+			if (gang[i]->start >= folio_start + PAGE_SIZE)
 				goto out;
 			/* Found one */
 			if (gang[i]->start >= bytenr) {
@@ -4190,7 +4190,7 @@ static int try_release_subpage_extent_buffer(struct page *page)
 	 * with spinlock rather than RCU.
 	 */
 	spin_lock(&fs_info->buffer_lock);
-	eb = get_next_extent_buffer(fs_info, page, cur);
+	eb = get_next_extent_buffer(fs_info, page_folio(page), cur);
 	if (!eb) {
 		/* No more eb in the page range after or at cur */
 		spin_unlock(&fs_info->buffer_lock);

From patchwork Thu Aug 22 01:37:03 2024
X-Patchwork-Id: 13772443
From: Li Zetao
Subject: [PATCH 03/14] btrfs: convert try_release_subpage_extent_buffer() to take a folio
Date: Thu, 22 Aug 2024 09:37:03 +0800
Message-ID: <20240822013714.3278193-4-lizetao1@huawei.com>

The old page API is being gradually replaced and converted to use folio
to improve code readability and avoid repeated conversion between page
and folio. Also use folio_pos() instead of page_offset(), which is more
consistent with folio usage. At the same time, folio_test_private() can
handle the folio directly without converting from page to folio first.
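As background for the folio_pos() switch in this and the previous patch, a
minimal sketch of the equivalence being relied on (illustrative only; assumes
the single-page folios used by the current subpage code, and the helper name
is hypothetical):

	/* Illustrative only, not part of the patch: the byte range covered
	 * by a single-page folio is the same one the old code derived from
	 * the page with page_offset(). */
	static void example_folio_range(struct folio *folio, u64 *start, u64 *end)
	{
		*start = folio_pos(folio);	/* was: page_offset(page) */
		*end = *start + PAGE_SIZE - 1;
	}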
Signed-off-by: Li Zetao --- fs/btrfs/extent_io.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index b9d159fcbbc5..77c1f69f4229 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -4171,11 +4171,11 @@ static struct extent_buffer *get_next_extent_buffer( return found; } -static int try_release_subpage_extent_buffer(struct page *page) +static int try_release_subpage_extent_buffer(struct folio *folio) { - struct btrfs_fs_info *fs_info = page_to_fs_info(page); - u64 cur = page_offset(page); - const u64 end = page_offset(page) + PAGE_SIZE; + struct btrfs_fs_info *fs_info = folio_to_fs_info(folio); + u64 cur = folio_pos(folio); + const u64 end = cur + PAGE_SIZE; int ret; while (cur < end) { @@ -4190,7 +4190,7 @@ static int try_release_subpage_extent_buffer(struct page *page) * with spinlock rather than RCU. */ spin_lock(&fs_info->buffer_lock); - eb = get_next_extent_buffer(fs_info, page_folio(page), cur); + eb = get_next_extent_buffer(fs_info, folio, cur); if (!eb) { /* No more eb in the page range after or at cur */ spin_unlock(&fs_info->buffer_lock); @@ -4231,12 +4231,12 @@ static int try_release_subpage_extent_buffer(struct page *page) * Finally to check if we have cleared folio private, as if we have * released all ebs in the page, the folio private should be cleared now. */ - spin_lock(&page->mapping->i_private_lock); - if (!folio_test_private(page_folio(page))) + spin_lock(&folio->mapping->i_private_lock); + if (!folio_test_private(folio)) ret = 1; else ret = 0; - spin_unlock(&page->mapping->i_private_lock); + spin_unlock(&folio->mapping->i_private_lock); return ret; } @@ -4247,7 +4247,7 @@ int try_release_extent_buffer(struct page *page) struct extent_buffer *eb; if (page_to_fs_info(page)->nodesize < PAGE_SIZE) - return try_release_subpage_extent_buffer(page); + return try_release_subpage_extent_buffer(page_folio(page)); /* * We need to make sure nobody is changing folio private, as we rely on From patchwork Thu Aug 22 01:37:04 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: lizetao X-Patchwork-Id: 13772447 Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9CABF364CD for ; Thu, 22 Aug 2024 01:29:42 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.187 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724290185; cv=none; b=u04VHzQRDeA+JJm3QP8uB6n3VM48XIEBkQxWtOM8LP/FXShTIXU09WGd5VV9qQ8vMy+F59PeY9Lzae1sKHxp0eWzc+ZnsgP1BS68DukFUiPO/g7MRQcoSsZph0EOOhjuqq+4R9rZ9pGnbksGBI3c/TeKGo4zgJ/B3Kqvf+VzeC8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724290185; c=relaxed/simple; bh=ZsaV82DJrS/7r0BDVfW1MKgAiOdRcOueLhOgM8PSvRA=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=W8Av40cmNLT4WveBredIciKjjWEsqkEwQreUpNz00524aDdiSD2yo7WgST9PsnXy7NsnVdgKp+CNJGu8pyDKG/uMJYQcNTiHQiirkCmbuR/WE+W4AiOJr+RssWOPSWafTOpHzYu4ABLEgfpfP+5aU2L6yLnqIDk22Ry+ElklfoE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=45.249.212.187 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine 
dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.19.162.254]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4Wq5G51c7KzyR80; Thu, 22 Aug 2024 09:29:17 +0800 (CST) Received: from kwepemd500012.china.huawei.com (unknown [7.221.188.25]) by mail.maildlp.com (Postfix) with ESMTPS id 83B521800FF; Thu, 22 Aug 2024 09:29:40 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemd500012.china.huawei.com (7.221.188.25) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1258.34; Thu, 22 Aug 2024 09:29:39 +0800 From: Li Zetao To: , , , CC: , , , Subject: [PATCH 04/14] btrfs: convert try_release_extent_buffer() to take a folio Date: Thu, 22 Aug 2024 09:37:04 +0800 Message-ID: <20240822013714.3278193-5-lizetao1@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240822013714.3278193-1-lizetao1@huawei.com> References: <20240822013714.3278193-1-lizetao1@huawei.com> Precedence: bulk X-Mailing-List: linux-btrfs@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To kwepemd500012.china.huawei.com (7.221.188.25) The old page API is being gradually replaced and converted to use folio to improve code readability and avoid repeated conversion between page and folio. Signed-off-by: Li Zetao --- fs/btrfs/disk-io.c | 2 +- fs/btrfs/extent_io.c | 15 +++++++-------- fs/btrfs/extent_io.h | 2 +- 3 files changed, 9 insertions(+), 10 deletions(-) diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index a6f5441e62d1..ca2f52a28fb0 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -525,7 +525,7 @@ static bool btree_release_folio(struct folio *folio, gfp_t gfp_flags) if (folio_test_writeback(folio) || folio_test_dirty(folio)) return false; - return try_release_extent_buffer(&folio->page); + return try_release_extent_buffer(folio); } static void btree_invalidate_folio(struct folio *folio, size_t offset, diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 77c1f69f4229..a0102b9b67ff 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -4241,21 +4241,20 @@ static int try_release_subpage_extent_buffer(struct folio *folio) } -int try_release_extent_buffer(struct page *page) +int try_release_extent_buffer(struct folio *folio) { - struct folio *folio = page_folio(page); struct extent_buffer *eb; - if (page_to_fs_info(page)->nodesize < PAGE_SIZE) - return try_release_subpage_extent_buffer(page_folio(page)); + if (folio_to_fs_info(folio)->nodesize < PAGE_SIZE) + return try_release_subpage_extent_buffer(folio); /* * We need to make sure nobody is changing folio private, as we rely on * folio private as the pointer to extent buffer. 
*/ - spin_lock(&page->mapping->i_private_lock); + spin_lock(&folio->mapping->i_private_lock); if (!folio_test_private(folio)) { - spin_unlock(&page->mapping->i_private_lock); + spin_unlock(&folio->mapping->i_private_lock); return 1; } @@ -4270,10 +4269,10 @@ int try_release_extent_buffer(struct page *page) spin_lock(&eb->refs_lock); if (atomic_read(&eb->refs) != 1 || extent_buffer_under_io(eb)) { spin_unlock(&eb->refs_lock); - spin_unlock(&page->mapping->i_private_lock); + spin_unlock(&folio->mapping->i_private_lock); return 0; } - spin_unlock(&page->mapping->i_private_lock); + spin_unlock(&folio->mapping->i_private_lock); /* * If tree ref isn't set then we know the ref on this eb is a real ref, diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h index 236da2231a0e..f4c93ca46bdd 100644 --- a/fs/btrfs/extent_io.h +++ b/fs/btrfs/extent_io.h @@ -237,7 +237,7 @@ static inline void extent_changeset_free(struct extent_changeset *changeset) } bool try_release_extent_mapping(struct page *page, gfp_t mask); -int try_release_extent_buffer(struct page *page); +int try_release_extent_buffer(struct folio *folio); int btrfs_read_folio(struct file *file, struct folio *folio); void extent_write_locked_range(struct inode *inode, const struct folio *locked_folio, From patchwork Thu Aug 22 01:37:05 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: lizetao X-Patchwork-Id: 13772450 Received: from szxga06-in.huawei.com (szxga06-in.huawei.com [45.249.212.32]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3DC94383A9 for ; Thu, 22 Aug 2024 01:29:42 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.32 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724290186; cv=none; b=b2ycp8AHu/NcJkg45xr+52x9gwDj0ijc3GlopmOjQMaJIvpW2Uzmdfd4vLnx4QrIsj6Zzy7Jt6vxU9GEwf7SO5sE5aPdjK0XK2mhA02jgA6yooXNVzTwMCXNs7YreEa+7IjMC+Lrk7xXQJPNHBule6U/jseHRK/U6OMelRD8FRw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724290186; c=relaxed/simple; bh=dkrnOsf7jpX5stbjWjvVADGOILPbQs/omVkSppNGe4Y=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=LrBnr06Xv0x3U0Hd5dr7ZeAof8273bwWUvoZNuYTHdwUf9mP0y7aY4I0Dp0oMfZUwOtdiezDLRKBxZmfz0YWCaDlrDxNQkGew6LTsBOZecDHGdToiWmrwU83S+yQxio0LJvqxJcC8+iSSLIitDjfuiOrTlkWuZ2CkOCmrbCAARU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=45.249.212.32 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.19.88.163]) by szxga06-in.huawei.com (SkyGuard) with ESMTP id 4Wq5DL1829z1xvkB; Thu, 22 Aug 2024 09:27:46 +0800 (CST) Received: from kwepemd500012.china.huawei.com (unknown [7.221.188.25]) by mail.maildlp.com (Postfix) with ESMTPS id EA28618001B; Thu, 22 Aug 2024 09:29:40 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemd500012.china.huawei.com (7.221.188.25) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1258.34; Thu, 22 Aug 2024 09:29:40 +0800 From: Li Zetao To: , , , CC: , , , Subject: [PATCH 05/14] btrfs: convert 
read_key_bytes() to take a folio
Date: Thu, 22 Aug 2024 09:37:05 +0800
Message-ID: <20240822013714.3278193-6-lizetao1@huawei.com>

The old page API is being gradually replaced and converted to use folio
to improve code readability and avoid repeated conversion between page
and folio. Moreover, use kmap_local_folio() instead of kmap_local_page(),
which is more consistent with folio usage.

Signed-off-by: Li Zetao
---
 fs/btrfs/verity.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/btrfs/verity.c b/fs/btrfs/verity.c
index 4042dd6437ae..e36dc99021a0 100644
--- a/fs/btrfs/verity.c
+++ b/fs/btrfs/verity.c
@@ -284,7 +284,7 @@ static int write_key_bytes(struct btrfs_inode *inode, u8 key_type, u64 offset,
  * page and ignore dest, but it must still be non-NULL to avoid the
  * counting-only behavior.
  * @len: length in bytes to read
- * @dest_page: copy into this page instead of the dest buffer
+ * @dest_folio: copy into this folio instead of the dest buffer
  *
  * Helper function to read items from the btree. This returns the number of
  * bytes read or < 0 for errors. We can return short reads if the items don't
@@ -294,7 +294,7 @@ static int write_key_bytes(struct btrfs_inode *inode, u8 key_type, u64 offset,
  * Returns number of bytes read or a negative error code on failure.
  */
 static int read_key_bytes(struct btrfs_inode *inode, u8 key_type, u64 offset,
-			  char *dest, u64 len, struct page *dest_page)
+			  char *dest, u64 len, struct folio *dest_folio)
 {
 	struct btrfs_path *path;
 	struct btrfs_root *root = inode->root;
@@ -314,7 +314,7 @@ static int read_key_bytes(struct btrfs_inode *inode, u8 key_type, u64 offset,
 	if (!path)
 		return -ENOMEM;
 
-	if (dest_page)
+	if (dest_folio)
 		path->reada = READA_FORWARD;
 
 	key.objectid = btrfs_ino(inode);
@@ -371,15 +371,15 @@ static int read_key_bytes(struct btrfs_inode *inode, u8 key_type, u64 offset,
 		copy_offset = offset - key.offset;
 
 		if (dest) {
-			if (dest_page)
-				kaddr = kmap_local_page(dest_page);
+			if (dest_folio)
+				kaddr = kmap_local_folio(dest_folio, 0);
 
 			data = btrfs_item_ptr(leaf, path->slots[0], void);
 			read_extent_buffer(leaf, kaddr + dest_offset,
 					   (unsigned long)data + copy_offset,
 					   copy_bytes);
 
-			if (dest_page)
+			if (dest_folio)
 				kunmap_local(kaddr);
 		}
 
@@ -762,7 +762,7 @@ static struct page *btrfs_read_merkle_tree_page(struct inode *inode,
 	 * [ inode objectid, BTRFS_MERKLE_ITEM_KEY, offset in bytes ]
 	 */
 	ret = read_key_bytes(BTRFS_I(inode), BTRFS_VERITY_MERKLE_ITEM_KEY, off,
-			     folio_address(folio), PAGE_SIZE, &folio->page);
+			     folio_address(folio), PAGE_SIZE, folio);
 	if (ret < 0) {
 		folio_put(folio);
 		return ERR_PTR(ret);
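A short sketch of the mapping-helper change used above (illustrative only,
not part of the patch; byte offset 0 maps the start of the folio, matching
the old kmap_local_page() of the single destination page, and the function
name is hypothetical):

	/* Illustrative only: map a folio, copy into it, and unmap it. */
	static void example_copy_to_folio(struct folio *folio, const void *src, size_t len)
	{
		void *kaddr = kmap_local_folio(folio, 0);	/* was: kmap_local_page(page) */

		memcpy(kaddr, src, len);
		kunmap_local(kaddr);
	}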
From patchwork Thu Aug 22 01:37:06 2024
X-Patchwork-Id: 13772446
From: Li Zetao
Subject: [PATCH 06/14] btrfs: convert submit_eb_subpage() to take a folio
Date: Thu, 22 Aug 2024 09:37:06 +0800
Message-ID: <20240822013714.3278193-7-lizetao1@huawei.com>

The old page API is being gradually replaced and converted to use folio
to improve code readability and avoid repeated conversion between page
and folio. Moreover, use folio_pos() instead of page_offset(), which is
more consistent with folio usage.

Signed-off-by: Li Zetao
---
 fs/btrfs/extent_io.c | 19 +++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index a0102b9b67ff..ca458237ac35 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1818,12 +1818,11 @@ static noinline_for_stack void write_one_eb(struct extent_buffer *eb,
  * Return >=0 for the number of submitted extent buffers.
  * Return <0 for fatal error.
*/ -static int submit_eb_subpage(struct page *page, struct writeback_control *wbc) +static int submit_eb_subpage(struct folio *folio, struct writeback_control *wbc) { - struct btrfs_fs_info *fs_info = page_to_fs_info(page); - struct folio *folio = page_folio(page); + struct btrfs_fs_info *fs_info = folio_to_fs_info(folio); int submitted = 0; - u64 page_start = page_offset(page); + u64 folio_start = folio_pos(folio); int bit_start = 0; int sectors_per_node = fs_info->nodesize >> fs_info->sectorsize_bits; @@ -1838,21 +1837,21 @@ static int submit_eb_subpage(struct page *page, struct writeback_control *wbc) * Take private lock to ensure the subpage won't be detached * in the meantime. */ - spin_lock(&page->mapping->i_private_lock); + spin_lock(&folio->mapping->i_private_lock); if (!folio_test_private(folio)) { - spin_unlock(&page->mapping->i_private_lock); + spin_unlock(&folio->mapping->i_private_lock); break; } spin_lock_irqsave(&subpage->lock, flags); if (!test_bit(bit_start + fs_info->subpage_info->dirty_offset, subpage->bitmaps)) { spin_unlock_irqrestore(&subpage->lock, flags); - spin_unlock(&page->mapping->i_private_lock); + spin_unlock(&folio->mapping->i_private_lock); bit_start++; continue; } - start = page_start + bit_start * fs_info->sectorsize; + start = folio_start + bit_start * fs_info->sectorsize; bit_start += sectors_per_node; /* @@ -1861,7 +1860,7 @@ static int submit_eb_subpage(struct page *page, struct writeback_control *wbc) */ eb = find_extent_buffer_nolock(fs_info, start); spin_unlock_irqrestore(&subpage->lock, flags); - spin_unlock(&page->mapping->i_private_lock); + spin_unlock(&folio->mapping->i_private_lock); /* * The eb has already reached 0 refs thus find_extent_buffer() @@ -1912,7 +1911,7 @@ static int submit_eb_page(struct page *page, struct btrfs_eb_write_context *ctx) return 0; if (page_to_fs_info(page)->nodesize < PAGE_SIZE) - return submit_eb_subpage(page, wbc); + return submit_eb_subpage(folio, wbc); spin_lock(&mapping->i_private_lock); if (!folio_test_private(folio)) { From patchwork Thu Aug 22 01:37:07 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: lizetao X-Patchwork-Id: 13772448 Received: from szxga04-in.huawei.com (szxga04-in.huawei.com [45.249.212.190]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E98E238DD3 for ; Thu, 22 Aug 2024 01:29:43 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.190 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724290185; cv=none; b=KFXdoRaPcvgQ9B7p/LVCucMf0CtWmyzhgenYOtMuAU0lavwCU7DDp7LNhjxbtmUbj7Ww0xXWMiPsS96Z14gFTEVZVPlpv4nq7tTDMoUstlTviimIDpE0kliSnna4a5Iq4SoYVXueeC157eK5CRXjZVcpRvFNm1nwCoEWwZAB9Ls= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724290185; c=relaxed/simple; bh=e+uf+N1xdKPR42n/gjlVnQ0lxRSjiMtGtUobvwwSDJk=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=aE5XWSGffy7J2Os+/+/Q1IPjWsDofhUpsGO5mmUiwQyTldO9T2qWV3uAjR9ZVh1j+pQWixWk1IDv3UmZrLQl3R+6vURvF3wNFfNOtKbefEm9Ar1xgDsCxrQv6AL4pvc26dwNgoUNE5lZFDYUtDek5tO+oC4EFAqO838fsGwcMF4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=45.249.212.190 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass 
(p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.19.163.17]) by szxga04-in.huawei.com (SkyGuard) with ESMTP id 4Wq5972yrbz20mFZ; Thu, 22 Aug 2024 09:24:59 +0800 (CST) Received: from kwepemd500012.china.huawei.com (unknown [7.221.188.25]) by mail.maildlp.com (Postfix) with ESMTPS id B8A101A0188; Thu, 22 Aug 2024 09:29:41 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemd500012.china.huawei.com (7.221.188.25) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1258.34; Thu, 22 Aug 2024 09:29:41 +0800 From: Li Zetao To: , , , CC: , , , Subject: [PATCH 07/14] btrfs: convert submit_eb_page() to take a folio Date: Thu, 22 Aug 2024 09:37:07 +0800 Message-ID: <20240822013714.3278193-8-lizetao1@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240822013714.3278193-1-lizetao1@huawei.com> References: <20240822013714.3278193-1-lizetao1@huawei.com> Precedence: bulk X-Mailing-List: linux-btrfs@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To kwepemd500012.china.huawei.com (7.221.188.25) The old page API is being gradually replaced and converted to use folio to improve code readability and avoid repeated conversion between page and folio. Signed-off-by: Li Zetao --- fs/btrfs/extent_io.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index ca458237ac35..e16477ef0bfb 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -1899,18 +1899,17 @@ static int submit_eb_subpage(struct folio *folio, struct writeback_control *wbc) * previous call. * Return <0 for fatal error. 
*/ -static int submit_eb_page(struct page *page, struct btrfs_eb_write_context *ctx) +static int submit_eb_page(struct folio *folio, struct btrfs_eb_write_context *ctx) { struct writeback_control *wbc = ctx->wbc; - struct address_space *mapping = page->mapping; - struct folio *folio = page_folio(page); + struct address_space *mapping = folio->mapping; struct extent_buffer *eb; int ret; if (!folio_test_private(folio)) return 0; - if (page_to_fs_info(page)->nodesize < PAGE_SIZE) + if (folio_to_fs_info(folio)->nodesize < PAGE_SIZE) return submit_eb_subpage(folio, wbc); spin_lock(&mapping->i_private_lock); @@ -2009,7 +2008,7 @@ int btree_write_cache_pages(struct address_space *mapping, for (i = 0; i < nr_folios; i++) { struct folio *folio = fbatch.folios[i]; - ret = submit_eb_page(&folio->page, &ctx); + ret = submit_eb_page(folio, &ctx); if (ret == 0) continue; if (ret < 0) { From patchwork Thu Aug 22 01:37:08 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: lizetao X-Patchwork-Id: 13772449 Received: from szxga08-in.huawei.com (szxga08-in.huawei.com [45.249.212.255]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B11FB4500E for ; Thu, 22 Aug 2024 01:29:44 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.255 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724290186; cv=none; b=k/OR5hA74ywn4W25MOfgNwP6Qt0rjeuBqjoCN02JbWAJMgIpa0LbR2vQwj/sA0mpCWSKhRkifT5dvVqe5zy96khGc06SIlPNytHsceNkoVlBIGFrqz2vawJGXSQBCApuzl0Xbn5InPMmsZLSL/SR/xEc24kBq49Dh3k7xTIFcFI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724290186; c=relaxed/simple; bh=WhnKWg00fkEtqFw0Zro8jLu9cGktRj3rIe2XDyItdDg=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=FywMDkipUEinsmJkiQt6lNVwmsiqbDCqRcZMDbICK7ImMaeGM9/f+ARO5jQV2QHoAz8c+W+0D2IXq8VQe7Si1b3USSgAWrc9Efv5xNm1sLbl0Wrr817DYfaRPxcGRXZBJcUXGrDlvo0cYos/bXxK/7MVdfYcRI9JdMW1Od6R5yI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=45.249.212.255 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.19.163.174]) by szxga08-in.huawei.com (SkyGuard) with ESMTP id 4Wq5Fp2rYBz13TVL; Thu, 22 Aug 2024 09:29:02 +0800 (CST) Received: from kwepemd500012.china.huawei.com (unknown [7.221.188.25]) by mail.maildlp.com (Postfix) with ESMTPS id 2A9B2140361; Thu, 22 Aug 2024 09:29:42 +0800 (CST) Received: from huawei.com (10.90.53.73) by kwepemd500012.china.huawei.com (7.221.188.25) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1258.34; Thu, 22 Aug 2024 09:29:41 +0800 From: Li Zetao To: , , , CC: , , , Subject: [PATCH 08/14] btrfs: convert try_release_extent_state() to take a folio Date: Thu, 22 Aug 2024 09:37:08 +0800 Message-ID: <20240822013714.3278193-9-lizetao1@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240822013714.3278193-1-lizetao1@huawei.com> References: <20240822013714.3278193-1-lizetao1@huawei.com> Precedence: bulk X-Mailing-List: linux-btrfs@vger.kernel.org List-Id: List-Subscribe: 
The old page API is being gradually replaced and converted to use folio
to improve code readability and avoid repeated conversion between page
and folio. Moreover, use folio_pos() instead of page_offset(), which is
more consistent with folio usage.

Signed-off-by: Li Zetao
---
 fs/btrfs/extent_io.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index e16477ef0bfb..27ca7a56d8f5 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2397,9 +2397,9 @@ int extent_invalidate_folio(struct extent_io_tree *tree,
  * to drop the page.
  */
 static bool try_release_extent_state(struct extent_io_tree *tree,
-				     struct page *page, gfp_t mask)
+				     struct folio *folio, gfp_t mask)
 {
-	u64 start = page_offset(page);
+	u64 start = folio_pos(folio);
 	u64 end = start + PAGE_SIZE - 1;
 	bool ret;
 
@@ -2508,7 +2508,7 @@ bool try_release_extent_mapping(struct page *page, gfp_t mask)
 			cond_resched();
 		}
 	}
-	return try_release_extent_state(io_tree, page, mask);
+	return try_release_extent_state(io_tree, folio, mask);
 }
 
 static void __free_extent_buffer(struct extent_buffer *eb)

From patchwork Thu Aug 22 01:37:09 2024
X-Patchwork-Id: 13772456
From: Li Zetao
Subject: [PATCH 09/14] btrfs: convert
try_release_extent_mapping() to take a folio Date: Thu, 22 Aug 2024 09:37:09 +0800 Message-ID: <20240822013714.3278193-10-lizetao1@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240822013714.3278193-1-lizetao1@huawei.com> References: <20240822013714.3278193-1-lizetao1@huawei.com> Precedence: bulk X-Mailing-List: linux-btrfs@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To kwepemd500012.china.huawei.com (7.221.188.25) The old page API is being gradually replaced and converted to use folio to improve code readability and avoid repeated conversion between page and folio. And page_to_inode() can be replaced with folio_to_inode() now. Signed-off-by: Li Zetao --- fs/btrfs/extent_io.c | 6 +++--- fs/btrfs/extent_io.h | 2 +- fs/btrfs/inode.c | 2 +- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 27ca7a56d8f5..cfd523b523b3 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -2435,11 +2435,11 @@ static bool try_release_extent_state(struct extent_io_tree *tree, * in the range corresponding to the page, both state records and extent * map records are removed */ -bool try_release_extent_mapping(struct page *page, gfp_t mask) +bool try_release_extent_mapping(struct folio *folio, gfp_t mask) { - u64 start = page_offset(page); + u64 start = folio_pos(folio); u64 end = start + PAGE_SIZE - 1; - struct btrfs_inode *inode = page_to_inode(page); + struct btrfs_inode *inode = folio_to_inode(folio); struct extent_io_tree *io_tree = &inode->io_tree; while (start <= end) { diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h index f4c93ca46bdd..d1c54788d444 100644 --- a/fs/btrfs/extent_io.h +++ b/fs/btrfs/extent_io.h @@ -236,7 +236,7 @@ static inline void extent_changeset_free(struct extent_changeset *changeset) kfree(changeset); } -bool try_release_extent_mapping(struct page *page, gfp_t mask); +bool try_release_extent_mapping(struct folio *folio, gfp_t mask); int try_release_extent_buffer(struct folio *folio); int btrfs_read_folio(struct file *file, struct folio *folio); diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 5e3b834cc72b..e844c409c12a 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -7238,7 +7238,7 @@ static int btrfs_launder_folio(struct folio *folio) static bool __btrfs_release_folio(struct folio *folio, gfp_t gfp_flags) { - if (try_release_extent_mapping(&folio->page, gfp_flags)) { + if (try_release_extent_mapping(folio, gfp_flags)) { wait_subpage_spinlock(folio); clear_page_extent_mapped(folio); return true; From patchwork Thu Aug 22 01:37:10 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: lizetao X-Patchwork-Id: 13772451 Received: from szxga06-in.huawei.com (szxga06-in.huawei.com [45.249.212.32]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3B24449658 for ; Thu, 22 Aug 2024 01:29:44 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.32 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724290187; cv=none; b=QpkoXYxyEZ18ei4Jh8lOd2J0eLTwhyGtLTIItn3blVgpeoktp48Uot0tpJHHxu/XQ54Je0l2o/KciHy6Be96lGSkmHtXwfJ+1kMso3fIwjMI2nhQb1oikhHZXeLf49GkRS53AOInyQygPE91KTF7CxQqH7lF90JKJFUaluPWAMI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; 
From: Li Zetao
Subject: [PATCH 10/14] btrfs: convert zlib_decompress() to take a folio
Date: Thu, 22 Aug 2024 09:37:10 +0800
Message-ID: <20240822013714.3278193-11-lizetao1@huawei.com>

The old page API is being gradually replaced and converted to use folio
to improve code readability and avoid repeated conversion between page
and folio. Also, memcpy_to_page() can be replaced with memcpy_to_folio().
There is no memzero_folio(), but it can be replaced equivalently by
folio_zero_range().
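A minimal sketch of the copy-and-zero-tail pattern referred to above
(illustrative only, not part of the patch; the function and parameter names
below are hypothetical):

	/* Illustrative only: copy the decompressed bytes into the folio and
	 * zero the remainder of the destination range, as the decompression
	 * paths do. */
	static void example_fill_dest(struct folio *folio, unsigned long dest_off,
				      const void *buf, size_t copied, size_t total)
	{
		memcpy_to_folio(folio, dest_off, buf, copied);	/* was: memcpy_to_page() */
		if (copied < total)				/* was: memzero_page() */
			folio_zero_range(folio, dest_off + copied, total - copied);
	}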
Signed-off-by: Li Zetao --- fs/btrfs/compression.c | 2 +- fs/btrfs/compression.h | 2 +- fs/btrfs/zlib.c | 14 +++++++------- 3 files changed, 9 insertions(+), 9 deletions(-) diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c index 832ab8984c41..19d18f875563 100644 --- a/fs/btrfs/compression.c +++ b/fs/btrfs/compression.c @@ -142,7 +142,7 @@ static int compression_decompress(int type, struct list_head *ws, unsigned long dest_pgoff, size_t srclen, size_t destlen) { switch (type) { - case BTRFS_COMPRESS_ZLIB: return zlib_decompress(ws, data_in, dest_page, + case BTRFS_COMPRESS_ZLIB: return zlib_decompress(ws, data_in, page_folio(dest_page), dest_pgoff, srclen, destlen); case BTRFS_COMPRESS_LZO: return lzo_decompress(ws, data_in, dest_page, dest_pgoff, srclen, destlen); diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h index 5d01f092ae13..f4f7a981cb90 100644 --- a/fs/btrfs/compression.h +++ b/fs/btrfs/compression.h @@ -162,7 +162,7 @@ int zlib_compress_folios(struct list_head *ws, struct address_space *mapping, unsigned long *total_in, unsigned long *total_out); int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb); int zlib_decompress(struct list_head *ws, const u8 *data_in, - struct page *dest_page, unsigned long dest_pgoff, size_t srclen, + struct folio *dest_folio, unsigned long dest_pgoff, size_t srclen, size_t destlen); struct list_head *zlib_alloc_workspace(unsigned int level); void zlib_free_workspace(struct list_head *ws); diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c index 8aa82ee1991e..4ca7ff38234c 100644 --- a/fs/btrfs/zlib.c +++ b/fs/btrfs/zlib.c @@ -393,7 +393,7 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb) } int zlib_decompress(struct list_head *ws, const u8 *data_in, - struct page *dest_page, unsigned long dest_pgoff, size_t srclen, + struct folio *dest_folio, unsigned long dest_pgoff, size_t srclen, size_t destlen) { struct workspace *workspace = list_entry(ws, struct workspace, list); @@ -421,12 +421,12 @@ int zlib_decompress(struct list_head *ws, const u8 *data_in, ret = zlib_inflateInit2(&workspace->strm, wbits); if (unlikely(ret != Z_OK)) { - struct btrfs_inode *inode = BTRFS_I(dest_page->mapping->host); + struct btrfs_inode *inode = folio_to_inode(dest_folio); btrfs_err(inode->root->fs_info, "zlib decompression init failed, error %d root %llu inode %llu offset %llu", ret, btrfs_root_id(inode->root), btrfs_ino(inode), - page_offset(dest_page)); + folio_pos(dest_folio)); return -EIO; } @@ -439,16 +439,16 @@ int zlib_decompress(struct list_head *ws, const u8 *data_in, if (ret != Z_STREAM_END) goto out; - memcpy_to_page(dest_page, dest_pgoff, workspace->buf, to_copy); + memcpy_to_folio(dest_folio, dest_pgoff, workspace->buf, to_copy); out: if (unlikely(to_copy != destlen)) { - struct btrfs_inode *inode = BTRFS_I(dest_page->mapping->host); + struct btrfs_inode *inode = folio_to_inode(dest_folio); btrfs_err(inode->root->fs_info, "zlib decompression failed, error %d root %llu inode %llu offset %llu decompressed %lu expected %zu", ret, btrfs_root_id(inode->root), btrfs_ino(inode), - page_offset(dest_page), to_copy, destlen); + folio_pos(dest_folio), to_copy, destlen); ret = -EIO; } else { ret = 0; @@ -457,7 +457,7 @@ int zlib_decompress(struct list_head *ws, const u8 *data_in, zlib_inflateEnd(&workspace->strm); if (unlikely(to_copy < destlen)) - memzero_page(dest_page, dest_pgoff + to_copy, destlen - to_copy); + folio_zero_range(dest_folio, dest_pgoff + to_copy, destlen - to_copy); return ret; } From patchwork 
Thu Aug 22 01:37:11 2024
X-Patchwork-Id: 13772452
From: Li Zetao
Subject: [PATCH 11/14] btrfs: convert lzo_decompress() to take a folio
Date: Thu, 22 Aug 2024 09:37:11 +0800
Message-ID: <20240822013714.3278193-12-lizetao1@huawei.com>

The old page API is being gradually replaced and converted to use folio
to improve code readability and avoid repeated conversion between page
and folio. Also, memcpy_to_page() can be replaced with memcpy_to_folio().
There is no memzero_folio(), but it can be replaced equivalently by
folio_zero_range().
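The decompression error paths in these patches now derive the inode straight
from the destination folio; a minimal sketch of that lookup (illustrative
only, not part of the patch, using the btrfs helpers already visible in these
diffs; the function name is hypothetical):

	/* Illustrative only: report an error against the folio's owning inode. */
	static void example_report(struct folio *folio, int err)
	{
		struct btrfs_inode *inode = folio_to_inode(folio);

		btrfs_err(inode->root->fs_info,
			  "decompression failed, error %d offset %llu",
			  err, folio_pos(folio));
	}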
Signed-off-by: Li Zetao --- fs/btrfs/compression.c | 2 +- fs/btrfs/compression.h | 2 +- fs/btrfs/lzo.c | 12 ++++++------ 3 files changed, 8 insertions(+), 8 deletions(-) diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c index 19d18f875563..8e67203ab97d 100644 --- a/fs/btrfs/compression.c +++ b/fs/btrfs/compression.c @@ -144,7 +144,7 @@ static int compression_decompress(int type, struct list_head *ws, switch (type) { case BTRFS_COMPRESS_ZLIB: return zlib_decompress(ws, data_in, page_folio(dest_page), dest_pgoff, srclen, destlen); - case BTRFS_COMPRESS_LZO: return lzo_decompress(ws, data_in, dest_page, + case BTRFS_COMPRESS_LZO: return lzo_decompress(ws, data_in, page_folio(dest_page), dest_pgoff, srclen, destlen); case BTRFS_COMPRESS_ZSTD: return zstd_decompress(ws, data_in, dest_page, dest_pgoff, srclen, destlen); diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h index f4f7a981cb90..4b5a7ba54815 100644 --- a/fs/btrfs/compression.h +++ b/fs/btrfs/compression.h @@ -173,7 +173,7 @@ int lzo_compress_folios(struct list_head *ws, struct address_space *mapping, unsigned long *total_in, unsigned long *total_out); int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb); int lzo_decompress(struct list_head *ws, const u8 *data_in, - struct page *dest_page, unsigned long dest_pgoff, size_t srclen, + struct folio *dest_folio, unsigned long dest_pgoff, size_t srclen, size_t destlen); struct list_head *lzo_alloc_workspace(unsigned int level); void lzo_free_workspace(struct list_head *ws); diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c index 1e2a68b8f62d..72856f6775f7 100644 --- a/fs/btrfs/lzo.c +++ b/fs/btrfs/lzo.c @@ -438,11 +438,11 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb) } int lzo_decompress(struct list_head *ws, const u8 *data_in, - struct page *dest_page, unsigned long dest_pgoff, size_t srclen, + struct folio *dest_folio, unsigned long dest_pgoff, size_t srclen, size_t destlen) { struct workspace *workspace = list_entry(ws, struct workspace, list); - struct btrfs_fs_info *fs_info = page_to_fs_info(dest_page); + struct btrfs_fs_info *fs_info = folio_to_fs_info(dest_folio); const u32 sectorsize = fs_info->sectorsize; size_t in_len; size_t out_len; @@ -467,22 +467,22 @@ int lzo_decompress(struct list_head *ws, const u8 *data_in, out_len = sectorsize; ret = lzo1x_decompress_safe(data_in, in_len, workspace->buf, &out_len); if (unlikely(ret != LZO_E_OK)) { - struct btrfs_inode *inode = BTRFS_I(dest_page->mapping->host); + struct btrfs_inode *inode = folio_to_inode(dest_folio); btrfs_err(fs_info, "lzo decompression failed, error %d root %llu inode %llu offset %llu", ret, btrfs_root_id(inode->root), btrfs_ino(inode), - page_offset(dest_page)); + folio_pos(dest_folio)); ret = -EIO; goto out; } ASSERT(out_len <= sectorsize); - memcpy_to_page(dest_page, dest_pgoff, workspace->buf, out_len); + memcpy_to_folio(dest_folio, dest_pgoff, workspace->buf, out_len); /* Early end, considered as an error. 
*/ if (unlikely(out_len < destlen)) { ret = -EIO; - memzero_page(dest_page, dest_pgoff + out_len, destlen - out_len); + folio_zero_range(dest_folio, dest_pgoff + out_len, destlen - out_len); } out: return ret;

From patchwork Thu Aug 22 01:37:12 2024
X-Patchwork-Submitter: lizetao
X-Patchwork-Id: 13772453
From: Li Zetao
Subject: [PATCH 12/14] btrfs: convert zstd_decompress() to take a folio
Date: Thu, 22 Aug 2024 09:37:12 +0800
Message-ID: <20240822013714.3278193-13-lizetao1@huawei.com>
In-Reply-To: <20240822013714.3278193-1-lizetao1@huawei.com>
References: <20240822013714.3278193-1-lizetao1@huawei.com>
X-Mailing-List: linux-btrfs@vger.kernel.org

The old page API is being gradually replaced and converted to use folios, which improves code readability and avoids repeated conversions between page and folio. As in the lzo patch, memcpy_to_page() can be replaced with memcpy_to_folio(). There is no memzero_folio(), but folio_zero_range() is an equivalent replacement.
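The error reporting follows the same substitution pattern: the inode and file offset are derived from btrfs' folio helpers instead of going through page->mapping. The snippet below is a sketch in the shape of a hypothetical wrapper, not the literal zstd.c hunk; it assumes dest_folio describes the same destination the old dest_page did.

	/* Sketch of the folio-based error reporting used in this patch. */
	static void zstd_report_decompress_error(struct folio *dest_folio, int err)
	{
		/* was BTRFS_I(dest_page->mapping->host) */
		struct btrfs_inode *inode = folio_to_inode(dest_folio);

		btrfs_err(inode->root->fs_info,
			  "zstd decompression failed, error %d root %llu inode %llu offset %llu",
			  err, btrfs_root_id(inode->root), btrfs_ino(inode),
			  folio_pos(dest_folio));	/* was page_offset(dest_page) */
	}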
Signed-off-by: Li Zetao --- fs/btrfs/compression.c | 2 +- fs/btrfs/compression.h | 2 +- fs/btrfs/zstd.c | 16 ++++++++-------- 3 files changed, 10 insertions(+), 10 deletions(-) diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c index 8e67203ab97d..eab79edeb64c 100644 --- a/fs/btrfs/compression.c +++ b/fs/btrfs/compression.c @@ -146,7 +146,7 @@ static int compression_decompress(int type, struct list_head *ws, dest_pgoff, srclen, destlen); case BTRFS_COMPRESS_LZO: return lzo_decompress(ws, data_in, page_folio(dest_page), dest_pgoff, srclen, destlen); - case BTRFS_COMPRESS_ZSTD: return zstd_decompress(ws, data_in, dest_page, + case BTRFS_COMPRESS_ZSTD: return zstd_decompress(ws, data_in, page_folio(dest_page), dest_pgoff, srclen, destlen); case BTRFS_COMPRESS_NONE: default: diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h index 4b5a7ba54815..d2453cf28eef 100644 --- a/fs/btrfs/compression.h +++ b/fs/btrfs/compression.h @@ -183,7 +183,7 @@ int zstd_compress_folios(struct list_head *ws, struct address_space *mapping, unsigned long *total_in, unsigned long *total_out); int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb); int zstd_decompress(struct list_head *ws, const u8 *data_in, - struct page *dest_page, unsigned long dest_pgoff, size_t srclen, + struct folio *dest_folio, unsigned long dest_pgoff, size_t srclen, size_t destlen); void zstd_init_workspace_manager(void); void zstd_cleanup_workspace_manager(void); diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c index 05cf7cebc17c..866607fd3e58 100644 --- a/fs/btrfs/zstd.c +++ b/fs/btrfs/zstd.c @@ -656,11 +656,11 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb) } int zstd_decompress(struct list_head *ws, const u8 *data_in, - struct page *dest_page, unsigned long dest_pgoff, size_t srclen, + struct folio *dest_folio, unsigned long dest_pgoff, size_t srclen, size_t destlen) { struct workspace *workspace = list_entry(ws, struct workspace, list); - struct btrfs_fs_info *fs_info = btrfs_sb(dest_page->mapping->host->i_sb); + struct btrfs_fs_info *fs_info = btrfs_sb(folio_inode(dest_folio)->i_sb); const u32 sectorsize = fs_info->sectorsize; zstd_dstream *stream; int ret = 0; @@ -669,12 +669,12 @@ int zstd_decompress(struct list_head *ws, const u8 *data_in, stream = zstd_init_dstream( ZSTD_BTRFS_MAX_INPUT, workspace->mem, workspace->size); if (unlikely(!stream)) { - struct btrfs_inode *inode = BTRFS_I(dest_page->mapping->host); + struct btrfs_inode *inode = folio_to_inode(dest_folio); btrfs_err(inode->root->fs_info, "zstd decompression init failed, root %llu inode %llu offset %llu", btrfs_root_id(inode->root), btrfs_ino(inode), - page_offset(dest_page)); + folio_pos(dest_folio)); ret = -EIO; goto finish; } @@ -693,21 +693,21 @@ int zstd_decompress(struct list_head *ws, const u8 *data_in, */ ret = zstd_decompress_stream(stream, &workspace->out_buf, &workspace->in_buf); if (unlikely(zstd_is_error(ret))) { - struct btrfs_inode *inode = BTRFS_I(dest_page->mapping->host); + struct btrfs_inode *inode = folio_to_inode(dest_folio); btrfs_err(inode->root->fs_info, "zstd decompression failed, error %d root %llu inode %llu offset %llu", zstd_get_error_code(ret), btrfs_root_id(inode->root), - btrfs_ino(inode), page_offset(dest_page)); + btrfs_ino(inode), folio_pos(dest_folio)); goto finish; } to_copy = workspace->out_buf.pos; - memcpy_to_page(dest_page, dest_pgoff, workspace->out_buf.dst, to_copy); + memcpy_to_folio(dest_folio, dest_pgoff, workspace->out_buf.dst, to_copy); finish: /* Error or early 
end. */ if (unlikely(to_copy < destlen)) { ret = -EIO; - memzero_page(dest_page, dest_pgoff + to_copy, destlen - to_copy); + folio_zero_range(dest_folio, dest_pgoff + to_copy, destlen - to_copy); } return ret; }

From patchwork Thu Aug 22 01:37:13 2024
X-Patchwork-Submitter: lizetao
X-Patchwork-Id: 13772454
From: Li Zetao
Subject: [PATCH 13/14] btrfs: convert btrfs_decompress() to take a folio
Date: Thu, 22 Aug 2024 09:37:13 +0800
Message-ID: <20240822013714.3278193-14-lizetao1@huawei.com>
In-Reply-To: <20240822013714.3278193-1-lizetao1@huawei.com>
References: <20240822013714.3278193-1-lizetao1@huawei.com>
X-Mailing-List: linux-btrfs@vger.kernel.org

The old page API is being gradually replaced and converted to use folios, which improves code readability and avoids repeated conversions between page and folio. Building on the previous patches, the compression code can now be passed folios directly, without converting back to pages.
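After this patch the dispatcher simply forwards the folio, and only callers that still hold a struct page need page_folio(). The following is a condensed sketch of compression_decompress(), not the full function; the return value for the BTRFS_COMPRESS_NONE/default case is an assumption made only to keep the sketch self-contained.

	static int compression_decompress(int type, struct list_head *ws,
					  const u8 *data_in, struct folio *dest_folio,
					  unsigned long dest_pgoff, size_t srclen,
					  size_t destlen)
	{
		switch (type) {
		case BTRFS_COMPRESS_ZLIB:
			return zlib_decompress(ws, data_in, dest_folio, dest_pgoff, srclen, destlen);
		case BTRFS_COMPRESS_LZO:
			return lzo_decompress(ws, data_in, dest_folio, dest_pgoff, srclen, destlen);
		case BTRFS_COMPRESS_ZSTD:
			return zstd_decompress(ws, data_in, dest_folio, dest_pgoff, srclen, destlen);
		default:
			return -EINVAL;	/* assumption for the sketch; see the real function */
		}
	}

	/* A caller that still has a struct page wraps it, as reflink.c does here: */
	/*	btrfs_decompress(comp_type, data_start, page_folio(page), ...);      */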
Signed-off-by: Li Zetao --- fs/btrfs/compression.c | 14 +++++++------- fs/btrfs/compression.h | 2 +- fs/btrfs/inode.c | 2 +- fs/btrfs/reflink.c | 2 +- 4 files changed, 10 insertions(+), 10 deletions(-) diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c index eab79edeb64c..9869944a5acc 100644 --- a/fs/btrfs/compression.c +++ b/fs/btrfs/compression.c @@ -138,15 +138,15 @@ static int compression_decompress_bio(struct list_head *ws, } static int compression_decompress(int type, struct list_head *ws, - const u8 *data_in, struct page *dest_page, + const u8 *data_in, struct folio *dest_folio, unsigned long dest_pgoff, size_t srclen, size_t destlen) { switch (type) { - case BTRFS_COMPRESS_ZLIB: return zlib_decompress(ws, data_in, page_folio(dest_page), + case BTRFS_COMPRESS_ZLIB: return zlib_decompress(ws, data_in, dest_folio, dest_pgoff, srclen, destlen); - case BTRFS_COMPRESS_LZO: return lzo_decompress(ws, data_in, page_folio(dest_page), + case BTRFS_COMPRESS_LZO: return lzo_decompress(ws, data_in, dest_folio, dest_pgoff, srclen, destlen); - case BTRFS_COMPRESS_ZSTD: return zstd_decompress(ws, data_in, page_folio(dest_page), + case BTRFS_COMPRESS_ZSTD: return zstd_decompress(ws, data_in, dest_folio, dest_pgoff, srclen, destlen); case BTRFS_COMPRESS_NONE: default: @@ -1061,10 +1061,10 @@ static int btrfs_decompress_bio(struct compressed_bio *cb) * single page, and we want to read a single page out of it. * start_byte tells us the offset into the compressed data we're interested in */ -int btrfs_decompress(int type, const u8 *data_in, struct page *dest_page, +int btrfs_decompress(int type, const u8 *data_in, struct folio *dest_folio, unsigned long dest_pgoff, size_t srclen, size_t destlen) { - struct btrfs_fs_info *fs_info = page_to_fs_info(dest_page); + struct btrfs_fs_info *fs_info = folio_to_fs_info(dest_folio); struct list_head *workspace; const u32 sectorsize = fs_info->sectorsize; int ret; @@ -1077,7 +1077,7 @@ int btrfs_decompress(int type, const u8 *data_in, struct page *dest_page, ASSERT(dest_pgoff + destlen <= PAGE_SIZE && destlen <= sectorsize); workspace = get_workspace(type, 0); - ret = compression_decompress(type, workspace, data_in, dest_page, + ret = compression_decompress(type, workspace, data_in, dest_folio, dest_pgoff, srclen, destlen); put_workspace(type, workspace); diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h index d2453cf28eef..b6563b6a333e 100644 --- a/fs/btrfs/compression.h +++ b/fs/btrfs/compression.h @@ -96,7 +96,7 @@ void __cold btrfs_exit_compress(void); int btrfs_compress_folios(unsigned int type_level, struct address_space *mapping, u64 start, struct folio **folios, unsigned long *out_folios, unsigned long *total_in, unsigned long *total_out); -int btrfs_decompress(int type, const u8 *data_in, struct page *dest_page, +int btrfs_decompress(int type, const u8 *data_in, struct folio *dest_folio, unsigned long start_byte, size_t srclen, size_t destlen); int btrfs_decompress_buf2page(const char *buf, u32 buf_len, struct compressed_bio *cb, u32 decompressed); diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index e844c409c12a..f9ade617700f 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -6746,7 +6746,7 @@ static noinline int uncompress_inline(struct btrfs_path *path, read_extent_buffer(leaf, tmp, ptr, inline_size); max_size = min_t(unsigned long, PAGE_SIZE, max_size); - ret = btrfs_decompress(compress_type, tmp, &folio->page, 0, inline_size, + ret = btrfs_decompress(compress_type, tmp, folio, 0, inline_size, max_size); /* diff --git 
a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c index df6b93b927cd..b768e590a44c 100644 --- a/fs/btrfs/reflink.c +++ b/fs/btrfs/reflink.c @@ -118,7 +118,7 @@ static int copy_inline_to_page(struct btrfs_inode *inode, memcpy_to_page(page, offset_in_page(file_offset), data_start, datal); } else { - ret = btrfs_decompress(comp_type, data_start, page, + ret = btrfs_decompress(comp_type, data_start, page_folio(page), offset_in_page(file_offset), inline_size, datal); if (ret)

From patchwork Thu Aug 22 01:37:14 2024
X-Patchwork-Submitter: lizetao
X-Patchwork-Id: 13772455
From: Li Zetao
Subject: [PATCH 14/14] btrfs: convert copy_inline_to_page() to use folio
Date: Thu, 22 Aug 2024 09:37:14 +0800
Message-ID: <20240822013714.3278193-15-lizetao1@huawei.com>
In-Reply-To: <20240822013714.3278193-1-lizetao1@huawei.com>
References: <20240822013714.3278193-1-lizetao1@huawei.com>
X-Mailing-List: linux-btrfs@vger.kernel.org

The old page API is being gradually replaced and converted to use folios, which improves code readability and avoids repeated conversions between page and folio. Moreover, find_or_create_page() is a compatibility API and can be replaced with __filemap_get_folio(). Some of the interfaces involved were already converted to folios by the previous patches, so the page-to-folio conversions can be dropped here.
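For reference, find_or_create_page() is defined as a pagecache lookup with FGP_LOCK | FGP_ACCESSED | FGP_CREAT, so the folio call below is the direct equivalent; the main difference is the error convention, since __filemap_get_folio() reports failure with an ERR_PTR() rather than NULL. This is a sketch reusing the 'mapping' and 'file_offset' names from copy_inline_to_page(), not the literal hunk.

	/* old: locked page or NULL on failure */
	page = find_or_create_page(mapping, file_offset >> PAGE_SHIFT,
				   btrfs_alloc_write_mask(mapping));
	if (!page)
		return -ENOMEM;

	/* new: same FGP flags, but failure is an ERR_PTR(), not NULL */
	folio = __filemap_get_folio(mapping, file_offset >> PAGE_SHIFT,
				    FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
				    btrfs_alloc_write_mask(mapping));
	if (IS_ERR(folio))
		return -ENOMEM;

The cleanup at out_unlock has to follow the same convention: the folio is unlocked and released only when the lookup succeeded, i.e. under !IS_ERR(folio).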
Signed-off-by: Li Zetao --- fs/btrfs/reflink.c | 35 ++++++++++++++++++----------------- 1 file changed, 18 insertions(+), 17 deletions(-) diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c index b768e590a44c..1681d63f03dd 100644 --- a/fs/btrfs/reflink.c +++ b/fs/btrfs/reflink.c @@ -66,7 +66,7 @@ static int copy_inline_to_page(struct btrfs_inode *inode, const size_t inline_size = size - btrfs_file_extent_calc_inline_size(0); char *data_start = inline_data + btrfs_file_extent_calc_inline_size(0); struct extent_changeset *data_reserved = NULL; - struct page *page = NULL; + struct folio *folio = NULL; struct address_space *mapping = inode->vfs_inode.i_mapping; int ret; @@ -83,14 +83,15 @@ static int copy_inline_to_page(struct btrfs_inode *inode, if (ret) goto out; - page = find_or_create_page(mapping, file_offset >> PAGE_SHIFT, - btrfs_alloc_write_mask(mapping)); - if (!page) { + folio = __filemap_get_folio(mapping, file_offset >> PAGE_SHIFT, + FGP_LOCK | FGP_ACCESSED | FGP_CREAT, + btrfs_alloc_write_mask(mapping)); + if (IS_ERR(folio)) { ret = -ENOMEM; goto out_unlock; } - ret = set_page_extent_mapped(page); + ret = set_folio_extent_mapped(folio); if (ret < 0) goto out_unlock; @@ -115,15 +116,15 @@ static int copy_inline_to_page(struct btrfs_inode *inode, set_bit(BTRFS_INODE_NO_DELALLOC_FLUSH, &inode->runtime_flags); if (comp_type == BTRFS_COMPRESS_NONE) { - memcpy_to_page(page, offset_in_page(file_offset), data_start, - datal); + memcpy_to_folio(folio, offset_in_folio(folio, file_offset), data_start, + datal); } else { - ret = btrfs_decompress(comp_type, data_start, page_folio(page), - offset_in_page(file_offset), + ret = btrfs_decompress(comp_type, data_start, folio, + offset_in_folio(folio, file_offset), inline_size, datal); if (ret) goto out_unlock; - flush_dcache_page(page); + flush_dcache_folio(folio); } /* @@ -139,15 +140,15 @@ static int copy_inline_to_page(struct btrfs_inode *inode, * So what's in the range [500, 4095] corresponds to zeroes. */ if (datal < block_size) - memzero_page(page, datal, block_size - datal); + folio_zero_range(folio, datal, block_size - datal); - btrfs_folio_set_uptodate(fs_info, page_folio(page), file_offset, block_size); - btrfs_folio_clear_checked(fs_info, page_folio(page), file_offset, block_size); - btrfs_folio_set_dirty(fs_info, page_folio(page), file_offset, block_size); + btrfs_folio_set_uptodate(fs_info, folio, file_offset, block_size); + btrfs_folio_clear_checked(fs_info, folio, file_offset, block_size); + btrfs_folio_set_dirty(fs_info, folio, file_offset, block_size); out_unlock: - if (page) { - unlock_page(page); - put_page(page); + if (!IS_ERR(folio)) { + folio_unlock(folio); + folio_put(folio); } if (ret) btrfs_delalloc_release_space(inode, data_reserved, file_offset,