From patchwork Thu Dec 14 16:13:29 2023
X-Patchwork-Id: 13493229
From: "Matthew Wilcox (Oracle)"
To: Qu Wenruo
Cc: "Matthew Wilcox (Oracle)", Chris Mason, Josef Bacik, David Sterba,
 linux-btrfs@vger.kernel.org
Subject: [PATCH v2 1/3] btrfs: Add set_folio_extent_mapped()
Date: Thu, 14 Dec 2023 16:13:29 +0000
Message-Id: <20231214161331.2022416-2-willy@infradead.org>
In-Reply-To: <20231214161331.2022416-1-willy@infradead.org>
References: <20231214161331.2022416-1-willy@infradead.org>

Turn set_page_extent_mapped() into a wrapper around this version.  Saves
a call to compound_head() for callers who already have a folio and
removes a couple of users of page->mapping.
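For callers that already hold a folio, the saving is the hidden
page_folio()/compound_head() lookup; a minimal before/after sketch
(the calling context is illustrative, not code from this series, and
error handling is elided):

	struct folio *folio = filemap_lock_folio(mapping, index);

	/* Before: demote the folio to a page, only for
	 * set_page_extent_mapped() to promote it straight back. */
	ret = set_page_extent_mapped(&folio->page);

	/* After: no hidden compound_head() round trip. */
	ret = set_folio_extent_mapped(folio);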
Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/btrfs/extent_io.c | 12 ++++++++----
 fs/btrfs/extent_io.h |  1 +
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 2a883c21c99f..ed75413aa9ae 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -936,17 +936,21 @@ static int attach_extent_buffer_folio(struct extent_buffer *eb,
 
 int set_page_extent_mapped(struct page *page)
 {
-	struct folio *folio = page_folio(page);
+	return set_folio_extent_mapped(page_folio(page));
+}
+
+int set_folio_extent_mapped(struct folio *folio)
+{
 	struct btrfs_fs_info *fs_info;
 
-	ASSERT(page->mapping);
+	ASSERT(folio->mapping);
 
 	if (folio_test_private(folio))
 		return 0;
 
-	fs_info = btrfs_sb(page->mapping->host->i_sb);
+	fs_info = btrfs_sb(folio->mapping->host->i_sb);
 
-	if (btrfs_is_subpage(fs_info, page->mapping))
+	if (btrfs_is_subpage(fs_info, folio->mapping))
 		return btrfs_attach_subpage(fs_info, folio, BTRFS_SUBPAGE_DATA);
 
 	folio_attach_private(folio, (void *)EXTENT_FOLIO_PRIVATE);
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index 46050500529b..2c9d6570b0a3 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -221,6 +221,7 @@ int btree_write_cache_pages(struct address_space *mapping,
 void extent_readahead(struct readahead_control *rac);
 int extent_fiemap(struct btrfs_inode *inode, struct fiemap_extent_info *fieinfo,
 		  u64 start, u64 len);
+int set_folio_extent_mapped(struct folio *folio);
 int set_page_extent_mapped(struct page *page);
 void clear_page_extent_mapped(struct page *page);

From patchwork Thu Dec 14 16:13:30 2023
X-Patchwork-Id: 13493228
From: "Matthew Wilcox (Oracle)"
To: Qu Wenruo
Cc: "Matthew Wilcox (Oracle)", Chris Mason, Josef Bacik, David Sterba,
 linux-btrfs@vger.kernel.org
Subject: [PATCH v2 2/3] btrfs: Convert defrag_prepare_one_page() to use a folio
Date: Thu, 14 Dec 2023 16:13:30 +0000
Message-Id: <20231214161331.2022416-3-willy@infradead.org>
In-Reply-To: <20231214161331.2022416-1-willy@infradead.org>
References: <20231214161331.2022416-1-willy@infradead.org>

Use a folio throughout defrag_prepare_one_page() to remove dozens of
hidden calls to compound_head().  There is no support here for large
folios; indeed, turn the existing check for PageCompound into a check
for large folios.
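The page-cache call being replaced is itself a thin wrapper around the
same lookup with the same flags; a sketch of the correspondence
(simplified from the pagemap.h wrapper, not the verbatim source):

	static inline struct page *find_or_create_page(struct address_space *mapping,
			pgoff_t index, gfp_t gfp)
	{
		return pagecache_get_page(mapping, index,
				FGP_LOCK | FGP_ACCESSED | FGP_CREAT, gfp);
	}

The one caller-visible difference is the failure convention: NULL from
the page API versus an ERR_PTR() from __filemap_get_folio(), which is
why the error check changes shape below.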
Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/btrfs/defrag.c | 53 +++++++++++++++++++++++-----------------------
 1 file changed, 27 insertions(+), 26 deletions(-)

diff --git a/fs/btrfs/defrag.c b/fs/btrfs/defrag.c
index c276b136ab63..07c40abfe3d7 100644
--- a/fs/btrfs/defrag.c
+++ b/fs/btrfs/defrag.c
@@ -868,13 +868,14 @@ static struct page *defrag_prepare_one_page(struct btrfs_inode *inode, pgoff_t index)
 	u64 page_start = (u64)index << PAGE_SHIFT;
 	u64 page_end = page_start + PAGE_SIZE - 1;
 	struct extent_state *cached_state = NULL;
-	struct page *page;
+	struct folio *folio;
 	int ret;
 
 again:
-	page = find_or_create_page(mapping, index, mask);
-	if (!page)
-		return ERR_PTR(-ENOMEM);
+	folio = __filemap_get_folio(mapping, index,
+			FGP_LOCK | FGP_ACCESSED | FGP_CREAT, mask);
+	if (IS_ERR(folio))
+		return &folio->page;
 
 	/*
 	 * Since we can defragment files opened read-only, we can encounter
@@ -884,16 +885,16 @@ static struct page *defrag_prepare_one_page(struct btrfs_inode *inode, pgoff_t index)
 	 * executables that explicitly enable them, so this isn't very
 	 * restrictive.
 	 */
-	if (PageCompound(page)) {
-		unlock_page(page);
-		put_page(page);
+	if (folio_test_large(folio)) {
+		folio_unlock(folio);
+		folio_put(folio);
 		return ERR_PTR(-ETXTBSY);
 	}
 
-	ret = set_page_extent_mapped(page);
+	ret = set_folio_extent_mapped(folio);
 	if (ret < 0) {
-		unlock_page(page);
-		put_page(page);
+		folio_unlock(folio);
+		folio_put(folio);
 		return ERR_PTR(ret);
 	}
 
@@ -908,17 +909,17 @@ static struct page *defrag_prepare_one_page(struct btrfs_inode *inode, pgoff_t index)
 		if (!ordered)
 			break;
 
-		unlock_page(page);
+		folio_unlock(folio);
 		btrfs_start_ordered_extent(ordered);
 		btrfs_put_ordered_extent(ordered);
-		lock_page(page);
+		folio_lock(folio);
 		/*
-		 * We unlocked the page above, so we need check if it was
+		 * We unlocked the folio above, so we need to check if it was
 		 * released or not.
 		 */
-		if (page->mapping != mapping || !PagePrivate(page)) {
-			unlock_page(page);
-			put_page(page);
+		if (folio->mapping != mapping || !folio->private) {
+			folio_unlock(folio);
+			folio_put(folio);
 			goto again;
 		}
 	}
@@ -927,21 +928,21 @@ static struct page *defrag_prepare_one_page(struct btrfs_inode *inode, pgoff_t index)
 	 * Now the page range has no ordered extent any more.  Read the page to
 	 * make it uptodate.
 	 */
-	if (!PageUptodate(page)) {
-		btrfs_read_folio(NULL, page_folio(page));
-		lock_page(page);
-		if (page->mapping != mapping || !PagePrivate(page)) {
-			unlock_page(page);
-			put_page(page);
+	if (!folio_test_uptodate(folio)) {
+		btrfs_read_folio(NULL, folio);
+		folio_lock(folio);
+		if (folio->mapping != mapping || !folio->private) {
+			folio_unlock(folio);
+			folio_put(folio);
 			goto again;
 		}
-		if (!PageUptodate(page)) {
-			unlock_page(page);
-			put_page(page);
+		if (!folio_test_uptodate(folio)) {
+			folio_unlock(folio);
+			folio_put(folio);
 			return ERR_PTR(-EIO);
 		}
 	}
-	return page;
+	return &folio->page;
 }
 
 struct defrag_target_range {
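The repeated mapping/private checks in the patch above are the standard
revalidation idiom: whenever the folio lock is dropped (to wait on an
ordered extent, or while btrfs_read_folio() completes), the folio may be
truncated out of the mapping or lose its private state, so both must be
rechecked after relocking.  The shape of the idiom in isolation (a
restatement of the patch, not new code):

	folio_lock(folio);
	/* While unlocked, the folio may have been released. */
	if (folio->mapping != mapping || !folio->private) {
		folio_unlock(folio);
		folio_put(folio);
		goto again;	/* retry with a fresh lookup */
	}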
From patchwork Thu Dec 14 16:13:31 2023
X-Patchwork-Id: 13493230
From: "Matthew Wilcox (Oracle)"
To: Qu Wenruo
Cc: "Matthew Wilcox (Oracle)", Chris Mason, Josef Bacik, David Sterba,
 linux-btrfs@vger.kernel.org
Subject: [PATCH v2 3/3] btrfs: Use a folio array throughout the defrag process
Date: Thu, 14 Dec 2023 16:13:31 +0000
Message-Id: <20231214161331.2022416-4-willy@infradead.org>
In-Reply-To: <20231214161331.2022416-1-willy@infradead.org>
References: <20231214161331.2022416-1-willy@infradead.org>

Remove more hidden calls to compound_head() by using an array of folios
instead of pages.  Also neaten the error path in defrag_one_range() by
adjusting the length of the array instead of checking for NULL.
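The error-path change rests on a simple invariant: if preparing slot i
fails, slots [0, i) hold valid, locked folios, so clamping nr_pages to i
lets the unwind loop run unconditionally with no NULL checks.  Sketched
in isolation (names follow the patch below):

	for (i = 0; i < nr_pages; i++) {
		folios[i] = defrag_prepare_one_folio(inode, start_index + i);
		if (IS_ERR(folios[i])) {
			ret = PTR_ERR(folios[i]);
			nr_pages = i;	/* only [0, i) need cleanup */
			goto free_folios;
		}
	}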
Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/btrfs/defrag.c | 44 +++++++++++++++++++++----------------------
 1 file changed, 21 insertions(+), 23 deletions(-)

diff --git a/fs/btrfs/defrag.c b/fs/btrfs/defrag.c
index 07c40abfe3d7..8f90dc46aae6 100644
--- a/fs/btrfs/defrag.c
+++ b/fs/btrfs/defrag.c
@@ -861,7 +861,7 @@ static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em,
  * NOTE: Caller should also wait for page writeback after the cluster is
  * prepared, here we don't do writeback wait for each page.
  */
-static struct page *defrag_prepare_one_page(struct btrfs_inode *inode, pgoff_t index)
+static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode, pgoff_t index)
 {
 	struct address_space *mapping = inode->vfs_inode.i_mapping;
 	gfp_t mask = btrfs_alloc_write_mask(mapping);
@@ -875,7 +875,7 @@ static struct page *defrag_prepare_one_page(struct btrfs_inode *inode, pgoff_t index)
 	folio = __filemap_get_folio(mapping, index,
 			FGP_LOCK | FGP_ACCESSED | FGP_CREAT, mask);
 	if (IS_ERR(folio))
-		return &folio->page;
+		return folio;
 
 	/*
 	 * Since we can defragment files opened read-only, we can encounter
@@ -942,7 +942,7 @@ static struct page *defrag_prepare_one_page(struct btrfs_inode *inode, pgoff_t index)
 			return ERR_PTR(-EIO);
 		}
 	}
-	return &folio->page;
+	return folio;
 }
 
 struct defrag_target_range {
@@ -1163,7 +1163,7 @@ static_assert(PAGE_ALIGNED(CLUSTER_SIZE));
  */
 static int defrag_one_locked_target(struct btrfs_inode *inode,
 				    struct defrag_target_range *target,
-				    struct page **pages, int nr_pages,
+				    struct folio **folios, int nr_pages,
 				    struct extent_state **cached_state)
 {
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
@@ -1172,7 +1172,7 @@ static int defrag_one_locked_target(struct btrfs_inode *inode,
 	const u64 len = target->len;
 	unsigned long last_index = (start + len - 1) >> PAGE_SHIFT;
 	unsigned long start_index = start >> PAGE_SHIFT;
-	unsigned long first_index = page_index(pages[0]);
+	unsigned long first_index = folios[0]->index;
 	int ret = 0;
 	int i;
 
@@ -1189,8 +1189,8 @@ static int defrag_one_locked_target(struct btrfs_inode *inode,
 
 	/* Update the page status */
 	for (i = start_index - first_index; i <= last_index - first_index; i++) {
-		ClearPageChecked(pages[i]);
-		btrfs_folio_clamp_set_dirty(fs_info, page_folio(pages[i]), start, len);
+		folio_clear_checked(folios[i]);
+		btrfs_folio_clamp_set_dirty(fs_info, folios[i], start, len);
 	}
 	btrfs_delalloc_release_extents(inode, len);
 	extent_changeset_free(data_reserved);
@@ -1206,7 +1206,7 @@ static int defrag_one_range(struct btrfs_inode *inode, u64 start, u32 len,
 	struct defrag_target_range *entry;
 	struct defrag_target_range *tmp;
 	LIST_HEAD(target_list);
-	struct page **pages;
+	struct folio **folios;
 	const u32 sectorsize = inode->root->fs_info->sectorsize;
 	u64 last_index = (start + len - 1) >> PAGE_SHIFT;
 	u64 start_index = start >> PAGE_SHIFT;
@@ -1217,21 +1217,21 @@ static int defrag_one_range(struct btrfs_inode *inode, u64 start, u32 len,
 	ASSERT(nr_pages <= CLUSTER_SIZE / PAGE_SIZE);
 	ASSERT(IS_ALIGNED(start, sectorsize) && IS_ALIGNED(len, sectorsize));
 
-	pages = kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS);
-	if (!pages)
+	folios = kcalloc(nr_pages, sizeof(struct folio *), GFP_NOFS);
+	if (!folios)
 		return -ENOMEM;
 
 	/* Prepare all pages */
 	for (i = 0; i < nr_pages; i++) {
-		pages[i] = defrag_prepare_one_page(inode, start_index + i);
-		if (IS_ERR(pages[i])) {
-			ret = PTR_ERR(pages[i]);
-			pages[i] = NULL;
-			goto free_pages;
+		folios[i] = defrag_prepare_one_folio(inode, start_index + i);
+		if (IS_ERR(folios[i])) {
+			ret = PTR_ERR(folios[i]);
+			nr_pages = i;
+			goto free_folios;
 		}
 	}
 	for (i = 0; i < nr_pages; i++)
-		wait_on_page_writeback(pages[i]);
+		folio_wait_writeback(folios[i]);
 
 	/* Lock the pages range */
 	lock_extent(&inode->io_tree, start_index << PAGE_SHIFT,
@@ -1251,7 +1251,7 @@ static int defrag_one_range(struct btrfs_inode *inode, u64 start, u32 len,
 		goto unlock_extent;
 
 	list_for_each_entry(entry, &target_list, list) {
-		ret = defrag_one_locked_target(inode, entry, pages, nr_pages,
+		ret = defrag_one_locked_target(inode, entry, folios, nr_pages,
 					       &cached_state);
 		if (ret < 0)
 			break;
@@ -1265,14 +1265,12 @@ static int defrag_one_range(struct btrfs_inode *inode, u64 start, u32 len,
 	unlock_extent(&inode->io_tree, start_index << PAGE_SHIFT,
 		      (last_index << PAGE_SHIFT) + PAGE_SIZE - 1,
 		      &cached_state);
-free_pages:
+free_folios:
 	for (i = 0; i < nr_pages; i++) {
-		if (pages[i]) {
-			unlock_page(pages[i]);
-			put_page(pages[i]);
-		}
+		folio_unlock(folios[i]);
+		folio_put(folios[i]);
 	}
-	kfree(pages);
+	kfree(folios);
 	return ret;
 }
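For reference, every conversion in this series is a one-to-one swap of a
page API for its existing folio equivalent (a summary compiled from the
diffs above, not an exhaustive table):

	lock_page(page)               ->  folio_lock(folio)
	unlock_page(page)             ->  folio_unlock(folio)
	put_page(page)                ->  folio_put(folio)
	PageUptodate(page)            ->  folio_test_uptodate(folio)
	PagePrivate(page)             ->  folio->private
	PageCompound(page)            ->  folio_test_large(folio)
	ClearPageChecked(page)        ->  folio_clear_checked(folio)
	wait_on_page_writeback(page)  ->  folio_wait_writeback(folio)
	page_index(page)              ->  folio->index
	page->mapping                 ->  folio->mapping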