From patchwork Thu Feb 16 16:34:26 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13143485
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 01/12] btrfs: remove the force_bio_submit to submit_extent_page
Date: Thu, 16 Feb 2023 17:34:26 +0100
Message-Id: <20230216163437.2370948-2-hch@lst.de>
In-Reply-To: <20230216163437.2370948-1-hch@lst.de>
References: <20230216163437.2370948-1-hch@lst.de>

If force_bio_submit is set, submit_extent_page simply calls submit_one_bio
as the first thing.  This can just be moved to the two callers that set
force_bio_submit to true.

Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
 fs/btrfs/extent_io.c | 23 ++++++++++-------------
 1 file changed, 10 insertions(+), 13 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index c25fa74d7615f7..b53486ed8804af 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1027,8 +1027,7 @@ static int submit_extent_page(blk_opf_t opf,
 			      struct btrfs_bio_ctrl *bio_ctrl,
 			      u64 disk_bytenr, struct page *page,
 			      size_t size, unsigned long pg_offset,
-			      enum btrfs_compression_type compress_type,
-			      bool force_bio_submit)
+			      enum btrfs_compression_type compress_type)
 {
 	struct btrfs_inode *inode = BTRFS_I(page->mapping->host);
 	unsigned int cur = pg_offset;
@@ -1040,9 +1039,6 @@ static int submit_extent_page(blk_opf_t opf,
 
 	ASSERT(bio_ctrl->end_io_func);
 
-	if (force_bio_submit)
-		submit_one_bio(bio_ctrl);
-
 	while (cur < pg_offset + size) {
 		u32 offset = cur - pg_offset;
 		int added;
@@ -1331,10 +1327,11 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
 			continue;
 		}
 
+		if (force_bio_submit)
+			submit_one_bio(bio_ctrl);
 		ret = submit_extent_page(REQ_OP_READ | read_flags, NULL,
 					 bio_ctrl, disk_bytenr, page, iosize,
-					 pg_offset, this_bio_flag,
-					 force_bio_submit);
+					 pg_offset, this_bio_flag);
 		if (ret) {
 			/*
 			 * We have to unlock the remaining range, or the page
@@ -1645,8 +1642,7 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
 		ret = submit_extent_page(op | write_flags, wbc,
 					 bio_ctrl, disk_bytenr,
 					 page, iosize,
-					 cur - page_offset(page),
-					 0, false);
+					 cur - page_offset(page), 0);
 		if (ret) {
 			has_error = true;
 			if (!saved_ret)
@@ -2139,7 +2135,7 @@ static int write_one_subpage_eb(struct extent_buffer *eb,
 	ret = submit_extent_page(REQ_OP_WRITE | write_flags, wbc,
 				 bio_ctrl, eb->start, page, eb->len,
-				 eb->start - page_offset(page), 0, false);
+				 eb->start - page_offset(page), 0);
 	if (ret) {
 		btrfs_subpage_clear_writeback(fs_info, page, eb->start, eb->len);
 		set_btree_ioerr(page, eb);
@@ -2180,7 +2176,7 @@ static noinline_for_stack int write_one_eb(struct extent_buffer *eb,
 		set_page_writeback(p);
 		ret = submit_extent_page(REQ_OP_WRITE | write_flags, wbc,
 					 bio_ctrl, disk_bytenr, p,
-					 PAGE_SIZE, 0, 0, false);
+					 PAGE_SIZE, 0, 0);
 		if (ret) {
 			set_btree_ioerr(p, eb);
 			if (PageWriteback(p))
@@ -4446,9 +4442,10 @@ static int read_extent_buffer_subpage(struct extent_buffer *eb, int wait,
 	btrfs_subpage_clear_error(fs_info, page, eb->start, eb->len);
 
 	btrfs_subpage_start_reader(fs_info, page, eb->start, eb->len);
+	submit_one_bio(&bio_ctrl);
 	ret = submit_extent_page(REQ_OP_READ, NULL, &bio_ctrl,
 				 eb->start, page, eb->len,
-				 eb->start - page_offset(page), 0, true);
+				 eb->start - page_offset(page), 0);
 	if (ret) {
 		/*
 		 * In the endio function, if we hit something wrong we will
@@ -4558,7 +4555,7 @@ int read_extent_buffer_pages(struct extent_buffer *eb, int wait, int mirror_num,
 		ClearPageError(page);
 		err = submit_extent_page(REQ_OP_READ, NULL,
 					 &bio_ctrl, page_offset(page), page,
-					 PAGE_SIZE, 0, 0, false);
+					 PAGE_SIZE, 0, 0);
 		if (err) {
 			/*
 			 * We failed to submit the bio so it's the

From patchwork Thu Feb 16 16:34:27 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13143486
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 02/12] btrfs: remove a bogus submit_one_bio call in read_extent_buffer_subpage
Date: Thu, 16 Feb 2023 17:34:27 +0100
Message-Id: <20230216163437.2370948-3-hch@lst.de>
The first submit_one_bio call in read_extent_buffer_subpage is for a
btrfs_bio_ctrl structure that has just been initialized and thus can't
have a non-NULL bio, so remove it.

Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
 fs/btrfs/extent_io.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index b53486ed8804af..e9639128962c99 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -4442,7 +4442,6 @@ static int read_extent_buffer_subpage(struct extent_buffer *eb, int wait,
 	btrfs_subpage_clear_error(fs_info, page, eb->start, eb->len);
 
 	btrfs_subpage_start_reader(fs_info, page, eb->start, eb->len);
-	submit_one_bio(&bio_ctrl);
 	ret = submit_extent_page(REQ_OP_READ, NULL, &bio_ctrl,
 				 eb->start, page, eb->len,
 				 eb->start - page_offset(page), 0);

From patchwork Thu Feb 16 16:34:28 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13143487
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 03/12] btrfs: store the bio opf in struct btrfs_bio_ctrl
Date: Thu, 16 Feb 2023 17:34:28 +0100
Message-Id: <20230216163437.2370948-4-hch@lst.de>

The bio op and flags never change over the lifetime of a bio_ctrl, so
store them in the bio_ctrl instead of passing them down the deep call
chain all the way to alloc_new_bio.
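[Editor's note] The pattern this patch applies is general: a parameter that is invariant for the whole operation can live in the control structure set up once by the caller, instead of being threaded through every helper. The sketch below is illustrative only, not btrfs code; all names (`bio_ctrl`, `opf_t`, `submit_page`) are hypothetical stand-ins.

```c
#include <assert.h>

typedef unsigned int opf_t;

struct bio_ctrl {
	opf_t opf;        /* set once when the ctrl is initialized, never changes */
	int submitted;    /* count of simulated submissions */
};

/* Helpers read the invariant op/flags from the ctrl instead of taking
 * them as an extra argument on every call. */
static opf_t alloc_new_bio(const struct bio_ctrl *ctrl)
{
	return ctrl->opf;
}

static opf_t submit_page(struct bio_ctrl *ctrl)
{
	ctrl->submitted++;
	return alloc_new_bio(ctrl);
}
```

The win is purely in the signatures: every function between the top-level caller and `alloc_new_bio` loses one parameter, and the value can no longer diverge between call sites.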
Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
Reviewed-by: Qu Wenruo
---
 fs/btrfs/extent_io.c | 65 ++++++++++++++++++++------------------
 1 file changed, 29 insertions(+), 36 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index e9639128962c99..1539ecea85812e 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -101,6 +101,7 @@ struct btrfs_bio_ctrl {
 	int mirror_num;
 	enum btrfs_compression_type compress_type;
 	u32 len_to_oe_boundary;
+	blk_opf_t opf;
 	btrfs_bio_end_io_t end_io_func;
 
 	/*
@@ -973,15 +974,15 @@ static void calc_bio_boundaries(struct btrfs_bio_ctrl *bio_ctrl,
 
 static void alloc_new_bio(struct btrfs_inode *inode,
 			  struct btrfs_bio_ctrl *bio_ctrl,
-			  struct writeback_control *wbc, blk_opf_t opf,
+			  struct writeback_control *wbc,
 			  u64 disk_bytenr, u32 offset, u64 file_offset,
 			  enum btrfs_compression_type compress_type)
 {
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 	struct bio *bio;
 
-	bio = btrfs_bio_alloc(BIO_MAX_VECS, opf, inode, bio_ctrl->end_io_func,
-			      NULL);
+	bio = btrfs_bio_alloc(BIO_MAX_VECS, bio_ctrl->opf, inode,
+			      bio_ctrl->end_io_func, NULL);
 	/*
 	 * For compressed page range, its disk_bytenr is always @disk_bytenr
 	 * passed in, no matter if we have added any range into previous bio.
@@ -1008,7 +1009,6 @@ static void alloc_new_bio(struct btrfs_inode *inode,
 }
 
 /*
- * @opf:	bio REQ_OP_* and REQ_* flags as one value
 * @wbc:	optional writeback control for io accounting
 * @disk_bytenr: logical bytenr where the write will be
 * @page:	page to add to the bio
@@ -1022,8 +1022,7 @@ static void alloc_new_bio(struct btrfs_inode *inode,
 * The mirror number for this IO should already be initizlied in
 * @bio_ctrl->mirror_num.
 */
-static int submit_extent_page(blk_opf_t opf,
-			      struct writeback_control *wbc,
+static int submit_extent_page(struct writeback_control *wbc,
 			      struct btrfs_bio_ctrl *bio_ctrl,
 			      u64 disk_bytenr, struct page *page,
 			      size_t size, unsigned long pg_offset,
@@ -1045,7 +1044,7 @@ static int submit_extent_page(blk_opf_t opf,
 
 		/* Allocate new bio if needed */
 		if (!bio_ctrl->bio) {
-			alloc_new_bio(inode, bio_ctrl, wbc, opf, disk_bytenr,
+			alloc_new_bio(inode, bio_ctrl, wbc, disk_bytenr,
 				      offset, page_offset(page) + cur,
 				      compress_type);
 		}
@@ -1189,8 +1188,7 @@ __get_extent_map(struct inode *inode, struct page *page, size_t pg_offset,
 * return 0 on success, otherwise return error
 */
 static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
-		      struct btrfs_bio_ctrl *bio_ctrl,
-		      blk_opf_t read_flags, u64 *prev_em_start)
+		      struct btrfs_bio_ctrl *bio_ctrl, u64 *prev_em_start)
 {
 	struct inode *inode = page->mapping->host;
 	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
@@ -1329,8 +1327,7 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
 
 		if (force_bio_submit)
 			submit_one_bio(bio_ctrl);
-		ret = submit_extent_page(REQ_OP_READ | read_flags, NULL,
-					 bio_ctrl, disk_bytenr, page, iosize,
+		ret = submit_extent_page(NULL, bio_ctrl, disk_bytenr, page, iosize,
 					 pg_offset, this_bio_flag);
 		if (ret) {
 			/*
@@ -1354,12 +1351,12 @@ int btrfs_read_folio(struct file *file, struct folio *folio)
 	struct btrfs_inode *inode = BTRFS_I(page->mapping->host);
 	u64 start = page_offset(page);
 	u64 end = start + PAGE_SIZE - 1;
-	struct btrfs_bio_ctrl bio_ctrl = { 0 };
+	struct btrfs_bio_ctrl bio_ctrl = { .opf = REQ_OP_READ };
 	int ret;
 
 	btrfs_lock_and_flush_ordered_range(inode, start, end, NULL);
 
-	ret = btrfs_do_readpage(page, NULL, &bio_ctrl, 0, NULL);
+	ret = btrfs_do_readpage(page, NULL, &bio_ctrl, NULL);
 	/*
 	 * If btrfs_do_readpage() failed we will want to submit the assembled
 	 * bio to do the cleanup.
@@ -1381,7 +1378,7 @@ static inline void contiguous_readpages(struct page *pages[], int nr_pages,
 
 	for (index = 0; index < nr_pages; index++) {
 		btrfs_do_readpage(pages[index], em_cached, bio_ctrl,
-				  REQ_RAHEAD, prev_em_start);
+				  prev_em_start);
 		put_page(pages[index]);
 	}
 }
@@ -1531,8 +1528,6 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
 	int saved_ret = 0;
 	int ret = 0;
 	int nr = 0;
-	enum req_op op = REQ_OP_WRITE;
-	const blk_opf_t write_flags = wbc_to_write_flags(wbc);
 	bool has_error = false;
 	bool compressed;
@@ -1639,10 +1634,8 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
 		 */
 		btrfs_page_clear_dirty(fs_info, page, cur, iosize);
 
-		ret = submit_extent_page(op | write_flags, wbc,
-					 bio_ctrl, disk_bytenr,
-					 page, iosize,
-					 cur - page_offset(page), 0);
+		ret = submit_extent_page(wbc, bio_ctrl, disk_bytenr, page,
+					 iosize, cur - page_offset(page), 0);
 		if (ret) {
 			has_error = true;
 			if (!saved_ret)
@@ -2115,7 +2108,6 @@ static int write_one_subpage_eb(struct extent_buffer *eb,
 {
 	struct btrfs_fs_info *fs_info = eb->fs_info;
 	struct page *page = eb->pages[0];
-	blk_opf_t write_flags = wbc_to_write_flags(wbc);
 	bool no_dirty_ebs = false;
 	int ret;
@@ -2133,8 +2125,7 @@ static int write_one_subpage_eb(struct extent_buffer *eb,
 
 	bio_ctrl->end_io_func = end_bio_subpage_eb_writepage;
 
-	ret = submit_extent_page(REQ_OP_WRITE | write_flags, wbc,
-				 bio_ctrl, eb->start, page, eb->len,
+	ret = submit_extent_page(wbc, bio_ctrl, eb->start, page, eb->len,
 				 eb->start - page_offset(page), 0);
 	if (ret) {
 		btrfs_subpage_clear_writeback(fs_info, page, eb->start, eb->len);
@@ -2161,7 +2152,6 @@ static noinline_for_stack int write_one_eb(struct extent_buffer *eb,
 {
 	u64 disk_bytenr = eb->start;
 	int i, num_pages;
-	blk_opf_t write_flags = wbc_to_write_flags(wbc);
 	int ret = 0;
 
 	prepare_eb_write(eb);
@@ -2174,8 +2164,7 @@ static noinline_for_stack int write_one_eb(struct extent_buffer *eb,
 		clear_page_dirty_for_io(p);
 		set_page_writeback(p);
-		ret = submit_extent_page(REQ_OP_WRITE | write_flags, wbc,
-					 bio_ctrl, disk_bytenr, p,
+		ret = submit_extent_page(wbc, bio_ctrl, disk_bytenr, p,
 					 PAGE_SIZE, 0, 0);
 		if (ret) {
 			set_btree_ioerr(p, eb);
@@ -2397,6 +2386,7 @@ int btree_write_cache_pages(struct address_space *mapping,
 {
 	struct extent_buffer *eb_context = NULL;
 	struct btrfs_bio_ctrl bio_ctrl = {
+		.opf = REQ_OP_WRITE | wbc_to_write_flags(wbc),
 		.extent_locked = 0,
 		.sync_io = (wbc->sync_mode == WB_SYNC_ALL),
 	};
@@ -2683,10 +2673,6 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end)
 	u64 cur = start;
 	unsigned long nr_pages;
 	const u32 sectorsize = btrfs_sb(inode->i_sb)->sectorsize;
-	struct btrfs_bio_ctrl bio_ctrl = {
-		.extent_locked = 1,
-		.sync_io = 1,
-	};
 	struct writeback_control wbc_writepages = {
 		.sync_mode	= WB_SYNC_ALL,
 		.range_start	= start,
@@ -2695,6 +2681,11 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end)
 		.punt_to_cgroup	= 1,
 		.no_cgroup_owner = 1,
 	};
+	struct btrfs_bio_ctrl bio_ctrl = {
+		.opf = REQ_OP_WRITE | wbc_to_write_flags(&wbc_writepages),
+		.extent_locked = 1,
+		.sync_io = 1,
+	};
 
 	ASSERT(IS_ALIGNED(start, sectorsize) && IS_ALIGNED(end + 1, sectorsize));
 	nr_pages = (round_up(end, PAGE_SIZE) - round_down(start, PAGE_SIZE)) >>
@@ -2738,6 +2729,7 @@ int extent_writepages(struct address_space *mapping,
 	struct inode *inode = mapping->host;
 	int ret = 0;
 	struct btrfs_bio_ctrl bio_ctrl = {
+		.opf = REQ_OP_WRITE | wbc_to_write_flags(wbc),
 		.extent_locked = 0,
 		.sync_io = (wbc->sync_mode == WB_SYNC_ALL),
 	};
@@ -2755,7 +2747,7 @@ int extent_writepages(struct address_space *mapping,
 
 void extent_readahead(struct readahead_control *rac)
 {
-	struct btrfs_bio_ctrl bio_ctrl = { 0 };
+	struct btrfs_bio_ctrl bio_ctrl = { .opf = REQ_OP_READ | REQ_RAHEAD };
 	struct page *pagepool[16];
 	struct extent_map *em_cached = NULL;
 	u64 prev_em_start = (u64)-1;
@@ -4402,6 +4394,7 @@ static int read_extent_buffer_subpage(struct extent_buffer *eb, int wait,
 	struct page *page = eb->pages[0];
 	struct extent_state *cached_state = NULL;
 	struct btrfs_bio_ctrl bio_ctrl = {
+		.opf = REQ_OP_READ,
 		.mirror_num = mirror_num,
 		.parent_check = check,
 	};
@@ -4442,8 +4435,7 @@ static int read_extent_buffer_subpage(struct extent_buffer *eb, int wait,
 	btrfs_subpage_clear_error(fs_info, page, eb->start, eb->len);
 
 	btrfs_subpage_start_reader(fs_info, page, eb->start, eb->len);
-	ret = submit_extent_page(REQ_OP_READ, NULL, &bio_ctrl,
-				 eb->start, page, eb->len,
+	ret = submit_extent_page(NULL, &bio_ctrl, eb->start, page, eb->len,
 				 eb->start - page_offset(page), 0);
 	if (ret) {
 		/*
@@ -4478,6 +4470,7 @@ int read_extent_buffer_pages(struct extent_buffer *eb, int wait, int mirror_num,
 	int num_pages;
 	unsigned long num_reads = 0;
 	struct btrfs_bio_ctrl bio_ctrl = {
+		.opf = REQ_OP_READ,
 		.mirror_num = mirror_num,
 		.parent_check = check,
 	};
@@ -4552,9 +4545,9 @@ int read_extent_buffer_pages(struct extent_buffer *eb, int wait, int mirror_num,
 		}
 
 		ClearPageError(page);
-		err = submit_extent_page(REQ_OP_READ, NULL,
-					 &bio_ctrl, page_offset(page), page,
-					 PAGE_SIZE, 0, 0);
+		err = submit_extent_page(NULL, &bio_ctrl,
+					 page_offset(page), page,
+					 PAGE_SIZE, 0, 0);
 		if (err) {
 			/*
 			 * We failed to submit the bio so it's the

From patchwork Thu Feb 16 16:34:29 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13143488
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 04/12] btrfs: remove the sync_io flag in struct btrfs_bio_ctrl
Date: Thu, 16 Feb 2023 17:34:29 +0100
Message-Id: <20230216163437.2370948-5-hch@lst.de>

The sync_io flag is equivalent to wbc->sync_mode == WB_SYNC_ALL, so
just check for that and remove the separate flag.
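[Editor's note] The underlying idea is to drop a cached boolean in favor of checking the source of truth directly, so the two can never disagree. A minimal illustrative sketch, not btrfs code (the struct and function names here are hypothetical; only `WB_SYNC_ALL` mirrors the real kernel constant):

```c
#include <assert.h>

enum sync_mode { WB_SYNC_NONE, WB_SYNC_ALL };

struct writeback_control {
	enum sync_mode sync_mode;
};

/* Before the patch, a sync_io bool was copied out of wbc at init time;
 * after it, code checks wbc->sync_mode directly, leaving one source of
 * truth and one fewer field to keep in sync. */
static int must_wait_for_writeback(const struct writeback_control *wbc)
{
	return wbc->sync_mode == WB_SYNC_ALL;
}
```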
Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
 fs/btrfs/extent_io.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 1539ecea85812e..c27770b76ccd1b 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -118,9 +118,6 @@ struct btrfs_bio_ctrl {
 	 * does the unlocking.
 	 */
 	bool extent_locked;
-
-	/* Tell the submit_bio code to use REQ_SYNC */
-	bool sync_io;
 };
 
 static void submit_one_bio(struct btrfs_bio_ctrl *bio_ctrl)
@@ -1802,6 +1799,7 @@ static void end_extent_buffer_writeback(struct extent_buffer *eb)
 * Return <0 if something went wrong, no page is locked.
 */
 static noinline_for_stack int lock_extent_buffer_for_io(struct extent_buffer *eb,
+			  struct writeback_control *wbc,
 			  struct btrfs_bio_ctrl *bio_ctrl)
 {
 	struct btrfs_fs_info *fs_info = eb->fs_info;
@@ -1817,7 +1815,7 @@ static noinline_for_stack int lock_extent_buffer_for_io(struct extent_buffer *eb
 
 	if (test_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags)) {
 		btrfs_tree_unlock(eb);
-		if (!bio_ctrl->sync_io)
+		if (wbc->sync_mode != WB_SYNC_ALL)
 			return 0;
 		if (!flush) {
 			submit_write_bio(bio_ctrl, 0);
@@ -2260,7 +2258,7 @@ static int submit_eb_subpage(struct page *page,
 		if (!eb)
 			continue;
 
-		ret = lock_extent_buffer_for_io(eb, bio_ctrl);
+		ret = lock_extent_buffer_for_io(eb, wbc, bio_ctrl);
 		if (ret == 0) {
 			free_extent_buffer(eb);
 			continue;
@@ -2359,7 +2357,7 @@ static int submit_eb_page(struct page *page, struct writeback_control *wbc,
 
 	*eb_context = eb;
 
-	ret = lock_extent_buffer_for_io(eb, bio_ctrl);
+	ret = lock_extent_buffer_for_io(eb, wbc, bio_ctrl);
 	if (ret <= 0) {
 		btrfs_revert_meta_write_pointer(cache, eb);
 		if (cache)
@@ -2388,7 +2386,6 @@ int btree_write_cache_pages(struct address_space *mapping,
 	struct btrfs_bio_ctrl bio_ctrl = {
 		.opf = REQ_OP_WRITE | wbc_to_write_flags(wbc),
 		.extent_locked = 0,
-		.sync_io = (wbc->sync_mode == WB_SYNC_ALL),
 	};
 	struct btrfs_fs_info *fs_info = BTRFS_I(mapping->host)->root->fs_info;
 	int ret = 0;
@@ -2684,7 +2681,6 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end)
 	struct btrfs_bio_ctrl bio_ctrl = {
 		.opf = REQ_OP_WRITE | wbc_to_write_flags(&wbc_writepages),
 		.extent_locked = 1,
-		.sync_io = 1,
 	};
 
 	ASSERT(IS_ALIGNED(start, sectorsize) && IS_ALIGNED(end + 1, sectorsize));
@@ -2731,7 +2727,6 @@ int extent_writepages(struct address_space *mapping,
 	struct btrfs_bio_ctrl bio_ctrl = {
 		.opf = REQ_OP_WRITE | wbc_to_write_flags(wbc),
 		.extent_locked = 0,
-		.sync_io = (wbc->sync_mode == WB_SYNC_ALL),
 	};
 
 	/*

From patchwork Thu Feb 16 16:34:30 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13143490
From: Christoph Hellwig
To: Chris Mason, Josef Bacik, David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 05/12] btrfs: add a wbc pointer to struct btrfs_bio_ctrl
Date: Thu, 16 Feb 2023 17:34:30 +0100
Message-Id: <20230216163437.2370948-6-hch@lst.de>

Instead of passing the wbc pointer down the deep call chain, just add
it to the btrfs_bio_ctrl structure.
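[Editor's note] This is the same consolidation pattern as patch 03, applied to a pointer that is optional: the wbc lives in the control structure and helpers guard on its presence. A hypothetical sketch (not btrfs code; names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct writeback_control {
	long nr_to_write;
};

struct bio_ctrl {
	struct writeback_control *wbc;  /* NULL for reads, set for writes */
};

/* Helpers reach the wbc through the ctrl, so they need no wbc parameter,
 * and a NULL wbc naturally disables write accounting for reads. */
static long account_one_page(struct bio_ctrl *ctrl)
{
	if (ctrl->wbc)
		return --ctrl->wbc->nr_to_write;
	return 0;
}
```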
Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
 fs/btrfs/extent_io.c | 81 ++++++++++++++++++++++----------------------
 1 file changed, 40 insertions(+), 41 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index c27770b76ccd1b..2d30a9bab4fc62 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -103,6 +103,7 @@ struct btrfs_bio_ctrl {
 	u32 len_to_oe_boundary;
 	blk_opf_t opf;
 	btrfs_bio_end_io_t end_io_func;
+	struct writeback_control *wbc;
 
 	/*
 	 * This is for metadata read, to provide the extra needed verification
@@ -971,7 +972,6 @@ static void calc_bio_boundaries(struct btrfs_bio_ctrl *bio_ctrl,
 
 static void alloc_new_bio(struct btrfs_inode *inode,
 			  struct btrfs_bio_ctrl *bio_ctrl,
-			  struct writeback_control *wbc,
 			  u64 disk_bytenr, u32 offset, u64 file_offset,
 			  enum btrfs_compression_type compress_type)
 {
@@ -993,7 +993,7 @@ static void alloc_new_bio(struct btrfs_inode *inode,
 	bio_ctrl->compress_type = compress_type;
 	calc_bio_boundaries(bio_ctrl, inode, file_offset);
 
-	if (wbc) {
+	if (bio_ctrl->wbc) {
 		/*
 		 * Pick the last added device to support cgroup writeback. For
 		 * multi-device file systems this means blk-cgroup policies have
@@ -1001,12 +1001,11 @@ static void alloc_new_bio(struct btrfs_inode *inode,
 		 * This is a bit odd but has been like that for a long time.
 		 */
 		bio_set_dev(bio, fs_info->fs_devices->latest_dev->bdev);
-		wbc_init_bio(wbc, bio);
+		wbc_init_bio(bio_ctrl->wbc, bio);
 	}
 }
 
 /*
- * @wbc:	optional writeback control for io accounting
 * @disk_bytenr: logical bytenr where the write will be
 * @page:	page to add to the bio
 * @size:	portion of page that we want to write to
@@ -1019,8 +1018,7 @@ static void alloc_new_bio(struct btrfs_inode *inode,
 * The mirror number for this IO should already be initizlied in
 * @bio_ctrl->mirror_num.
 */
-static int submit_extent_page(struct writeback_control *wbc,
-			      struct btrfs_bio_ctrl *bio_ctrl,
+static int submit_extent_page(struct btrfs_bio_ctrl *bio_ctrl,
 			      u64 disk_bytenr, struct page *page,
 			      size_t size, unsigned long pg_offset,
 			      enum btrfs_compression_type compress_type)
@@ -1041,7 +1039,7 @@ static int submit_extent_page(struct writeback_control *wbc,
 
 		/* Allocate new bio if needed */
 		if (!bio_ctrl->bio) {
-			alloc_new_bio(inode, bio_ctrl, wbc, disk_bytenr,
+			alloc_new_bio(inode, bio_ctrl, disk_bytenr,
 				      offset, page_offset(page) + cur,
 				      compress_type);
 		}
@@ -1063,8 +1061,8 @@ static int submit_extent_page(struct writeback_control *wbc,
 		ASSERT(added == 0 || added == size - offset);
 
 		/* At least we added some page, update the account */
-		if (wbc && added)
-			wbc_account_cgroup_owner(wbc, page, added);
+		if (bio_ctrl->wbc && added)
+			wbc_account_cgroup_owner(bio_ctrl->wbc, page, added);
 
 		/* We have reached boundary, submit right now */
 		if (added < size - offset) {
@@ -1324,7 +1322,7 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
 
 		if (force_bio_submit)
 			submit_one_bio(bio_ctrl);
-		ret = submit_extent_page(NULL, bio_ctrl, disk_bytenr, page, iosize,
+		ret = submit_extent_page(bio_ctrl, disk_bytenr, page, iosize,
 					 pg_offset, this_bio_flag);
 		if (ret) {
 			/*
@@ -1511,7 +1509,6 @@ static void find_next_dirty_byte(struct btrfs_fs_info *fs_info,
 */
 static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
 				 struct page *page,
-				 struct writeback_control *wbc,
 				 struct btrfs_bio_ctrl *bio_ctrl,
 				 loff_t i_size,
 				 int *nr_ret)
@@ -1531,7 +1528,7 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
 	ret = btrfs_writepage_cow_fixup(page);
 	if (ret) {
 		/* Fixup worker will requeue */
-		redirty_page_for_writepage(wbc, page);
+		redirty_page_for_writepage(bio_ctrl->wbc, page);
 		unlock_page(page);
 		return 1;
 	}
@@ -1540,7 +1537,7 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
 	 * we don't want to touch the inode after unlocking the page,
 	 * so we update the mapping writeback index now
 	 */
-	wbc->nr_to_write--;
+	bio_ctrl->wbc->nr_to_write--;
 
 	bio_ctrl->end_io_func = end_bio_extent_writepage;
 	while (cur <= end) {
@@ -1631,7 +1628,7 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
 		 */
 		btrfs_page_clear_dirty(fs_info, page, cur, iosize);
 
-		ret = submit_extent_page(wbc, bio_ctrl, disk_bytenr, page,
+		ret = submit_extent_page(bio_ctrl, disk_bytenr, page,
 					 iosize, cur - page_offset(page), 0);
 		if (ret) {
 			has_error = true;
@@ -1668,7 +1665,7 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
 * Return 0 if everything goes well.
 * Return <0 for error.
 */
-static int __extent_writepage(struct page *page, struct writeback_control *wbc,
+static int __extent_writepage(struct page *page,
 			      struct btrfs_bio_ctrl *bio_ctrl)
 {
 	struct folio *folio = page_folio(page);
@@ -1682,7 +1679,7 @@ static int __extent_writepage(struct page *page, struct writeback_control *wbc,
 	loff_t i_size = i_size_read(inode);
 	unsigned long end_index = i_size >> PAGE_SHIFT;
 
-	trace___extent_writepage(page, inode, wbc);
+	trace___extent_writepage(page, inode, bio_ctrl->wbc);
 
 	WARN_ON(!PageLocked(page));
@@ -1707,14 +1704,14 @@ static int __extent_writepage(struct page *page, struct writeback_control *wbc,
 	}
 
 	if (!bio_ctrl->extent_locked) {
-		ret = writepage_delalloc(BTRFS_I(inode), page, wbc);
+		ret = writepage_delalloc(BTRFS_I(inode), page, bio_ctrl->wbc);
 		if (ret == 1)
 			return 0;
 		if (ret)
 			goto done;
 	}
 
-	ret = __extent_writepage_io(BTRFS_I(inode), page, wbc, bio_ctrl, i_size,
+	ret = __extent_writepage_io(BTRFS_I(inode), page, bio_ctrl, i_size,
 				    &nr);
 	if (ret == 1)
 		return 0;
@@ -1759,6 +1756,8 @@ static int __extent_writepage(struct page *page,
 	if (PageError(page))
 		end_extent_writepage(page, ret, page_start, page_end);
 	if (bio_ctrl->extent_locked) {
+		struct writeback_control *wbc = bio_ctrl->wbc;
+
 		/*
If bio_ctrl->extent_locked, it's from extent_write_locked_range(), * the page can either be locked by lock_page() or @@ -1799,7 +1798,6 @@ static void end_extent_buffer_writeback(struct extent_buffer *eb) * Return <0 if something went wrong, no page is locked. */ static noinline_for_stack int lock_extent_buffer_for_io(struct extent_buffer *eb, - struct writeback_control *wbc, struct btrfs_bio_ctrl *bio_ctrl) { struct btrfs_fs_info *fs_info = eb->fs_info; @@ -1815,7 +1813,7 @@ static noinline_for_stack int lock_extent_buffer_for_io(struct extent_buffer *eb if (test_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags)) { btrfs_tree_unlock(eb); - if (wbc->sync_mode != WB_SYNC_ALL) + if (bio_ctrl->wbc->sync_mode != WB_SYNC_ALL) return 0; if (!flush) { submit_write_bio(bio_ctrl, 0); @@ -2101,7 +2099,6 @@ static void prepare_eb_write(struct extent_buffer *eb) * Page locking is only utilized at minimum to keep the VMM code happy. */ static int write_one_subpage_eb(struct extent_buffer *eb, - struct writeback_control *wbc, struct btrfs_bio_ctrl *bio_ctrl) { struct btrfs_fs_info *fs_info = eb->fs_info; @@ -2123,7 +2120,7 @@ static int write_one_subpage_eb(struct extent_buffer *eb, bio_ctrl->end_io_func = end_bio_subpage_eb_writepage; - ret = submit_extent_page(wbc, bio_ctrl, eb->start, page, eb->len, + ret = submit_extent_page(bio_ctrl, eb->start, page, eb->len, eb->start - page_offset(page), 0); if (ret) { btrfs_subpage_clear_writeback(fs_info, page, eb->start, eb->len); @@ -2140,12 +2137,11 @@ static int write_one_subpage_eb(struct extent_buffer *eb, * dirty anymore, we have submitted a page. Update nr_written in wbc. 
*/ if (no_dirty_ebs) - wbc->nr_to_write--; + bio_ctrl->wbc->nr_to_write--; return ret; } static noinline_for_stack int write_one_eb(struct extent_buffer *eb, - struct writeback_control *wbc, struct btrfs_bio_ctrl *bio_ctrl) { u64 disk_bytenr = eb->start; @@ -2162,7 +2158,7 @@ static noinline_for_stack int write_one_eb(struct extent_buffer *eb, clear_page_dirty_for_io(p); set_page_writeback(p); - ret = submit_extent_page(wbc, bio_ctrl, disk_bytenr, p, + ret = submit_extent_page(bio_ctrl, disk_bytenr, p, PAGE_SIZE, 0, 0); if (ret) { set_btree_ioerr(p, eb); @@ -2174,7 +2170,7 @@ static noinline_for_stack int write_one_eb(struct extent_buffer *eb, break; } disk_bytenr += PAGE_SIZE; - wbc->nr_to_write--; + bio_ctrl->wbc->nr_to_write--; unlock_page(p); } @@ -2204,7 +2200,6 @@ static noinline_for_stack int write_one_eb(struct extent_buffer *eb, * Return <0 for fatal error. */ static int submit_eb_subpage(struct page *page, - struct writeback_control *wbc, struct btrfs_bio_ctrl *bio_ctrl) { struct btrfs_fs_info *fs_info = btrfs_sb(page->mapping->host->i_sb); @@ -2258,7 +2253,7 @@ static int submit_eb_subpage(struct page *page, if (!eb) continue; - ret = lock_extent_buffer_for_io(eb, wbc, bio_ctrl); + ret = lock_extent_buffer_for_io(eb, bio_ctrl); if (ret == 0) { free_extent_buffer(eb); continue; @@ -2267,7 +2262,7 @@ static int submit_eb_subpage(struct page *page, free_extent_buffer(eb); goto cleanup; } - ret = write_one_subpage_eb(eb, wbc, bio_ctrl); + ret = write_one_subpage_eb(eb, bio_ctrl); free_extent_buffer(eb); if (ret < 0) goto cleanup; @@ -2301,7 +2296,7 @@ static int submit_eb_subpage(struct page *page, * previous call. * Return <0 for fatal error. 
*/ -static int submit_eb_page(struct page *page, struct writeback_control *wbc, +static int submit_eb_page(struct page *page, struct btrfs_bio_ctrl *bio_ctrl, struct extent_buffer **eb_context) { @@ -2314,7 +2309,7 @@ static int submit_eb_page(struct page *page, struct writeback_control *wbc, return 0; if (btrfs_sb(page->mapping->host->i_sb)->nodesize < PAGE_SIZE) - return submit_eb_subpage(page, wbc, bio_ctrl); + return submit_eb_subpage(page, bio_ctrl); spin_lock(&mapping->private_lock); if (!PagePrivate(page)) { @@ -2347,7 +2342,8 @@ static int submit_eb_page(struct page *page, struct writeback_control *wbc, * If for_sync, this hole will be filled with * trasnsaction commit. */ - if (wbc->sync_mode == WB_SYNC_ALL && !wbc->for_sync) + if (bio_ctrl->wbc->sync_mode == WB_SYNC_ALL && + !bio_ctrl->wbc->for_sync) ret = -EAGAIN; else ret = 0; @@ -2357,7 +2353,7 @@ static int submit_eb_page(struct page *page, struct writeback_control *wbc, *eb_context = eb; - ret = lock_extent_buffer_for_io(eb, wbc, bio_ctrl); + ret = lock_extent_buffer_for_io(eb, bio_ctrl); if (ret <= 0) { btrfs_revert_meta_write_pointer(cache, eb); if (cache) @@ -2372,7 +2368,7 @@ static int submit_eb_page(struct page *page, struct writeback_control *wbc, btrfs_schedule_zone_finish_bg(cache, eb); btrfs_put_block_group(cache); } - ret = write_one_eb(eb, wbc, bio_ctrl); + ret = write_one_eb(eb, bio_ctrl); free_extent_buffer(eb); if (ret < 0) return ret; @@ -2384,6 +2380,7 @@ int btree_write_cache_pages(struct address_space *mapping, { struct extent_buffer *eb_context = NULL; struct btrfs_bio_ctrl bio_ctrl = { + .wbc = wbc, .opf = REQ_OP_WRITE | wbc_to_write_flags(wbc), .extent_locked = 0, }; @@ -2428,7 +2425,7 @@ int btree_write_cache_pages(struct address_space *mapping, for (i = 0; i < nr_pages; i++) { struct page *page = pvec.pages[i]; - ret = submit_eb_page(page, wbc, &bio_ctrl, &eb_context); + ret = submit_eb_page(page, &bio_ctrl, &eb_context); if (ret == 0) continue; if (ret < 0) { @@ -2511,9 
+2508,9 @@ int btree_write_cache_pages(struct address_space *mapping, * existing IO to complete. */ static int extent_write_cache_pages(struct address_space *mapping, - struct writeback_control *wbc, struct btrfs_bio_ctrl *bio_ctrl) { + struct writeback_control *wbc = bio_ctrl->wbc; struct inode *inode = mapping->host; int ret = 0; int done = 0; @@ -2614,7 +2611,7 @@ static int extent_write_cache_pages(struct address_space *mapping, continue; } - ret = __extent_writepage(page, wbc, bio_ctrl); + ret = __extent_writepage(page, bio_ctrl); if (ret < 0) { done = 1; break; @@ -2679,6 +2676,7 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end) .no_cgroup_owner = 1, }; struct btrfs_bio_ctrl bio_ctrl = { + .wbc = &wbc_writepages, .opf = REQ_OP_WRITE | wbc_to_write_flags(&wbc_writepages), .extent_locked = 1, }; @@ -2701,7 +2699,7 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end) ASSERT(PageLocked(page)); ASSERT(PageDirty(page)); clear_page_dirty_for_io(page); - ret = __extent_writepage(page, &wbc_writepages, &bio_ctrl); + ret = __extent_writepage(page, &bio_ctrl); ASSERT(ret <= 0); if (ret < 0) { found_error = true; @@ -2725,6 +2723,7 @@ int extent_writepages(struct address_space *mapping, struct inode *inode = mapping->host; int ret = 0; struct btrfs_bio_ctrl bio_ctrl = { + .wbc = wbc, .opf = REQ_OP_WRITE | wbc_to_write_flags(wbc), .extent_locked = 0, }; @@ -2734,7 +2733,7 @@ int extent_writepages(struct address_space *mapping, * protect the write pointer updates. 
*/ btrfs_zoned_data_reloc_lock(BTRFS_I(inode)); - ret = extent_write_cache_pages(mapping, wbc, &bio_ctrl); + ret = extent_write_cache_pages(mapping, &bio_ctrl); submit_write_bio(&bio_ctrl, ret); btrfs_zoned_data_reloc_unlock(BTRFS_I(inode)); return ret; @@ -4430,7 +4429,7 @@ static int read_extent_buffer_subpage(struct extent_buffer *eb, int wait, btrfs_subpage_clear_error(fs_info, page, eb->start, eb->len); btrfs_subpage_start_reader(fs_info, page, eb->start, eb->len); - ret = submit_extent_page(NULL, &bio_ctrl, eb->start, page, eb->len, + ret = submit_extent_page(&bio_ctrl, eb->start, page, eb->len, eb->start - page_offset(page), 0); if (ret) { /* @@ -4540,7 +4539,7 @@ int read_extent_buffer_pages(struct extent_buffer *eb, int wait, int mirror_num, } ClearPageError(page); - err = submit_extent_page(NULL, &bio_ctrl, + err = submit_extent_page(&bio_ctrl, page_offset(page), page, PAGE_SIZE, 0, 0); if (err) {

From patchwork Thu Feb 16 16:34:31 2023
From: Christoph Hellwig
To: Chris Mason , Josef Bacik , David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 06/12] btrfs: move the compress_type check out of btrfs_bio_add_page
Date: Thu, 16 Feb 2023 17:34:31 +0100
Message-Id: <20230216163437.2370948-7-hch@lst.de>
In-Reply-To: <20230216163437.2370948-1-hch@lst.de>
References: <20230216163437.2370948-1-hch@lst.de>

The compress_type can only change on a per-extent basis. So instead of checking it for every page in btrfs_bio_add_page, do the check once in btrfs_do_readpage, which is the only caller of btrfs_bio_add_page and submit_extent_page that deals with compressed extents.
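The refactoring described above — checking a loop-invariant condition once in the caller instead of on every iteration inside the helper — can be sketched as a standalone C example. The types and names below are hypothetical stand-ins for illustration only, not the actual kernel code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Hypothetical sketch of the pattern in this patch: the compression type
 * cannot change while pages of one extent are added to a bio, so the
 * mismatch check moves out of the per-page helper into the per-extent
 * caller, where it runs once.
 */
struct bio_ctrl {
	int compress_type;	/* type of the bio being built */
	size_t bytes_added;
};

/* Per-page helper: no longer re-checks the invariant compress_type. */
static bool add_page(struct bio_ctrl *ctrl, size_t len)
{
	ctrl->bytes_added += len;
	return true;
}

/* Caller: checks the per-extent invariant once, before the page loop. */
static bool add_extent(struct bio_ctrl *ctrl, int compress_type,
		       size_t page_len, int nr_pages)
{
	if (ctrl->compress_type != compress_type)
		return false;	/* the real code submits the old bio here */
	for (int i = 0; i < nr_pages; i++)
		if (!add_page(ctrl, page_len))
			return false;
	return true;
}
```

In the actual patch the caller reacts to a mismatch by submitting the current bio rather than failing, but the control-flow change is the same: one check per extent instead of one per page.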
Signed-off-by: Christoph Hellwig --- fs/btrfs/extent_io.c | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 2d30a9bab4fc62..4fe128d2895f88 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -875,7 +875,6 @@ int btrfs_alloc_page_array(unsigned int nr_pages, struct page **page_array) * a contiguous page to the previous one * @size: portion of page that we want to write * @pg_offset: starting offset in the page - * @compress_type: compression type of the current bio to see if we can merge them * * Attempt to add a page to bio considering stripe alignment etc. * @@ -886,8 +885,7 @@ int btrfs_alloc_page_array(unsigned int nr_pages, struct page **page_array) static int btrfs_bio_add_page(struct btrfs_bio_ctrl *bio_ctrl, struct page *page, u64 disk_bytenr, unsigned int size, - unsigned int pg_offset, - enum btrfs_compression_type compress_type) + unsigned int pg_offset) { struct bio *bio = bio_ctrl->bio; u32 bio_size = bio->bi_iter.bi_size; @@ -898,9 +896,6 @@ static int btrfs_bio_add_page(struct btrfs_bio_ctrl *bio_ctrl, ASSERT(bio); /* The limit should be calculated when bio_ctrl->bio is allocated */ ASSERT(bio_ctrl->len_to_oe_boundary); - if (bio_ctrl->compress_type != compress_type) - return 0; - if (bio->bi_iter.bi_size == 0) { /* We can always add a page into an empty bio. 
*/ @@ -1049,12 +1044,11 @@ static int submit_extent_page(struct btrfs_bio_ctrl *bio_ctrl, */ if (compress_type != BTRFS_COMPRESS_NONE) added = btrfs_bio_add_page(bio_ctrl, page, disk_bytenr, - size - offset, pg_offset + offset, - compress_type); + size - offset, pg_offset + offset); else added = btrfs_bio_add_page(bio_ctrl, page, disk_bytenr + offset, size - offset, - pg_offset + offset, compress_type); + pg_offset + offset); /* Metadata page range should never be split */ if (!is_data_inode(&inode->vfs_inode)) @@ -1320,6 +1314,9 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached, continue; } + if (bio_ctrl->compress_type != this_bio_flag) + submit_one_bio(bio_ctrl); + if (force_bio_submit) submit_one_bio(bio_ctrl); ret = submit_extent_page(bio_ctrl, disk_bytenr, page, iosize,

From patchwork Thu Feb 16 16:34:32 2023
From: Christoph Hellwig
To: Chris Mason , Josef Bacik , David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 07/12] btrfs: rename the this_bio_flag variable in btrfs_do_readpage
Date: Thu, 16 Feb 2023 17:34:32 +0100
Message-Id: <20230216163437.2370948-8-hch@lst.de>
In-Reply-To: <20230216163437.2370948-1-hch@lst.de>
References: <20230216163437.2370948-1-hch@lst.de>

Rename this_bio_flag to compress_type to match the surrounding code and better document the intent. Also use the proper enum type instead of unsigned long.
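The switch from unsigned long to the proper enum is more than cosmetic: an enum documents the set of legal values and lets the compiler warn when a switch statement misses a case. A minimal standalone illustration (hypothetical shortened names, mirroring but not identical to the kernel's btrfs_compression_type enum):

```c
#include <assert.h>
#include <string.h>

enum compression_type {
	COMPRESS_NONE = 0,
	COMPRESS_ZLIB,
	COMPRESS_LZO,
	COMPRESS_ZSTD,
};

/*
 * With an enum parameter, -Wswitch flags any enumerator left unhandled
 * below; with an unsigned long the compiler has nothing to check against.
 */
static const char *compress_name(enum compression_type type)
{
	switch (type) {
	case COMPRESS_NONE: return "none";
	case COMPRESS_ZLIB: return "zlib";
	case COMPRESS_LZO:  return "lzo";
	case COMPRESS_ZSTD: return "zstd";
	}
	return "unknown";
}
```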
Signed-off-by: Christoph Hellwig Reviewed-by: Johannes Thumshirn Reviewed-by: Qu Wenruo --- fs/btrfs/extent_io.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 4fe128d2895f88..24a1e988dd0fab 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -1213,7 +1213,7 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached, bio_ctrl->end_io_func = end_bio_extent_readpage; begin_page_read(fs_info, page); while (cur <= end) { - unsigned long this_bio_flag = 0; + enum btrfs_compression_type compress_type = BTRFS_COMPRESS_NONE; bool force_bio_submit = false; u64 disk_bytenr; @@ -1238,11 +1238,11 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached, BUG_ON(end < cur); if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags)) - this_bio_flag = em->compress_type; + compress_type = em->compress_type; iosize = min(extent_map_end(em) - cur, end - cur + 1); iosize = ALIGN(iosize, blocksize); - if (this_bio_flag != BTRFS_COMPRESS_NONE) + if (compress_type != BTRFS_COMPRESS_NONE) disk_bytenr = em->block_start; else disk_bytenr = em->block_start + extent_offset; @@ -1314,13 +1314,13 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached, continue; } - if (bio_ctrl->compress_type != this_bio_flag) + if (bio_ctrl->compress_type != compress_type) submit_one_bio(bio_ctrl); if (force_bio_submit) submit_one_bio(bio_ctrl); ret = submit_extent_page(bio_ctrl, disk_bytenr, page, iosize, - pg_offset, this_bio_flag); + pg_offset, compress_type); if (ret) { /* * We have to unlock the remaining range, or the page

From patchwork Thu Feb 16 16:34:33 2023
From: Christoph Hellwig
To: Chris Mason , Josef Bacik , David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 08/12] btrfs: remove the compress_type argument to submit_extent_page
Date: Thu, 16 Feb 2023 17:34:33 +0100
Message-Id: <20230216163437.2370948-9-hch@lst.de>
In-Reply-To: <20230216163437.2370948-1-hch@lst.de>
References: <20230216163437.2370948-1-hch@lst.de>

Update the compress_type in the btrfs_bio_ctrl after forcing out the previous bio in btrfs_do_readpage, so that alloc_new_bio can just use the compress_type member in struct btrfs_bio_ctrl instead of passing the same information redundantly as a function argument.

Signed-off-by: Christoph Hellwig Reviewed-by: Johannes Thumshirn --- fs/btrfs/extent_io.c | 31 ++++++++++++++----------------- 1 file changed, 14 insertions(+), 17 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 24a1e988dd0fab..10134db6057443 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -967,8 +967,7 @@ static void calc_bio_boundaries(struct btrfs_bio_ctrl *bio_ctrl, static void alloc_new_bio(struct btrfs_inode *inode, struct btrfs_bio_ctrl *bio_ctrl, - u64 disk_bytenr, u32 offset, u64 file_offset, - enum btrfs_compression_type compress_type) + u64 disk_bytenr, u32 offset, u64 file_offset) { struct btrfs_fs_info *fs_info = inode->root->fs_info; struct bio *bio; @@ -979,13 +978,12 @@ static void alloc_new_bio(struct btrfs_inode *inode, * For compressed page range, its disk_bytenr is always @disk_bytenr * passed in, no matter if we have added any range into previous bio.
*/ - if (compress_type != BTRFS_COMPRESS_NONE) + if (bio_ctrl->compress_type != BTRFS_COMPRESS_NONE) bio->bi_iter.bi_sector = disk_bytenr >> SECTOR_SHIFT; else bio->bi_iter.bi_sector = (disk_bytenr + offset) >> SECTOR_SHIFT; btrfs_bio(bio)->file_offset = file_offset; bio_ctrl->bio = bio; - bio_ctrl->compress_type = compress_type; calc_bio_boundaries(bio_ctrl, inode, file_offset); if (bio_ctrl->wbc) { @@ -1006,7 +1004,6 @@ static void alloc_new_bio(struct btrfs_inode *inode, * @size: portion of page that we want to write to * @pg_offset: offset of the new bio or to check whether we are adding * a contiguous page to the previous one - * @compress_type: compress type for current bio * * The will either add the page into the existing @bio_ctrl->bio, or allocate a * new one in @bio_ctrl->bio. @@ -1015,8 +1012,7 @@ static void alloc_new_bio(struct btrfs_inode *inode, */ static int submit_extent_page(struct btrfs_bio_ctrl *bio_ctrl, u64 disk_bytenr, struct page *page, - size_t size, unsigned long pg_offset, - enum btrfs_compression_type compress_type) + size_t size, unsigned long pg_offset) { struct btrfs_inode *inode = BTRFS_I(page->mapping->host); unsigned int cur = pg_offset; @@ -1035,14 +1031,13 @@ static int submit_extent_page(struct btrfs_bio_ctrl *bio_ctrl, /* Allocate new bio if needed */ if (!bio_ctrl->bio) { alloc_new_bio(inode, bio_ctrl, disk_bytenr, - offset, page_offset(page) + cur, - compress_type); + offset, page_offset(page) + cur); } /* * We must go through btrfs_bio_add_page() to ensure each * page range won't cross various boundaries. 
*/ - if (compress_type != BTRFS_COMPRESS_NONE) + if (bio_ctrl->compress_type != BTRFS_COMPRESS_NONE) added = btrfs_bio_add_page(bio_ctrl, page, disk_bytenr, size - offset, pg_offset + offset); else @@ -1314,13 +1309,15 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached, continue; } - if (bio_ctrl->compress_type != compress_type) + if (bio_ctrl->compress_type != compress_type) { submit_one_bio(bio_ctrl); + bio_ctrl->compress_type = compress_type; + } if (force_bio_submit) submit_one_bio(bio_ctrl); ret = submit_extent_page(bio_ctrl, disk_bytenr, page, iosize, - pg_offset, compress_type); + pg_offset); if (ret) { /* * We have to unlock the remaining range, or the page @@ -1626,7 +1623,7 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode, btrfs_page_clear_dirty(fs_info, page, cur, iosize); ret = submit_extent_page(bio_ctrl, disk_bytenr, page, - iosize, cur - page_offset(page), 0); + iosize, cur - page_offset(page)); if (ret) { has_error = true; if (!saved_ret) @@ -2118,7 +2115,7 @@ static int write_one_subpage_eb(struct extent_buffer *eb, bio_ctrl->end_io_func = end_bio_subpage_eb_writepage; ret = submit_extent_page(bio_ctrl, eb->start, page, eb->len, - eb->start - page_offset(page), 0); + eb->start - page_offset(page)); if (ret) { btrfs_subpage_clear_writeback(fs_info, page, eb->start, eb->len); set_btree_ioerr(page, eb); @@ -2156,7 +2153,7 @@ static noinline_for_stack int write_one_eb(struct extent_buffer *eb, clear_page_dirty_for_io(p); set_page_writeback(p); ret = submit_extent_page(bio_ctrl, disk_bytenr, p, - PAGE_SIZE, 0, 0); + PAGE_SIZE, 0); if (ret) { set_btree_ioerr(p, eb); if (PageWriteback(p)) @@ -4427,7 +4424,7 @@ static int read_extent_buffer_subpage(struct extent_buffer *eb, int wait, btrfs_subpage_start_reader(fs_info, page, eb->start, eb->len); ret = submit_extent_page(&bio_ctrl, eb->start, page, eb->len, - eb->start - page_offset(page), 0); + eb->start - page_offset(page)); if (ret) { /* * In 
the endio function, if we hit something wrong we will @@ -4538,7 +4535,7 @@ int read_extent_buffer_pages(struct extent_buffer *eb, int wait, int mirror_num, ClearPageError(page); err = submit_extent_page(&bio_ctrl, page_offset(page), page, - PAGE_SIZE, 0, 0); + PAGE_SIZE, 0); if (err) { /* * We failed to submit the bio so it's the

From patchwork Thu Feb 16 16:34:34 2023
From: Christoph Hellwig
To: Chris Mason , Josef Bacik , David Sterba
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 09/12] btrfs: remove the submit_extent_page return value
Date: Thu, 16 Feb 2023 17:34:34 +0100
Message-Id: <20230216163437.2370948-10-hch@lst.de>
In-Reply-To: <20230216163437.2370948-1-hch@lst.de>
References: <20230216163437.2370948-1-hch@lst.de>

submit_extent_page always returns 0. Change it to a void return type and remove all the unreachable error handling code in the callers.

Signed-off-by: Christoph Hellwig Reviewed-by: Johannes Thumshirn --- fs/btrfs/extent_io.c | 156 ++++++++++--------------------------------- 1 file changed, 35 insertions(+), 121 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 10134db6057443..4b27cc50f8b926 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -1010,9 +1010,9 @@ static void alloc_new_bio(struct btrfs_inode *inode, * The mirror number for this IO should already be initizlied in * @bio_ctrl->mirror_num.
*/ -static int submit_extent_page(struct btrfs_bio_ctrl *bio_ctrl, - u64 disk_bytenr, struct page *page, - size_t size, unsigned long pg_offset) +static void submit_extent_page(struct btrfs_bio_ctrl *bio_ctrl, + u64 disk_bytenr, struct page *page, + size_t size, unsigned long pg_offset) { struct btrfs_inode *inode = BTRFS_I(page->mapping->host); unsigned int cur = pg_offset; @@ -1061,7 +1061,6 @@ static int submit_extent_page(struct btrfs_bio_ctrl *bio_ctrl, } cur += added; } - return 0; } static int attach_extent_buffer_page(struct extent_buffer *eb, @@ -1194,7 +1193,7 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached, unlock_extent(tree, start, end, NULL); btrfs_page_set_error(fs_info, page, start, PAGE_SIZE); unlock_page(page); - goto out; + return ret; } if (page->index == last_byte >> PAGE_SHIFT) { @@ -1225,8 +1224,7 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached, if (IS_ERR(em)) { unlock_extent(tree, cur, end, NULL); end_page_read(page, false, cur, end + 1 - cur); - ret = PTR_ERR(em); - break; + return PTR_ERR(em); } extent_offset = cur - em->start; BUG_ON(extent_map_end(em) <= cur); @@ -1316,22 +1314,13 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached, if (force_bio_submit) submit_one_bio(bio_ctrl); - ret = submit_extent_page(bio_ctrl, disk_bytenr, page, iosize, - pg_offset); - if (ret) { - /* - * We have to unlock the remaining range, or the page - * will never be unlocked. 
- */ - unlock_extent(tree, cur, end, NULL); - end_page_read(page, false, cur, end + 1 - cur); - goto out; - } + submit_extent_page(bio_ctrl, disk_bytenr, page, iosize, + pg_offset); cur = cur + iosize; pg_offset += iosize; } -out: - return ret; + + return 0; } int btrfs_read_folio(struct file *file, struct folio *folio) @@ -1622,19 +1611,9 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode, */ btrfs_page_clear_dirty(fs_info, page, cur, iosize); - ret = submit_extent_page(bio_ctrl, disk_bytenr, page, - iosize, cur - page_offset(page)); - if (ret) { - has_error = true; - if (!saved_ret) - saved_ret = ret; - - btrfs_page_set_error(fs_info, page, cur, iosize); - if (PageWriteback(page)) - btrfs_page_clear_writeback(fs_info, page, cur, - iosize); - } - + submit_extent_page(bio_ctrl, disk_bytenr, page, iosize, + cur - page_offset(page)); + ret = 0; cur += iosize; nr++; } @@ -2092,13 +2071,12 @@ static void prepare_eb_write(struct extent_buffer *eb) * Unlike the work in write_one_eb(), we rely completely on extent locking. * Page locking is only utilized at minimum to keep the VMM code happy. 
*/ -static int write_one_subpage_eb(struct extent_buffer *eb, - struct btrfs_bio_ctrl *bio_ctrl) +static void write_one_subpage_eb(struct extent_buffer *eb, + struct btrfs_bio_ctrl *bio_ctrl) { struct btrfs_fs_info *fs_info = eb->fs_info; struct page *page = eb->pages[0]; bool no_dirty_ebs = false; - int ret; prepare_eb_write(eb); @@ -2114,17 +2092,8 @@ static int write_one_subpage_eb(struct extent_buffer *eb, bio_ctrl->end_io_func = end_bio_subpage_eb_writepage; - ret = submit_extent_page(bio_ctrl, eb->start, page, eb->len, - eb->start - page_offset(page)); - if (ret) { - btrfs_subpage_clear_writeback(fs_info, page, eb->start, eb->len); - set_btree_ioerr(page, eb); - unlock_page(page); - - if (atomic_dec_and_test(&eb->io_pages)) - end_extent_buffer_writeback(eb); - return -EIO; - } + submit_extent_page(bio_ctrl, eb->start, page, eb->len, + eb->start - page_offset(page)); unlock_page(page); /* * Submission finished without problem, if no range of the page is @@ -2132,15 +2101,13 @@ static int write_one_subpage_eb(struct extent_buffer *eb, */ if (no_dirty_ebs) bio_ctrl->wbc->nr_to_write--; - return ret; } -static noinline_for_stack int write_one_eb(struct extent_buffer *eb, +static noinline_for_stack void write_one_eb(struct extent_buffer *eb, struct btrfs_bio_ctrl *bio_ctrl) { u64 disk_bytenr = eb->start; int i, num_pages; - int ret = 0; prepare_eb_write(eb); @@ -2152,31 +2119,11 @@ static noinline_for_stack int write_one_eb(struct extent_buffer *eb, clear_page_dirty_for_io(p); set_page_writeback(p); - ret = submit_extent_page(bio_ctrl, disk_bytenr, p, - PAGE_SIZE, 0); - if (ret) { - set_btree_ioerr(p, eb); - if (PageWriteback(p)) - end_page_writeback(p); - if (atomic_sub_and_test(num_pages - i, &eb->io_pages)) - end_extent_buffer_writeback(eb); - ret = -EIO; - break; - } + submit_extent_page(bio_ctrl, disk_bytenr, p, PAGE_SIZE, 0); disk_bytenr += PAGE_SIZE; bio_ctrl->wbc->nr_to_write--; unlock_page(p); } - - if (unlikely(ret)) { - for (; i < num_pages; i++) { - 
struct page *p = eb->pages[i]; - clear_page_dirty_for_io(p); - unlock_page(p); - } - } - - return ret; } /* @@ -2256,10 +2203,8 @@ static int submit_eb_subpage(struct page *page, free_extent_buffer(eb); goto cleanup; } - ret = write_one_subpage_eb(eb, bio_ctrl); + write_one_subpage_eb(eb, bio_ctrl); free_extent_buffer(eb); - if (ret < 0) - goto cleanup; submitted++; } return submitted; @@ -2362,10 +2307,8 @@ static int submit_eb_page(struct page *page, btrfs_schedule_zone_finish_bg(cache, eb); btrfs_put_block_group(cache); } - ret = write_one_eb(eb, bio_ctrl); + write_one_eb(eb, bio_ctrl); free_extent_buffer(eb); - if (ret < 0) - return ret; return 1; } @@ -4386,7 +4329,7 @@ static int read_extent_buffer_subpage(struct extent_buffer *eb, int wait, .mirror_num = mirror_num, .parent_check = check, }; - int ret = 0; + int ret; ASSERT(!test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags)); ASSERT(PagePrivate(page)); @@ -4404,14 +4347,13 @@ static int read_extent_buffer_subpage(struct extent_buffer *eb, int wait, return ret; } - ret = 0; if (test_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags) || PageUptodate(page) || btrfs_subpage_test_uptodate(fs_info, page, eb->start, eb->len)) { set_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags); unlock_extent(io_tree, eb->start, eb->start + eb->len - 1, &cached_state); - return ret; + return 0; } clear_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags); @@ -4423,27 +4365,19 @@ static int read_extent_buffer_subpage(struct extent_buffer *eb, int wait, btrfs_subpage_clear_error(fs_info, page, eb->start, eb->len); btrfs_subpage_start_reader(fs_info, page, eb->start, eb->len); - ret = submit_extent_page(&bio_ctrl, eb->start, page, eb->len, - eb->start - page_offset(page)); - if (ret) { - /* - * In the endio function, if we hit something wrong we will - * increase the io_pages, so here we need to decrease it for - * error path. 
- */ - atomic_dec(&eb->io_pages); - } + submit_extent_page(&bio_ctrl, eb->start, page, eb->len, + eb->start - page_offset(page)); submit_one_bio(&bio_ctrl); - if (ret || wait != WAIT_COMPLETE) { + if (wait != WAIT_COMPLETE) { free_extent_state(cached_state); - return ret; + return 0; } wait_extent_bit(io_tree, eb->start, eb->start + eb->len - 1, EXTENT_LOCKED, &cached_state); if (!test_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags)) - ret = -EIO; - return ret; + return -EIO; + return 0; } int read_extent_buffer_pages(struct extent_buffer *eb, int wait, int mirror_num, @@ -4451,8 +4385,6 @@ int read_extent_buffer_pages(struct extent_buffer *eb, int wait, int mirror_num, { int i; struct page *page; - int err; - int ret = 0; int locked_pages = 0; int all_uptodate = 1; int num_pages; @@ -4526,27 +4458,9 @@ int read_extent_buffer_pages(struct extent_buffer *eb, int wait, int mirror_num, page = eb->pages[i]; if (!PageUptodate(page)) { - if (ret) { - atomic_dec(&eb->io_pages); - unlock_page(page); - continue; - } - ClearPageError(page); - err = submit_extent_page(&bio_ctrl, - page_offset(page), page, - PAGE_SIZE, 0); - if (err) { - /* - * We failed to submit the bio so it's the - * caller's responsibility to perform cleanup - * i.e unlock page/set error bit. 
- */ - ret = err; - SetPageError(page); - unlock_page(page); - atomic_dec(&eb->io_pages); - } + submit_extent_page(&bio_ctrl, page_offset(page), page, + PAGE_SIZE, 0); } else { unlock_page(page); } @@ -4554,17 +4468,17 @@ int read_extent_buffer_pages(struct extent_buffer *eb, int wait, int mirror_num, submit_one_bio(&bio_ctrl); - if (ret || wait != WAIT_COMPLETE) - return ret; + if (wait != WAIT_COMPLETE) + return 0; for (i = 0; i < num_pages; i++) { page = eb->pages[i]; wait_on_page_locked(page); if (!PageUptodate(page)) - ret = -EIO; + return -EIO; } - return ret; + return 0; unlock_exit: while (locked_pages > 0) { @@ -4572,7 +4486,7 @@ int read_extent_buffer_pages(struct extent_buffer *eb, int wait, int mirror_num, page = eb->pages[locked_pages]; unlock_page(page); } - return ret; + return 0; } static bool report_eb_range(const struct extent_buffer *eb, unsigned long start, From patchwork Thu Feb 16 16:34:35 2023 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 10/12] btrfs: simplify the error handling in __extent_writepage_io Date: Thu, 16 Feb 2023 17:34:35 +0100 Message-Id: <20230216163437.2370948-11-hch@lst.de> In-Reply-To: <20230216163437.2370948-1-hch@lst.de> References: <20230216163437.2370948-1-hch@lst.de> Remove the has_error and saved_ret variables, and just jump to a goto label for error handling from the only place returning an error from the main loop.
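The control-flow change this patch applies can be sketched as a small standalone C function (a hypothetical illustration, not btrfs code; process_ranges() and its negative-value "failure" are invented for the example):

```c
#include <assert.h>
#include <errno.h>

/*
 * Sketch of the pattern: instead of carrying has_error/saved_ret state
 * across loop iterations, the single step that can fail jumps straight
 * to an error label, and the success path falls through cleanly.
 */
static int process_ranges(const int *vals, int n, int *nr_ret)
{
	int nr = 0;
	int i;

	for (i = 0; i < n; i++) {
		if (vals[i] < 0)
			goto out_error;	/* the only place that can fail */
		nr++;
	}

	*nr_ret = nr;
	return 0;

out_error:
	*nr_ret = nr;
	return -EINVAL;
}
```

The success and error epilogues are duplicated rather than merged behind flag variables, which is the trade the patch makes for a simpler loop body.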
Signed-off-by: Christoph Hellwig Reviewed-by: Johannes Thumshirn --- fs/btrfs/extent_io.c | 18 +++++++----------- 1 file changed, 7 insertions(+), 11 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 4b27cc50f8b926..4b493f4ee5b2c1 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -1502,10 +1502,8 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode, u64 extent_offset; u64 block_start; struct extent_map *em; - int saved_ret = 0; int ret = 0; int nr = 0; - bool has_error = false; bool compressed; ret = btrfs_writepage_cow_fixup(page); @@ -1556,10 +1554,7 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode, if (IS_ERR(em)) { btrfs_page_set_error(fs_info, page, cur, end - cur + 1); ret = PTR_ERR_OR_ZERO(em); - has_error = true; - if (!saved_ret) - saved_ret = ret; - break; + goto out_error; } extent_offset = cur - em->start; @@ -1613,18 +1608,19 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode, submit_extent_page(bio_ctrl, disk_bytenr, page, iosize, cur - page_offset(page)); - ret = 0; cur += iosize; nr++; } + + btrfs_page_assert_not_dirty(fs_info, page); + *nr_ret = nr; + return 0; + +out_error: /* * If we finish without problem, we should not only clear page dirty, * but also empty subpage dirty bits */ - if (!has_error) - btrfs_page_assert_not_dirty(fs_info, page); - else - ret = saved_ret; *nr_ret = nr; return ret; } From patchwork Thu Feb 16 16:34:36 2023 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 11/12] btrfs: check for contiguity in submit_extent_page Date: Thu, 16 Feb 2023 17:34:36 +0100 Message-Id: <20230216163437.2370948-12-hch@lst.de> In-Reply-To: <20230216163437.2370948-1-hch@lst.de> References: <20230216163437.2370948-1-hch@lst.de>
Different loop iterations in btrfs_bio_add_page not only have the same contiguity parameters, but also any non-initial iteration operates on a fresh bio anyway. Factor out the contiguity check into a new btrfs_bio_is_contig and only call it once in submit_extent_page before descending into the bio_add_page loop. Signed-off-by: Christoph Hellwig Reviewed-by: Johannes Thumshirn --- fs/btrfs/extent_io.c | 68 +++++++++++++++++++++++--------------------- 1 file changed, 35 insertions(+), 33 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 4b493f4ee5b2c1..65b2e0328e87a5 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -866,6 +866,37 @@ int btrfs_alloc_page_array(unsigned int nr_pages, struct page **page_array) return 0; } +static bool btrfs_bio_is_contig(struct btrfs_bio_ctrl *bio_ctrl, + struct page *page, u64 disk_bytenr, + unsigned int pg_offset) +{ + struct bio *bio = bio_ctrl->bio; + struct bio_vec *bvec = bio_last_bvec_all(bio); + const sector_t sector = disk_bytenr >> SECTOR_SHIFT; + + if (bio_ctrl->compress_type != BTRFS_COMPRESS_NONE) { + /* + * For compression, all IO should have its logical bytenr + * set to the starting bytenr of the compressed extent. + */ + return bio->bi_iter.bi_sector == sector; + } + + /* + * The contig check requires the following conditions to be met: + * 1) The pages are belonging to the same inode + * This is implied by the call chain. + * + * 2) The range has adjacent logical bytenr + * + * 3) The range has adjacent file offset + * This is required for the usage of btrfs_bio->file_offset. + */ + return bio_end_sector(bio) == sector && + page_offset(bvec->bv_page) + bvec->bv_offset + bvec->bv_len == + page_offset(page) + pg_offset; +} + /* * Attempt to add a page to bio.
* @@ -890,44 +921,11 @@ static int btrfs_bio_add_page(struct btrfs_bio_ctrl *bio_ctrl, struct bio *bio = bio_ctrl->bio; u32 bio_size = bio->bi_iter.bi_size; u32 real_size; - const sector_t sector = disk_bytenr >> SECTOR_SHIFT; - bool contig = false; ASSERT(bio); /* The limit should be calculated when bio_ctrl->bio is allocated */ ASSERT(bio_ctrl->len_to_oe_boundary); - if (bio->bi_iter.bi_size == 0) { - /* We can always add a page into an empty bio. */ - contig = true; - } else if (bio_ctrl->compress_type == BTRFS_COMPRESS_NONE) { - struct bio_vec *bvec = bio_last_bvec_all(bio); - - /* - * The contig check requires the following conditions to be met: - * 1) The pages are belonging to the same inode - * This is implied by the call chain. - * - * 2) The range has adjacent logical bytenr - * - * 3) The range has adjacent file offset - * This is required for the usage of btrfs_bio->file_offset. - */ - if (bio_end_sector(bio) == sector && - page_offset(bvec->bv_page) + bvec->bv_offset + - bvec->bv_len == page_offset(page) + pg_offset) - contig = true; - } else { - /* - * For compression, all IO should have its logical bytenr - * set to the starting bytenr of the compressed extent. 
- */ - contig = bio->bi_iter.bi_sector == sector; - } - - if (!contig) - return 0; - real_size = min(bio_ctrl->len_to_oe_boundary - bio_size, size); /* @@ -1024,6 +1022,10 @@ static void submit_extent_page(struct btrfs_bio_ctrl *bio_ctrl, ASSERT(bio_ctrl->end_io_func); + if (bio_ctrl->bio && + !btrfs_bio_is_contig(bio_ctrl, page, disk_bytenr, pg_offset)) + submit_one_bio(bio_ctrl); + while (cur < pg_offset + size) { u32 offset = cur - pg_offset; int added; From patchwork Thu Feb 16 16:34:37 2023 From: Christoph Hellwig To: Chris Mason , Josef Bacik , David Sterba Cc: linux-btrfs@vger.kernel.org Subject: [PATCH 12/12] btrfs: simplify submit_extent_page Date: Thu, 16 Feb 2023 17:34:37 +0100 Message-Id: <20230216163437.2370948-13-hch@lst.de> In-Reply-To: <20230216163437.2370948-1-hch@lst.de> References: <20230216163437.2370948-1-hch@lst.de> bio_add_page always adds either the entire range passed to it or nothing. Based on that, btrfs_bio_add_page can only return a length smaller than the passed-in one when hitting the ordered extent limit, which can only happen for writes. Given that compressed writes never even use this code path, this means that all the special cases for compressed extent offset handling are dead code. Reflow submit_extent_page to take advantage of this by inlining btrfs_bio_add_page and handling the ordered extent limit by decrementing it for each added range, thus significantly simplifying the loop.
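The reworked loop can be modeled in userspace. The sketch below is an illustration only (count_bios() is invented, and it assumes every "bio" gets the same fixed ordered-extent boundary); it shows how capping each added length to the remaining boundary and decrementing it splits a range across bios:

```c
#include <assert.h>

/*
 * Userspace model (not kernel code) of the simplified loop: each added
 * chunk is capped to the room left before the ordered extent boundary;
 * when the boundary is consumed, the current "bio" is finished and a
 * new one is started.  Returns how many bios the range is split into.
 */
static unsigned int count_bios(unsigned int size, unsigned int oe_len)
{
	unsigned int bios = 0;
	unsigned int remaining = 0;	/* room left in the current bio */

	while (size) {
		unsigned int len = size;

		if (remaining == 0) {	/* the alloc_new_bio() step */
			bios++;
			remaining = oe_len;
		}
		if (len > remaining)	/* cap to the boundary */
			len = remaining;

		size -= len;
		remaining -= len;	/* len_to_oe_boundary -= len */
	}
	return bios;
}
```

A 10-unit range against a 4-unit boundary splits into three bios, matching the ceil(size / boundary) behavior the decrement-and-resubmit loop produces.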
Signed-off-by: Christoph Hellwig Reviewed-by: Johannes Thumshirn --- fs/btrfs/extent_io.c | 116 +++++++++++-------------------------------- 1 file changed, 30 insertions(+), 86 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 65b2e0328e87a5..0e807455e639a4 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -897,47 +897,6 @@ static bool btrfs_bio_is_contig(struct btrfs_bio_ctrl *bio_ctrl, page_offset(page) + pg_offset; } -/* - * Attempt to add a page to bio. - * - * @bio_ctrl: record both the bio, and its bio_flags - * @page: page to add to the bio - * @disk_bytenr: offset of the new bio or to check whether we are adding - * a contiguous page to the previous one - * @size: portion of page that we want to write - * @pg_offset: starting offset in the page - * - * Attempt to add a page to bio considering stripe alignment etc. - * - * Return >= 0 for the number of bytes added to the bio. - * Can return 0 if the current bio is already at stripe/zone boundary. - * Return <0 for error. - */ -static int btrfs_bio_add_page(struct btrfs_bio_ctrl *bio_ctrl, - struct page *page, - u64 disk_bytenr, unsigned int size, - unsigned int pg_offset) -{ - struct bio *bio = bio_ctrl->bio; - u32 bio_size = bio->bi_iter.bi_size; - u32 real_size; - - ASSERT(bio); - /* The limit should be calculated when bio_ctrl->bio is allocated */ - ASSERT(bio_ctrl->len_to_oe_boundary); - - real_size = min(bio_ctrl->len_to_oe_boundary - bio_size, size); - - /* - * If real_size is 0, never call bio_add_*_page(), as even size is 0, - * bio will still execute its endio function on the page! 
- */ - if (real_size == 0) - return 0; - - return bio_add_page(bio, page, real_size, pg_offset); -} - static void calc_bio_boundaries(struct btrfs_bio_ctrl *bio_ctrl, struct btrfs_inode *inode, u64 file_offset) { @@ -965,21 +924,14 @@ static void calc_bio_boundaries(struct btrfs_bio_ctrl *bio_ctrl, static void alloc_new_bio(struct btrfs_inode *inode, struct btrfs_bio_ctrl *bio_ctrl, - u64 disk_bytenr, u32 offset, u64 file_offset) + u64 disk_bytenr, u64 file_offset) { struct btrfs_fs_info *fs_info = inode->root->fs_info; struct bio *bio; bio = btrfs_bio_alloc(BIO_MAX_VECS, bio_ctrl->opf, inode, bio_ctrl->end_io_func, NULL); - /* - * For compressed page range, its disk_bytenr is always @disk_bytenr - * passed in, no matter if we have added any range into previous bio. - */ - if (bio_ctrl->compress_type != BTRFS_COMPRESS_NONE) - bio->bi_iter.bi_sector = disk_bytenr >> SECTOR_SHIFT; - else - bio->bi_iter.bi_sector = (disk_bytenr + offset) >> SECTOR_SHIFT; + bio->bi_iter.bi_sector = disk_bytenr >> SECTOR_SHIFT; btrfs_bio(bio)->file_offset = file_offset; bio_ctrl->bio = bio; calc_bio_boundaries(bio_ctrl, inode, file_offset); @@ -1013,56 +965,48 @@ static void submit_extent_page(struct btrfs_bio_ctrl *bio_ctrl, size_t size, unsigned long pg_offset) { struct btrfs_inode *inode = BTRFS_I(page->mapping->host); - unsigned int cur = pg_offset; - - ASSERT(bio_ctrl); - - ASSERT(pg_offset < PAGE_SIZE && size <= PAGE_SIZE && - pg_offset + size <= PAGE_SIZE); + ASSERT(pg_offset + size <= PAGE_SIZE); ASSERT(bio_ctrl->end_io_func); if (bio_ctrl->bio && !btrfs_bio_is_contig(bio_ctrl, page, disk_bytenr, pg_offset)) submit_one_bio(bio_ctrl); - while (cur < pg_offset + size) { - u32 offset = cur - pg_offset; - int added; + do { + u32 len = size; /* Allocate new bio if needed */ if (!bio_ctrl->bio) { alloc_new_bio(inode, bio_ctrl, disk_bytenr, - offset, page_offset(page) + cur); + page_offset(page) + pg_offset); } - /* - * We must go through btrfs_bio_add_page() to ensure each - * page 
range won't cross various boundaries. */ - if (bio_ctrl->compress_type != BTRFS_COMPRESS_NONE) - added = btrfs_bio_add_page(bio_ctrl, page, disk_bytenr, - size - offset, pg_offset + offset); - else - added = btrfs_bio_add_page(bio_ctrl, page, - disk_bytenr + offset, size - offset, - pg_offset + offset); - - /* Metadata page range should never be split */ - if (!is_data_inode(&inode->vfs_inode)) - ASSERT(added == 0 || added == size - offset); - - /* At least we added some page, update the account */ - if (bio_ctrl->wbc && added) - wbc_account_cgroup_owner(bio_ctrl->wbc, page, added); - - /* We have reached boundary, submit right now */ - if (added < size - offset) { - /* The bio should contain some page(s) */ - ASSERT(bio_ctrl->bio->bi_iter.bi_size); + + /* Cap to the current ordered extent boundary if there is one */ + if (len > bio_ctrl->len_to_oe_boundary) { + ASSERT(bio_ctrl->compress_type == BTRFS_COMPRESS_NONE); + ASSERT(is_data_inode(&inode->vfs_inode)); + len = bio_ctrl->len_to_oe_boundary; + } + + if (bio_add_page(bio_ctrl->bio, page, len, pg_offset) != len) { + /* bio full: move on to a new bio */ + submit_one_bio(bio_ctrl); + continue; } - cur += added; - } + + if (bio_ctrl->wbc) + wbc_account_cgroup_owner(bio_ctrl->wbc, page, len); + + size -= len; + pg_offset += len; + disk_bytenr += len; + bio_ctrl->len_to_oe_boundary -= len; + + /* Ordered extent boundary: move on to a new bio */ + if (bio_ctrl->len_to_oe_boundary == 0) + submit_one_bio(bio_ctrl); + } while (size); } static int attach_extent_buffer_page(struct extent_buffer *eb,
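For the uncompressed case, the contiguity test that now gates bio reuse in the final submit_extent_page boils down to adjacency in both the disk and file address spaces. A minimal userspace model (struct bio_model and is_contig are invented stand-ins for the real bio bookkeeping):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SECTOR_SHIFT 9

/*
 * Model of the uncompressed branch of btrfs_bio_is_contig: a new range
 * may join the current bio only if its first disk sector continues the
 * bio on disk AND its file offset continues the bio in the file (the
 * latter is what keeps btrfs_bio->file_offset meaningful).
 */
struct bio_model {
	uint64_t end_sector;	/* first sector past the bio's data */
	uint64_t end_file_off;	/* first file byte past the bio's data */
};

static bool is_contig(const struct bio_model *bio, uint64_t disk_bytenr,
		      uint64_t file_off)
{
	return bio->end_sector == (disk_bytenr >> SECTOR_SHIFT) &&
	       bio->end_file_off == file_off;
}
```

A range that is adjacent on disk but not in the file (or vice versa) fails the check, which is exactly the case where submit_extent_page now submits the old bio and starts a fresh one.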