From patchwork Mon Jun 19 02:28:44 2023
X-Patchwork-Submitter: "Ritesh Harjani (IBM)"
X-Patchwork-Id: 13283942
From: "Ritesh Harjani (IBM)"
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, "Darrick J. Wong", Andreas Gruenbacher,
    Matthew Wilcox, Christoph Hellwig, Brian Foster, Ojaswin Mujoo,
    Disha Goel, "Ritesh Harjani (IBM)", Christoph Hellwig
Subject: [PATCHv10 1/8] iomap: Rename iomap_page to iomap_folio_state and others
Date: Mon, 19 Jun 2023 07:58:44 +0530
Message-Id: <75f5cd6a20bd3f1aa88150b903d9ef21d4c6856d.1687140389.git.ritesh.list@gmail.com>

struct iomap_page actually tracks per-block state of a folio. Hence it
makes sense to rename some of these functions and data structures, e.g.:

1. struct iomap_page (iop) -> struct iomap_folio_state (ifs)
2. iomap_page_create() -> ifs_alloc()
3. iomap_page_release() -> ifs_free()
4. iomap_iop_set_range_uptodate() -> ifs_set_range_uptodate()
5. to_iomap_page() -> folio->private

Since later patches will also add per-block dirty state tracking to
iomap_folio_state, this patch also renames the "uptodate" and
"uptodate_lock" members of iomap_folio_state to "state" and "state_lock".

We don't really need the to_iomap_page() helper anymore, so open-code
it as folio->private.

Reviewed-by: Christoph Hellwig
Signed-off-by: Ritesh Harjani (IBM)
---
 fs/iomap/buffered-io.c | 151 ++++++++++++++++++++---------------------
 1 file changed, 72 insertions(+), 79 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 063133ec77f4..2675a3e0ac1d 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -24,64 +24,57 @@
 #define IOEND_BATCH_SIZE	4096
 
 /*
- * Structure allocated for each folio when block size < folio size
- * to track sub-folio uptodate status and I/O completions.
+ * Structure allocated for each folio to track per-block uptodate state
+ * and I/O completions.
  */
-struct iomap_page {
+struct iomap_folio_state {
 	atomic_t		read_bytes_pending;
 	atomic_t		write_bytes_pending;
-	spinlock_t		uptodate_lock;
-	unsigned long		uptodate[];
+	spinlock_t		state_lock;
+	unsigned long		state[];
 };
 
-static inline struct iomap_page *to_iomap_page(struct folio *folio)
-{
-	if (folio_test_private(folio))
-		return folio_get_private(folio);
-	return NULL;
-}
-
 static struct bio_set iomap_ioend_bioset;
 
-static struct iomap_page *
-iomap_page_create(struct inode *inode, struct folio *folio, unsigned int flags)
+static struct iomap_folio_state *ifs_alloc(struct inode *inode,
+		struct folio *folio, unsigned int flags)
 {
-	struct iomap_page *iop = to_iomap_page(folio);
+	struct iomap_folio_state *ifs = folio->private;
 	unsigned int nr_blocks = i_blocks_per_folio(inode, folio);
 	gfp_t gfp;
 
-	if (iop || nr_blocks <= 1)
-		return iop;
+	if (ifs || nr_blocks <= 1)
+		return ifs;
 
 	if (flags & IOMAP_NOWAIT)
 		gfp = GFP_NOWAIT;
 	else
 		gfp = GFP_NOFS | __GFP_NOFAIL;
 
-	iop = kzalloc(struct_size(iop, uptodate, BITS_TO_LONGS(nr_blocks)),
+	ifs = kzalloc(struct_size(ifs, state, BITS_TO_LONGS(nr_blocks)),
 		      gfp);
-	if (iop) {
-		spin_lock_init(&iop->uptodate_lock);
+	if (ifs) {
+		spin_lock_init(&ifs->state_lock);
 		if (folio_test_uptodate(folio))
-			bitmap_fill(iop->uptodate, nr_blocks);
-		folio_attach_private(folio, iop);
+			bitmap_fill(ifs->state, nr_blocks);
+		folio_attach_private(folio, ifs);
 	}
-	return iop;
+	return ifs;
 }
 
-static void iomap_page_release(struct folio *folio)
+static void ifs_free(struct folio *folio)
 {
-	struct iomap_page *iop = folio_detach_private(folio);
+	struct iomap_folio_state *ifs = folio_detach_private(folio);
 	struct inode *inode = folio->mapping->host;
 	unsigned int nr_blocks = i_blocks_per_folio(inode, folio);
 
-	if (!iop)
+	if (!ifs)
 		return;
-	WARN_ON_ONCE(atomic_read(&iop->read_bytes_pending));
-	WARN_ON_ONCE(atomic_read(&iop->write_bytes_pending));
-	WARN_ON_ONCE(bitmap_full(iop->uptodate, nr_blocks) !=
+	WARN_ON_ONCE(atomic_read(&ifs->read_bytes_pending));
+	WARN_ON_ONCE(atomic_read(&ifs->write_bytes_pending));
+	WARN_ON_ONCE(bitmap_full(ifs->state, nr_blocks) !=
 			folio_test_uptodate(folio));
-	kfree(iop);
+	kfree(ifs);
 }
 
 /*
@@ -90,7 +83,7 @@ static void iomap_page_release(struct folio *folio)
 static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
 		loff_t *pos, loff_t length, size_t *offp, size_t *lenp)
 {
-	struct iomap_page *iop = to_iomap_page(folio);
+	struct iomap_folio_state *ifs = folio->private;
 	loff_t orig_pos = *pos;
 	loff_t isize = i_size_read(inode);
 	unsigned block_bits = inode->i_blkbits;
@@ -105,12 +98,12 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
 	 * per-block uptodate status and adjust the offset and length if needed
 	 * to avoid reading in already uptodate ranges.
 	 */
-	if (iop) {
+	if (ifs) {
 		unsigned int i;
 
 		/* move forward for each leading block marked uptodate */
 		for (i = first; i <= last; i++) {
-			if (!test_bit(i, iop->uptodate))
+			if (!test_bit(i, ifs->state))
 				break;
 			*pos += block_size;
 			poff += block_size;
@@ -120,7 +113,7 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
 
 		/* truncate len if we find any trailing uptodate block(s) */
 		for ( ; i <= last; i++) {
-			if (test_bit(i, iop->uptodate)) {
+			if (test_bit(i, ifs->state)) {
 				plen -= (last - i + 1) * block_size;
 				last = i - 1;
 				break;
@@ -144,26 +137,26 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
 	*lenp = plen;
 }
 
-static void iomap_iop_set_range_uptodate(struct folio *folio,
-		struct iomap_page *iop, size_t off, size_t len)
+static void ifs_set_range_uptodate(struct folio *folio,
+		struct iomap_folio_state *ifs, size_t off, size_t len)
 {
 	struct inode *inode = folio->mapping->host;
 	unsigned first = off >> inode->i_blkbits;
 	unsigned last = (off + len - 1) >> inode->i_blkbits;
 	unsigned long flags;
 
-	spin_lock_irqsave(&iop->uptodate_lock, flags);
-	bitmap_set(iop->uptodate, first, last - first + 1);
-	if (bitmap_full(iop->uptodate, i_blocks_per_folio(inode, folio)))
+	spin_lock_irqsave(&ifs->state_lock, flags);
+	bitmap_set(ifs->state, first, last - first + 1);
+	if (bitmap_full(ifs->state, i_blocks_per_folio(inode, folio)))
 		folio_mark_uptodate(folio);
-	spin_unlock_irqrestore(&iop->uptodate_lock, flags);
+	spin_unlock_irqrestore(&ifs->state_lock, flags);
 }
 
 static void iomap_set_range_uptodate(struct folio *folio,
-		struct iomap_page *iop, size_t off, size_t len)
+		struct iomap_folio_state *ifs, size_t off, size_t len)
 {
-	if (iop)
-		iomap_iop_set_range_uptodate(folio, iop, off, len);
+	if (ifs)
+		ifs_set_range_uptodate(folio, ifs, off, len);
 	else
 		folio_mark_uptodate(folio);
 }
@@ -171,16 +164,16 @@ static void iomap_set_range_uptodate(struct folio *folio,
 static void iomap_finish_folio_read(struct folio *folio, size_t offset,
 		size_t len, int error)
 {
-	struct iomap_page *iop = to_iomap_page(folio);
+	struct iomap_folio_state *ifs = folio->private;
 
 	if (unlikely(error)) {
 		folio_clear_uptodate(folio);
 		folio_set_error(folio);
 	} else {
-		iomap_set_range_uptodate(folio, iop, offset, len);
+		iomap_set_range_uptodate(folio, ifs, offset, len);
 	}
 
-	if (!iop || atomic_sub_and_test(len, &iop->read_bytes_pending))
+	if (!ifs || atomic_sub_and_test(len, &ifs->read_bytes_pending))
 		folio_unlock(folio);
 }
 
@@ -213,7 +206,7 @@ struct iomap_readpage_ctx {
 static int iomap_read_inline_data(const struct iomap_iter *iter,
 		struct folio *folio)
 {
-	struct iomap_page *iop;
+	struct iomap_folio_state *ifs;
 	const struct iomap *iomap = iomap_iter_srcmap(iter);
 	size_t size = i_size_read(iter->inode) - iomap->offset;
 	size_t poff = offset_in_page(iomap->offset);
@@ -231,15 +224,15 @@ static int iomap_read_inline_data(const struct iomap_iter *iter,
 	if (WARN_ON_ONCE(size > iomap->length))
 		return -EIO;
 	if (offset > 0)
-		iop = iomap_page_create(iter->inode, folio, iter->flags);
+		ifs = ifs_alloc(iter->inode, folio, iter->flags);
 	else
-		iop = to_iomap_page(folio);
+		ifs = folio->private;
 
 	addr = kmap_local_folio(folio, offset);
 	memcpy(addr, iomap->inline_data, size);
 	memset(addr + size, 0, PAGE_SIZE - poff - size);
 	kunmap_local(addr);
-	iomap_set_range_uptodate(folio, iop, offset, PAGE_SIZE - poff);
+	iomap_set_range_uptodate(folio, ifs, offset, PAGE_SIZE - poff);
 	return 0;
 }
 
@@ -260,7 +253,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
 	loff_t pos = iter->pos + offset;
 	loff_t length = iomap_length(iter) - offset;
 	struct folio *folio = ctx->cur_folio;
-	struct iomap_page *iop;
+	struct iomap_folio_state *ifs;
 	loff_t orig_pos = pos;
 	size_t poff, plen;
 	sector_t sector;
@@ -269,20 +262,20 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
 		return iomap_read_inline_data(iter, folio);
 
 	/* zero post-eof blocks as the page may be mapped */
-	iop = iomap_page_create(iter->inode, folio, iter->flags);
+	ifs = ifs_alloc(iter->inode, folio, iter->flags);
 	iomap_adjust_read_range(iter->inode, folio, &pos, length, &poff, &plen);
 	if (plen == 0)
 		goto done;
 
 	if (iomap_block_needs_zeroing(iter, pos)) {
 		folio_zero_range(folio, poff, plen);
-		iomap_set_range_uptodate(folio, iop, poff, plen);
+		iomap_set_range_uptodate(folio, ifs, poff, plen);
 		goto done;
 	}
 
 	ctx->cur_folio_in_bio = true;
-	if (iop)
-		atomic_add(plen, &iop->read_bytes_pending);
+	if (ifs)
+		atomic_add(plen, &ifs->read_bytes_pending);
 
 	sector = iomap_sector(iomap, pos);
 	if (!ctx->bio ||
@@ -436,11 +429,11 @@ EXPORT_SYMBOL_GPL(iomap_readahead);
  */
 bool iomap_is_partially_uptodate(struct folio *folio, size_t from, size_t count)
 {
-	struct iomap_page *iop = to_iomap_page(folio);
+	struct iomap_folio_state *ifs = folio->private;
 	struct inode *inode = folio->mapping->host;
 	unsigned first, last, i;
 
-	if (!iop)
+	if (!ifs)
 		return false;
 
 	/* Caller's range may extend past the end of this folio */
@@ -451,7 +444,7 @@ bool iomap_is_partially_uptodate(struct folio *folio, size_t from, size_t count)
 	last = (from + count - 1) >> inode->i_blkbits;
 
 	for (i = first; i <= last; i++)
-		if (!test_bit(i, iop->uptodate))
+		if (!test_bit(i, ifs->state))
 			return false;
 	return true;
 }
@@ -490,7 +483,7 @@ bool iomap_release_folio(struct folio *folio, gfp_t gfp_flags)
 	 */
 	if (folio_test_dirty(folio) || folio_test_writeback(folio))
 		return false;
-	iomap_page_release(folio);
+	ifs_free(folio);
 	return true;
 }
 EXPORT_SYMBOL_GPL(iomap_release_folio);
@@ -507,12 +500,12 @@ void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len)
 	if (offset == 0 && len == folio_size(folio)) {
 		WARN_ON_ONCE(folio_test_writeback(folio));
 		folio_cancel_dirty(folio);
-		iomap_page_release(folio);
+		ifs_free(folio);
 	} else if (folio_test_large(folio)) {
-		/* Must release the iop so the page can be split */
+		/* Must release the ifs so the page can be split */
 		WARN_ON_ONCE(!folio_test_uptodate(folio) &&
 			     folio_test_dirty(folio));
-		iomap_page_release(folio);
+		ifs_free(folio);
 	}
 }
 EXPORT_SYMBOL_GPL(iomap_invalidate_folio);
@@ -547,7 +540,7 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 		size_t len, struct folio *folio)
 {
 	const struct iomap *srcmap = iomap_iter_srcmap(iter);
-	struct iomap_page *iop;
+	struct iomap_folio_state *ifs;
 	loff_t block_size = i_blocksize(iter->inode);
 	loff_t block_start = round_down(pos, block_size);
 	loff_t block_end = round_up(pos + len, block_size);
@@ -559,8 +552,8 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 		return 0;
 	folio_clear_error(folio);
 
-	iop = iomap_page_create(iter->inode, folio, iter->flags);
-	if ((iter->flags & IOMAP_NOWAIT) && !iop && nr_blocks > 1)
+	ifs = ifs_alloc(iter->inode, folio, iter->flags);
+	if ((iter->flags & IOMAP_NOWAIT) && !ifs && nr_blocks > 1)
 		return -EAGAIN;
 
 	do {
@@ -589,7 +582,7 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 			if (status)
 				return status;
 		}
-		iomap_set_range_uptodate(folio, iop, poff, plen);
+		iomap_set_range_uptodate(folio, ifs, poff, plen);
 	} while ((block_start += plen) < block_end);
 
 	return 0;
@@ -696,7 +689,7 @@ static int iomap_write_begin(struct iomap_iter *iter, loff_t pos,
 static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 		size_t copied, struct folio *folio)
 {
-	struct iomap_page *iop = to_iomap_page(folio);
+	struct iomap_folio_state *ifs = folio->private;
 	flush_dcache_folio(folio);
 
 	/*
@@ -712,7 +705,7 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 	 */
 	if (unlikely(copied < len && !folio_test_uptodate(folio)))
 		return 0;
-	iomap_set_range_uptodate(folio, iop, offset_in_folio(folio, pos), len);
+	iomap_set_range_uptodate(folio, ifs, offset_in_folio(folio, pos), len);
 	filemap_dirty_folio(inode->i_mapping, folio);
 	return copied;
 }
@@ -1290,17 +1283,17 @@ EXPORT_SYMBOL_GPL(iomap_page_mkwrite);
 static void iomap_finish_folio_write(struct inode *inode, struct folio *folio,
 		size_t len, int error)
 {
-	struct iomap_page *iop = to_iomap_page(folio);
+	struct iomap_folio_state *ifs = folio->private;
 
 	if (error) {
 		folio_set_error(folio);
 		mapping_set_error(inode->i_mapping, error);
 	}
 
-	WARN_ON_ONCE(i_blocks_per_folio(inode, folio) > 1 && !iop);
-	WARN_ON_ONCE(iop && atomic_read(&iop->write_bytes_pending) <= 0);
+	WARN_ON_ONCE(i_blocks_per_folio(inode, folio) > 1 && !ifs);
+	WARN_ON_ONCE(ifs && atomic_read(&ifs->write_bytes_pending) <= 0);
 
-	if (!iop || atomic_sub_and_test(len, &iop->write_bytes_pending))
+	if (!ifs || atomic_sub_and_test(len, &ifs->write_bytes_pending))
 		folio_end_writeback(folio);
 }
 
@@ -1567,7 +1560,7 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
  */
 static void
 iomap_add_to_ioend(struct inode *inode, loff_t pos, struct folio *folio,
-		struct iomap_page *iop, struct iomap_writepage_ctx *wpc,
+		struct iomap_folio_state *ifs, struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct list_head *iolist)
 {
 	sector_t sector = iomap_sector(&wpc->iomap, pos);
@@ -1585,8 +1578,8 @@ iomap_add_to_ioend(struct inode *inode, loff_t pos, struct folio *folio,
 		bio_add_folio(wpc->ioend->io_bio, folio, len, poff);
 	}
 
-	if (iop)
-		atomic_add(len, &iop->write_bytes_pending);
+	if (ifs)
+		atomic_add(len, &ifs->write_bytes_pending);
 	wpc->ioend->io_size += len;
 	wbc_account_cgroup_owner(wbc, &folio->page, len);
 }
@@ -1612,7 +1605,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct inode *inode,
 		struct folio *folio, u64 end_pos)
 {
-	struct iomap_page *iop = iomap_page_create(inode, folio, 0);
+	struct iomap_folio_state *ifs = ifs_alloc(inode, folio, 0);
 	struct iomap_ioend *ioend, *next;
 	unsigned len = i_blocksize(inode);
 	unsigned nblocks = i_blocks_per_folio(inode, folio);
@@ -1620,7 +1613,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	int error = 0, count = 0, i;
 	LIST_HEAD(submit_list);
 
-	WARN_ON_ONCE(iop && atomic_read(&iop->write_bytes_pending) != 0);
+	WARN_ON_ONCE(ifs && atomic_read(&ifs->write_bytes_pending) != 0);
 
 	/*
 	 * Walk through the folio to find areas to write back. If we
@@ -1628,7 +1621,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	 * invalid, grab a new one.
 	 */
 	for (i = 0; i < nblocks && pos < end_pos; i++, pos += len) {
-		if (iop && !test_bit(i, iop->uptodate))
+		if (ifs && !test_bit(i, ifs->state))
 			continue;
 
 		error = wpc->ops->map_blocks(wpc, inode, pos);
@@ -1639,7 +1632,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 			continue;
 		if (wpc->iomap.type == IOMAP_HOLE)
 			continue;
-		iomap_add_to_ioend(inode, pos, folio, iop, wpc, wbc,
+		iomap_add_to_ioend(inode, pos, folio, ifs, wpc, wbc,
 				&submit_list);
 		count++;
 	}
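For readers following the rename, here is a minimal userspace sketch of the
renamed structure and its allocation policy (only allocate per-block state
when the folio spans more than one block). It is illustrative only: the
kernel version uses atomic_t, spinlock_t, struct_size(), kzalloc() and
folio_attach_private(), none of which appear in this model.

	#include <limits.h>
	#include <stdio.h>
	#include <stdlib.h>

	/* stand-in for the kernel's BITS_TO_LONGS() */
	#define BITS_TO_LONGS(n) (((n) + CHAR_BIT * sizeof(long) - 1) / \
				  (CHAR_BIT * sizeof(long)))

	struct iomap_folio_state {		/* was: struct iomap_page */
		unsigned long read_bytes_pending;   /* atomic_t in the kernel */
		unsigned long write_bytes_pending;  /* atomic_t in the kernel */
		/* state_lock (spinlock_t) elided in this model */
		unsigned long state[];		/* was: uptodate[] */
	};

	static struct iomap_folio_state *ifs_alloc_model(unsigned int nr_blocks)
	{
		if (nr_blocks <= 1)	/* single-block folio: no ifs needed */
			return NULL;
		/* flexible-array sizing, like struct_size() in the kernel */
		return calloc(1, sizeof(struct iomap_folio_state) +
				 BITS_TO_LONGS(nr_blocks) * sizeof(long));
	}

	int main(void)
	{
		struct iomap_folio_state *ifs = ifs_alloc_model(16);

		/* the kernel stores this pointer in folio->private, which is
		 * why to_iomap_page() could be open-coded away */
		printf("ifs %s\n", ifs ? "allocated" : "not needed");
		free(ifs);
		return 0;
	}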
From patchwork Mon Jun 19 02:28:45 2023
X-Patchwork-Submitter: "Ritesh Harjani (IBM)"
X-Patchwork-Id: 13283943
From: "Ritesh Harjani (IBM)"
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, "Darrick J. Wong", Andreas Gruenbacher,
    Matthew Wilcox, Christoph Hellwig, Brian Foster, Ojaswin Mujoo,
    Disha Goel, "Ritesh Harjani (IBM)", Christoph Hellwig
Subject: [PATCHv10 2/8] iomap: Drop ifs argument from iomap_set_range_uptodate()
Date: Mon, 19 Jun 2023 07:58:45 +0530

iomap_folio_state (ifs) can be derived directly from the folio, making it
unnecessary to pass "ifs" as an argument to iomap_set_range_uptodate().
This patch eliminates the "ifs" argument from iomap_set_range_uptodate().

Also, the definitions of iomap_set_range_uptodate() and
ifs_set_range_uptodate() are moved above ifs_alloc(). In upcoming patches,
we plan to introduce additional helper routines for handling dirty state,
with the intention of consolidating all "ifs" state handling routines in
one place.

Reviewed-by: Christoph Hellwig
Signed-off-by: Ritesh Harjani (IBM)
---
 fs/iomap/buffered-io.c | 67 +++++++++++++++++++++---------------------
 1 file changed, 33 insertions(+), 34 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 2675a3e0ac1d..3ff7688b360a 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -36,6 +36,33 @@ struct iomap_folio_state {
 
 static struct bio_set iomap_ioend_bioset;
 
+static void ifs_set_range_uptodate(struct folio *folio,
+		struct iomap_folio_state *ifs, size_t off, size_t len)
+{
+	struct inode *inode = folio->mapping->host;
+	unsigned int first_blk = off >> inode->i_blkbits;
+	unsigned int last_blk = (off + len - 1) >> inode->i_blkbits;
+	unsigned int nr_blks = last_blk - first_blk + 1;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ifs->state_lock, flags);
+	bitmap_set(ifs->state, first_blk, nr_blks);
+	if (bitmap_full(ifs->state, i_blocks_per_folio(inode, folio)))
+		folio_mark_uptodate(folio);
+	spin_unlock_irqrestore(&ifs->state_lock, flags);
+}
+
+static void iomap_set_range_uptodate(struct folio *folio, size_t off,
+		size_t len)
+{
+	struct iomap_folio_state *ifs = folio->private;
+
+	if (ifs)
+		ifs_set_range_uptodate(folio, ifs, off, len);
+	else
+		folio_mark_uptodate(folio);
+}
+
 static struct iomap_folio_state *ifs_alloc(struct inode *inode,
 		struct folio *folio, unsigned int flags)
 {
@@ -137,30 +164,6 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
 	*lenp = plen;
 }
 
-static void ifs_set_range_uptodate(struct folio *folio,
-		struct iomap_folio_state *ifs, size_t off, size_t len)
-{
-	struct inode *inode = folio->mapping->host;
-	unsigned first = off >> inode->i_blkbits;
-	unsigned last = (off + len - 1) >> inode->i_blkbits;
-	unsigned long flags;
-
-	spin_lock_irqsave(&ifs->state_lock, flags);
-	bitmap_set(ifs->state, first, last - first + 1);
-	if (bitmap_full(ifs->state, i_blocks_per_folio(inode, folio)))
-		folio_mark_uptodate(folio);
-	spin_unlock_irqrestore(&ifs->state_lock, flags);
-}
-
-static void iomap_set_range_uptodate(struct folio *folio,
-		struct iomap_folio_state *ifs, size_t off, size_t len)
-{
-	if (ifs)
-		ifs_set_range_uptodate(folio, ifs, off, len);
-	else
-		folio_mark_uptodate(folio);
-}
-
 static void iomap_finish_folio_read(struct folio *folio, size_t offset,
 		size_t len, int error)
 {
@@ -170,7 +173,7 @@ static void iomap_finish_folio_read(struct folio *folio, size_t offset,
 		folio_clear_uptodate(folio);
 		folio_set_error(folio);
 	} else {
-		iomap_set_range_uptodate(folio, ifs, offset, len);
+		iomap_set_range_uptodate(folio, offset, len);
 	}
 
 	if (!ifs || atomic_sub_and_test(len, &ifs->read_bytes_pending))
@@ -206,7 +209,6 @@ struct iomap_readpage_ctx {
 static int iomap_read_inline_data(const struct iomap_iter *iter,
 		struct folio *folio)
 {
-	struct iomap_folio_state *ifs;
 	const struct iomap *iomap = iomap_iter_srcmap(iter);
 	size_t size = i_size_read(iter->inode) - iomap->offset;
 	size_t poff = offset_in_page(iomap->offset);
@@ -224,15 +226,13 @@ static int iomap_read_inline_data(const struct iomap_iter *iter,
 	if (WARN_ON_ONCE(size > iomap->length))
 		return -EIO;
 	if (offset > 0)
-		ifs = ifs_alloc(iter->inode, folio, iter->flags);
-	else
-		ifs = folio->private;
+		ifs_alloc(iter->inode, folio, iter->flags);
 
 	addr = kmap_local_folio(folio, offset);
 	memcpy(addr, iomap->inline_data, size);
 	memset(addr + size, 0, PAGE_SIZE - poff - size);
 	kunmap_local(addr);
-	iomap_set_range_uptodate(folio, ifs, offset, PAGE_SIZE - poff);
+	iomap_set_range_uptodate(folio, offset, PAGE_SIZE - poff);
 	return 0;
 }
 
@@ -269,7 +269,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
 
 	if (iomap_block_needs_zeroing(iter, pos)) {
 		folio_zero_range(folio, poff, plen);
-		iomap_set_range_uptodate(folio, ifs, poff, plen);
+		iomap_set_range_uptodate(folio, poff, plen);
 		goto done;
 	}
 
@@ -582,7 +582,7 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 			if (status)
 				return status;
 		}
-		iomap_set_range_uptodate(folio, ifs, poff, plen);
+		iomap_set_range_uptodate(folio, poff, plen);
 	} while ((block_start += plen) < block_end);
 
 	return 0;
@@ -689,7 +689,6 @@ static int iomap_write_begin(struct iomap_iter *iter, loff_t pos,
 static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 		size_t copied, struct folio *folio)
 {
-	struct iomap_folio_state *ifs = folio->private;
 	flush_dcache_folio(folio);
 
 	/*
@@ -705,7 +704,7 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 	 */
 	if (unlikely(copied < len && !folio_test_uptodate(folio)))
 		return 0;
-	iomap_set_range_uptodate(folio, ifs, offset_in_folio(folio, pos), len);
+	iomap_set_range_uptodate(folio, offset_in_folio(folio, pos), len);
 	filemap_dirty_folio(inode->i_mapping, folio);
 	return copied;
 }
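The shape of this change, reduced to a compilable userspace sketch (the
names below are stand-ins, not the kernel API): the state pointer is
rederived from the object itself, so callers stop threading it through.

	#include <stdbool.h>
	#include <stdio.h>

	struct folio { void *private; bool uptodate; };

	/* after the patch: only (folio, off, len); the per-block state
	 * pointer comes from folio->private inside the function */
	static void set_range_uptodate(struct folio *folio, size_t off,
				       size_t len)
	{
		void *ifs = folio->private;	/* derived here, not passed in */

		if (ifs)
			printf("mark bytes [%zu, %zu) in the ifs bitmap\n",
			       off, off + len);
		else
			folio->uptodate = true;	/* folio_mark_uptodate() stand-in */
	}

	int main(void)
	{
		struct folio folio = { .private = NULL };

		set_range_uptodate(&folio, 0, 4096);	/* caller owns no ifs */
		printf("folio uptodate: %d\n", folio.uptodate);
		return 0;
	}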
From patchwork Mon Jun 19 02:28:46 2023
X-Patchwork-Submitter: "Ritesh Harjani (IBM)"
X-Patchwork-Id: 13283944
From: "Ritesh Harjani (IBM)"
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, "Darrick J. Wong", Andreas Gruenbacher,
    Matthew Wilcox, Christoph Hellwig, Brian Foster, Ojaswin Mujoo,
    Disha Goel, "Ritesh Harjani (IBM)", Christoph Hellwig
Subject: [PATCHv10 3/8] iomap: Add some uptodate state handling helpers for ifs state bitmap
Date: Mon, 19 Jun 2023 07:58:46 +0530

This patch adds two helper routines, ifs_is_fully_uptodate() and
ifs_block_is_uptodate(), for managing the uptodate state of the "ifs"
state bitmap. In later patches the ifs state bitmap array will also handle
the dirty state of all blocks of a folio, hence these helpers keep the
uptodate-state handling in one place.
Reviewed-by: Christoph Hellwig
Signed-off-by: Ritesh Harjani (IBM)
---
 fs/iomap/buffered-io.c | 28 ++++++++++++++++++++--------
 1 file changed, 20 insertions(+), 8 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 3ff7688b360a..790b8413b44c 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -36,6 +36,20 @@ struct iomap_folio_state {
 
 static struct bio_set iomap_ioend_bioset;
 
+static inline bool ifs_is_fully_uptodate(struct folio *folio,
+		struct iomap_folio_state *ifs)
+{
+	struct inode *inode = folio->mapping->host;
+
+	return bitmap_full(ifs->state, i_blocks_per_folio(inode, folio));
+}
+
+static inline bool ifs_block_is_uptodate(struct iomap_folio_state *ifs,
+		unsigned int block)
+{
+	return test_bit(block, ifs->state);
+}
+
 static void ifs_set_range_uptodate(struct folio *folio,
 		struct iomap_folio_state *ifs, size_t off, size_t len)
 {
@@ -47,7 +61,7 @@ static void ifs_set_range_uptodate(struct folio *folio,
 
 	spin_lock_irqsave(&ifs->state_lock, flags);
 	bitmap_set(ifs->state, first_blk, nr_blks);
-	if (bitmap_full(ifs->state, i_blocks_per_folio(inode, folio)))
+	if (ifs_is_fully_uptodate(folio, ifs))
 		folio_mark_uptodate(folio);
 	spin_unlock_irqrestore(&ifs->state_lock, flags);
 }
@@ -92,14 +106,12 @@ static struct iomap_folio_state *ifs_alloc(struct inode *inode,
 static void ifs_free(struct folio *folio)
 {
 	struct iomap_folio_state *ifs = folio_detach_private(folio);
-	struct inode *inode = folio->mapping->host;
-	unsigned int nr_blocks = i_blocks_per_folio(inode, folio);
 
 	if (!ifs)
 		return;
 	WARN_ON_ONCE(atomic_read(&ifs->read_bytes_pending));
 	WARN_ON_ONCE(atomic_read(&ifs->write_bytes_pending));
-	WARN_ON_ONCE(bitmap_full(ifs->state, nr_blocks) !=
+	WARN_ON_ONCE(ifs_is_fully_uptodate(folio, ifs) !=
 			folio_test_uptodate(folio));
 	kfree(ifs);
 }
@@ -130,7 +142,7 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
 
 		/* move forward for each leading block marked uptodate */
 		for (i = first; i <= last; i++) {
-			if (!test_bit(i, ifs->state))
+			if (!ifs_block_is_uptodate(ifs, i))
 				break;
 			*pos += block_size;
 			poff += block_size;
@@ -140,7 +152,7 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
 
 		/* truncate len if we find any trailing uptodate block(s) */
 		for ( ; i <= last; i++) {
-			if (test_bit(i, ifs->state)) {
+			if (ifs_block_is_uptodate(ifs, i)) {
 				plen -= (last - i + 1) * block_size;
 				last = i - 1;
 				break;
@@ -444,7 +456,7 @@ bool iomap_is_partially_uptodate(struct folio *folio, size_t from, size_t count)
 	last = (from + count - 1) >> inode->i_blkbits;
 
 	for (i = first; i <= last; i++)
-		if (!test_bit(i, ifs->state))
+		if (!ifs_block_is_uptodate(ifs, i))
 			return false;
 	return true;
 }
@@ -1620,7 +1632,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	 * invalid, grab a new one.
 	 */
 	for (i = 0; i < nblocks && pos < end_pos; i++, pos += len) {
-		if (ifs && !test_bit(i, ifs->state))
+		if (ifs && !ifs_block_is_uptodate(ifs, i))
 			continue;
 
 		error = wpc->ops->map_blocks(wpc, inode, pos);
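A small userspace model of what the two wrappers compute. Here
test_bit_model() and bitmap_full_model() are hand-rolled stand-ins for the
kernel's test_bit() and bitmap_full(); the helpers in the patch are thin
inline wrappers around exactly these operations.

	#include <limits.h>
	#include <stdbool.h>
	#include <stdio.h>

	#define BITS_PER_LONG (CHAR_BIT * sizeof(long))

	static bool test_bit_model(unsigned int nr, const unsigned long *bitmap)
	{
		return bitmap[nr / BITS_PER_LONG] & (1UL << (nr % BITS_PER_LONG));
	}

	static bool bitmap_full_model(const unsigned long *bitmap,
				      unsigned int bits)
	{
		for (unsigned int i = 0; i < bits; i++)
			if (!test_bit_model(i, bitmap))
				return false;
		return true;
	}

	int main(void)
	{
		unsigned long state[1] = { 0 };
		unsigned int nr_blocks = 16;	/* e.g. 64K folio, 4K blocks */

		for (unsigned int i = 0; i < nr_blocks; i++)
			state[0] |= 1UL << i;	/* bitmap_set() stand-in */

		/* ifs_block_is_uptodate() / ifs_is_fully_uptodate() analogues */
		printf("block 3 uptodate: %d\n", test_bit_model(3, state));
		printf("fully uptodate:   %d\n",
		       bitmap_full_model(state, nr_blocks));
		return 0;
	}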
Wong" , Andreas Gruenbacher , Matthew Wilcox , Christoph Hellwig , Brian Foster , Ojaswin Mujoo , Disha Goel , "Ritesh Harjani (IBM)" Subject: [PATCHv10 4/8] iomap: Fix possible overflow condition in iomap_write_delalloc_scan Date: Mon, 19 Jun 2023 07:58:47 +0530 Message-Id: <0343bb9a59f8c3d647004df471445153bf1f8a3d.1687140389.git.ritesh.list@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org folio_next_index() returns an unsigned long value which left shifted by PAGE_SHIFT could possibly cause an overflow. Instead use folio_pos(folio) + folio_size(folio), which does this correctly. Suggested-by: Matthew Wilcox Signed-off-by: Ritesh Harjani (IBM) --- fs/iomap/buffered-io.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 790b8413b44c..8206f3628586 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -933,7 +933,7 @@ static int iomap_write_delalloc_scan(struct inode *inode, * the end of this data range, not the end of the folio. */ *punch_start_byte = min_t(loff_t, end_byte, - folio_next_index(folio) << PAGE_SHIFT); + folio_pos(folio) + folio_size(folio)); } /* move offset to start of next folio in range */ From patchwork Mon Jun 19 02:28:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ritesh Harjani (IBM)" X-Patchwork-Id: 13283947 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3022CEB64DA for ; Mon, 19 Jun 2023 02:29:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229767AbjFSC3Z (ORCPT ); Sun, 18 Jun 2023 22:29:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51326 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229750AbjFSC3U (ORCPT ); Sun, 18 Jun 2023 22:29:20 -0400 Received: from mail-pg1-x52b.google.com (mail-pg1-x52b.google.com [IPv6:2607:f8b0:4864:20::52b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7A8A4E49; Sun, 18 Jun 2023 19:29:19 -0700 (PDT) Received: by mail-pg1-x52b.google.com with SMTP id 41be03b00d2f7-54fba092ef5so2457111a12.2; Sun, 18 Jun 2023 19:29:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1687141758; x=1689733758; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=WWsRu9lf/AHg76xNxq1yUxMqefCmChrWsrWNBoCROHU=; b=AlpPLfyiQA8L3Db4lUewZxgPeK1a9flv/KxPiDptyCFXgM2rN7ecTf8CDdlP+IGCk5 k6iY8Z+3F+/pY9iwBdPlOMmYYyI6TFkT6GPe3dQv1Wpq0I+Q1HEsUR2uP6lMF+PtqJYz kK8Zn1IopgbiXdinY2a38N/H9JJo5hPExIbP1LVdzja1k2g8MTu/Hpw+gbGF7/x+Pldu v4e8z+HaxVrwzG8HHJYjBYwnnH6LzU75ooSFLDVNYkwgQeV/vSbNe2iHWA9ZHoB+1mgQ Bu8M1Dut9z4P098ZIRgREgrSi/hzD5+l9OSH+rV9hcGBbz/5gV5WMVGMoDj72fEdVy1p 49qQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1687141758; x=1689733758; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=WWsRu9lf/AHg76xNxq1yUxMqefCmChrWsrWNBoCROHU=; b=l9li8roXhqe0FHCSajrKYBEg2K/Sf3Wrcg/CYEbU38LdqgztLG/cp7dE3dj4q7XxVW 
From patchwork Mon Jun 19 02:28:48 2023
X-Patchwork-Submitter: "Ritesh Harjani (IBM)"
X-Patchwork-Id: 13283947
From: "Ritesh Harjani (IBM)"
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, "Darrick J. Wong", Andreas Gruenbacher,
    Matthew Wilcox, Christoph Hellwig, Brian Foster, Ojaswin Mujoo,
    Disha Goel, "Ritesh Harjani (IBM)"
Subject: [PATCHv10 5/8] iomap: Use iomap_punch_t typedef
Date: Mon, 19 Jun 2023 07:58:48 +0530

Things become much easier with an iomap_punch_t typedef for the "punch"
function pointer used in all the delalloc-related punch, scan and release
functions. It will also be useful in a later patch that factors out the
iomap_write_delalloc_punch() function.

Suggested-by: Matthew Wilcox
Signed-off-by: Ritesh Harjani (IBM)
---
 fs/iomap/buffered-io.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 8206f3628586..e03ffdc259c4 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -23,6 +23,7 @@
 
 #define IOEND_BATCH_SIZE	4096
 
+typedef int (*iomap_punch_t)(struct inode *inode, loff_t offset, loff_t length);
 /*
  * Structure allocated for each folio to track per-block uptodate state
  * and I/O completions.
@@ -900,7 +901,7 @@ EXPORT_SYMBOL_GPL(iomap_file_buffered_write);
  */
 static int iomap_write_delalloc_scan(struct inode *inode,
 		loff_t *punch_start_byte, loff_t start_byte, loff_t end_byte,
-		int (*punch)(struct inode *inode, loff_t offset, loff_t length))
+		iomap_punch_t punch)
 {
 	while (start_byte < end_byte) {
 		struct folio	*folio;
@@ -978,8 +979,7 @@ static int iomap_write_delalloc_scan(struct inode *inode,
 	 * the code to subtle off-by-one bugs....
 	 */
 static int iomap_write_delalloc_release(struct inode *inode,
-		loff_t start_byte, loff_t end_byte,
-		int (*punch)(struct inode *inode, loff_t pos, loff_t length))
+		loff_t start_byte, loff_t end_byte, iomap_punch_t punch)
 {
 	loff_t punch_start_byte = start_byte;
 	loff_t scan_end_byte = min(i_size_read(inode), end_byte);
@@ -1072,8 +1072,7 @@ static int iomap_write_delalloc_release(struct inode *inode,
  */
 int iomap_file_buffered_write_punch_delalloc(struct inode *inode,
 		struct iomap *iomap, loff_t pos, loff_t length,
-		ssize_t written,
-		int (*punch)(struct inode *inode, loff_t pos, loff_t length))
+		ssize_t written, iomap_punch_t punch)
 {
 	loff_t start_byte;
 	loff_t end_byte;
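A self-contained sketch of the pattern: the typedef itself matches the one
added by the patch, while the surrounding functions are simplified
userspace stand-ins, not the kernel routines.

	#include <stdio.h>

	typedef long long loff_t;	/* userspace stand-in */
	struct inode;			/* opaque in this sketch */

	/* the typedef collapses the long function-pointer spelling that
	 * previously appeared in three different signatures */
	typedef int (*iomap_punch_t)(struct inode *inode, loff_t offset,
				     loff_t length);

	static int punch_model(struct inode *inode, loff_t offset, loff_t length)
	{
		(void)inode;
		printf("punch [%lld, %lld)\n", offset, offset + length);
		return 0;
	}

	/* before: int scan(..., int (*punch)(struct inode *, loff_t, loff_t));
	 * after:  int scan(..., iomap_punch_t punch); */
	static int scan_model(struct inode *inode, loff_t start, loff_t end,
			      iomap_punch_t punch)
	{
		return punch(inode, start, end - start);
	}

	int main(void)
	{
		return scan_model(NULL, 0, 4096, punch_model);
	}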
From patchwork Mon Jun 19 02:28:49 2023
X-Patchwork-Submitter: "Ritesh Harjani (IBM)"
X-Patchwork-Id: 13283946
From: "Ritesh Harjani (IBM)"
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, "Darrick J. Wong", Andreas Gruenbacher,
    Matthew Wilcox, Christoph Hellwig, Brian Foster, Ojaswin Mujoo,
    Disha Goel, "Ritesh Harjani (IBM)", Christoph Hellwig
Subject: [PATCHv10 6/8] iomap: Refactor iomap_write_delalloc_punch() function out
Date: Mon, 19 Jun 2023 07:58:49 +0530
Message-Id: <1b7e89f65fbee7cc0b1909f136d40552e68d9829.1687140389.git.ritesh.list@gmail.com>

This patch factors the iomap_write_delalloc_punch() function out. This
function is responsible for the actual punch-out operation. The reason for
doing this is to avoid deep indentation when a later patch (which adds
per-block dirty state handling to iomap) brings punch-out of individual
non-dirty blocks within a dirty folio, in order to avoid delalloc block
leaks.

Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
Signed-off-by: Ritesh Harjani (IBM)
---
 fs/iomap/buffered-io.c | 53 ++++++++++++++++++++++++++----------------
 1 file changed, 33 insertions(+), 20 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index e03ffdc259c4..2d79061022d8 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -882,6 +882,32 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *i,
 }
 EXPORT_SYMBOL_GPL(iomap_file_buffered_write);
 
+static int iomap_write_delalloc_punch(struct inode *inode, struct folio *folio,
+		loff_t *punch_start_byte, loff_t start_byte, loff_t end_byte,
+		iomap_punch_t punch)
+{
+	int ret = 0;
+
+	if (!folio_test_dirty(folio))
+		return ret;
+
+	/* if dirty, punch up to offset */
+	if (start_byte > *punch_start_byte) {
+		ret = punch(inode, *punch_start_byte,
+				start_byte - *punch_start_byte);
+		if (ret)
+			return ret;
+	}
+	/*
+	 * Make sure the next punch start is correctly bound to
+	 * the end of this data range, not the end of the folio.
+	 */
+	*punch_start_byte = min_t(loff_t, end_byte,
+				  folio_pos(folio) + folio_size(folio));
+
+	return ret;
+}
+
 /*
  * Scan the data range passed to us for dirty page cache folios. If we find a
  * dirty folio, punch out the preceeding range and update the offset from which
@@ -905,6 +931,7 @@ static int iomap_write_delalloc_scan(struct inode *inode,
 {
 	while (start_byte < end_byte) {
 		struct folio	*folio;
+		int ret;
 
 		/* grab locked page */
 		folio = filemap_lock_folio(inode->i_mapping,
@@ -915,26 +942,12 @@ static int iomap_write_delalloc_scan(struct inode *inode,
 			continue;
 		}
 
-		/* if dirty, punch up to offset */
-		if (folio_test_dirty(folio)) {
-			if (start_byte > *punch_start_byte) {
-				int	error;
-
-				error = punch(inode, *punch_start_byte,
-						start_byte - *punch_start_byte);
-				if (error) {
-					folio_unlock(folio);
-					folio_put(folio);
-					return error;
-				}
-			}
-
-			/*
-			 * Make sure the next punch start is correctly bound to
-			 * the end of this data range, not the end of the folio.
-			 */
-			*punch_start_byte = min_t(loff_t, end_byte,
-					folio_pos(folio) + folio_size(folio));
+		ret = iomap_write_delalloc_punch(inode, folio, punch_start_byte,
+				start_byte, end_byte, punch);
+		if (ret) {
+			folio_unlock(folio);
+			folio_put(folio);
+			return ret;
 		}
 
 		/* move offset to start of next folio in range */
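A compilable sketch of the resulting control flow: early returns in the
factored-out helper replace the nested dirty/punch blocks that previously
lived inside the scan loop. Types and locking are simplified; this is a
model of the shape of the code, not the kernel function.

	#include <stdbool.h>
	#include <stdio.h>

	static int delalloc_punch_model(bool folio_dirty, long long *punch_start,
					long long start, long long folio_end)
	{
		if (!folio_dirty)
			return 0;	/* clean folio: keep extending range */

		/* dirty folio: punch the accumulated clean range before it */
		if (start > *punch_start)
			printf("punch [%lld, %lld)\n", *punch_start, start);

		/* bound the next punch start to this folio's end */
		*punch_start = folio_end;
		return 0;
	}

	int main(void)
	{
		long long punch_start = 0;

		/* dirty 64K folio whose data starts at byte 8192 */
		delalloc_punch_model(true, &punch_start, 8192, 65536);
		printf("next punch starts at %lld\n", punch_start);
		return 0;
	}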
Wong" , Andreas Gruenbacher , Matthew Wilcox , Christoph Hellwig , Brian Foster , Ojaswin Mujoo , Disha Goel , "Ritesh Harjani (IBM)" , Christoph Hellwig Subject: [PATCHv10 7/8] iomap: Allocate ifs in ->write_begin() early Date: Mon, 19 Jun 2023 07:58:50 +0530 Message-Id: X-Mailer: git-send-email 2.40.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org We dont need to allocate an ifs in ->write_begin() for writes where the position and length completely overlap with the given folio. Therefore, such cases are skipped. Currently when the folio is uptodate, we only allocate ifs at writeback time (in iomap_writepage_map()). This is ok until now, but when we are going to add support for per-block dirty state bitmap in ifs, this could cause some performance degradation. The reason is that if we don't allocate ifs during ->write_begin(), then we will never mark the necessary dirty bits in ->write_end() call. And we will have to mark all the bits as dirty at the writeback time, that could cause the same write amplification and performance problems as it is now. Reviewed-by: Darrick J. Wong Reviewed-by: Christoph Hellwig Signed-off-by: Ritesh Harjani (IBM) --- fs/iomap/buffered-io.c | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 2d79061022d8..391d918ddd22 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -561,14 +561,23 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos, size_t from = offset_in_folio(folio, pos), to = from + len; size_t poff, plen; - if (folio_test_uptodate(folio)) + /* + * If the write completely overlaps the current folio, then + * entire folio will be dirtied so there is no need for + * per-block state tracking structures to be attached to this folio. 
+	 */
+	if (pos <= folio_pos(folio) &&
+	    pos + len >= folio_pos(folio) + folio_size(folio))
 		return 0;
-	folio_clear_error(folio);
 
 	ifs = ifs_alloc(iter->inode, folio, iter->flags);
 	if ((iter->flags & IOMAP_NOWAIT) && !ifs && nr_blocks > 1)
 		return -EAGAIN;
 
+	if (folio_test_uptodate(folio))
+		return 0;
+	folio_clear_error(folio);
+
 	do {
 		iomap_adjust_read_range(iter->inode, folio, &block_start,
 				block_end - block_start, &poff, &plen);
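The new early exit reduces to a simple interval check: does the write
[pos, pos + len) cover the whole folio [folio_pos, folio_pos + folio_size)?
A userspace sketch, with the folio position and size passed as plain
integers rather than derived from a struct folio:

	#include <stdbool.h>
	#include <stdio.h>

	typedef long long loff_t;	/* userspace stand-in */

	static bool write_covers_folio(loff_t pos, loff_t len,
				       loff_t folio_pos, loff_t folio_size)
	{
		/* same predicate as the patch: the write starts at or before
		 * the folio and ends at or after its last byte */
		return pos <= folio_pos &&
		       pos + len >= folio_pos + folio_size;
	}

	int main(void)
	{
		/* 64K folio at offset 0 */
		printf("full 64K write: %d\n",
		       write_covers_folio(0, 65536, 0, 65536));	/* 1 */
		printf("4K write at 4K: %d\n",
		       write_covers_folio(4096, 4096, 0, 65536));	/* 0 */
		return 0;
	}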
From patchwork Mon Jun 19 02:28:51 2023
X-Patchwork-Submitter: "Ritesh Harjani (IBM)"
X-Patchwork-Id: 13283949
From: "Ritesh Harjani (IBM)"
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, "Darrick J. Wong", Andreas Gruenbacher,
 Matthew Wilcox, Christoph Hellwig, Brian Foster, Ojaswin Mujoo, Disha Goel,
 "Ritesh Harjani (IBM)", Aravinda Herle
Subject: [PATCHv10 8/8] iomap: Add per-block dirty state tracking to improve
 performance
Date: Mon, 19 Jun 2023 07:58:51 +0530
Message-Id: <6db62a08dda3a348303e2262454837149c2afe2a.1687140389.git.ritesh.list@gmail.com>
X-Mailer: git-send-email 2.40.1
X-Mailing-List: linux-fsdevel@vger.kernel.org

When the filesystem block size is less than the folio size (either with
mapping_large_folio_support() or with blocksize < pagesize) and the
folio is uptodate in the page cache, even a one-byte write can cause an
entire folio to be written to disk during writeback. This happens
because we currently have no mechanism for tracking per-block dirty
state within struct iomap_folio_state; we only track uptodate state.

This patch implements support for tracking per-block dirty state in the
iomap_folio_state->state bitmap. This should improve filesystem write
performance and reduce write amplification.

Performance testing of the below fio workload shows a ~16x improvement
using nvme with XFS (4k block size) on Power (64K page size): the
fio-reported write bandwidth improved from ~28 MBps to ~452 MBps.

1. [global]
   ioengine=psync
   rw=randwrite
   overwrite=1
   pre_read=1
   direct=0
   bs=4k
   size=1G
   dir=./
   numjobs=8
   fdatasync=1
   runtime=60
   iodepth=64
   group_reporting=1

   [fio-run]

2. Our internal performance team also reported that this patch improves
   their database workload performance by around 83% (with XFS on
   Power).

Reported-by: Aravinda Herle
Reported-by: Brian Foster
Signed-off-by: Ritesh Harjani (IBM)
---
 fs/gfs2/aops.c         |   2 +-
 fs/iomap/buffered-io.c | 189 ++++++++++++++++++++++++++++++++++++-----
 fs/xfs/xfs_aops.c      |   2 +-
 fs/zonefs/file.c       |   2 +-
 include/linux/iomap.h  |   1 +
 5 files changed, 171 insertions(+), 25 deletions(-)
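The ~16x figure is consistent with the folio geometry: with a 64K page size and 4K block size there are 16 blocks per folio, and with fdatasync=1 each 4K random write previously forced writeback of the entire uptodate 64K folio, i.e. roughly 16x write amplification. Eliminating that amplification matches the observed jump from ~28 MBps to ~452 MBps (~16x).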
diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index a5f4be6b9213..75efec3c3b71 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -746,7 +746,7 @@ static const struct address_space_operations gfs2_aops = {
 	.writepages = gfs2_writepages,
 	.read_folio = gfs2_read_folio,
 	.readahead = gfs2_readahead,
-	.dirty_folio = filemap_dirty_folio,
+	.dirty_folio = iomap_dirty_folio,
 	.release_folio = iomap_release_folio,
 	.invalidate_folio = iomap_invalidate_folio,
 	.bmap = gfs2_bmap,
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 391d918ddd22..50f5840bb5f9 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -25,7 +25,7 @@ typedef int (*iomap_punch_t)(struct inode *inode, loff_t offset, loff_t length);
 
 /*
- * Structure allocated for each folio to track per-block uptodate state
+ * Structure allocated for each folio to track per-block uptodate, dirty state
  * and I/O completions.
  */
 struct iomap_folio_state {
@@ -35,31 +35,55 @@ struct iomap_folio_state {
 	unsigned long state[];
 };
 
+enum iomap_block_state {
+	IOMAP_ST_UPTODATE,
+	IOMAP_ST_DIRTY,
+
+	IOMAP_ST_MAX,
+};
+
+static void ifs_calc_range(struct folio *folio, size_t off, size_t len,
+		enum iomap_block_state state, unsigned int *first_blkp,
+		unsigned int *nr_blksp)
+{
+	struct inode *inode = folio->mapping->host;
+	unsigned int blks_per_folio = i_blocks_per_folio(inode, folio);
+	unsigned int first = off >> inode->i_blkbits;
+	unsigned int last = (off + len - 1) >> inode->i_blkbits;
+
+	*first_blkp = first + (state * blks_per_folio);
+	*nr_blksp = last - first + 1;
+}
+
 static struct bio_set iomap_ioend_bioset;
 
 static inline bool ifs_is_fully_uptodate(struct folio *folio,
 		struct iomap_folio_state *ifs)
 {
 	struct inode *inode = folio->mapping->host;
+	unsigned int blks_per_folio = i_blocks_per_folio(inode, folio);
+	unsigned int nr_blks = (IOMAP_ST_UPTODATE + 1) * blks_per_folio;
 
-	return bitmap_full(ifs->state, i_blocks_per_folio(inode, folio));
+	return bitmap_full(ifs->state, nr_blks);
 }
 
-static inline bool ifs_block_is_uptodate(struct iomap_folio_state *ifs,
-		unsigned int block)
+static inline bool ifs_block_is_uptodate(struct folio *folio,
+		struct iomap_folio_state *ifs, unsigned int block)
 {
-	return test_bit(block, ifs->state);
+	struct inode *inode = folio->mapping->host;
+	unsigned int blks_per_folio = i_blocks_per_folio(inode, folio);
+
+	return test_bit(block + IOMAP_ST_UPTODATE * blks_per_folio, ifs->state);
 }
 
 static void ifs_set_range_uptodate(struct folio *folio,
 		struct iomap_folio_state *ifs, size_t off, size_t len)
 {
-	struct inode *inode = folio->mapping->host;
-	unsigned int first_blk = off >> inode->i_blkbits;
-	unsigned int last_blk = (off + len - 1) >> inode->i_blkbits;
-	unsigned int nr_blks = last_blk - first_blk + 1;
+	unsigned int first_blk, nr_blks;
 	unsigned long flags;
 
+	ifs_calc_range(folio, off, len, IOMAP_ST_UPTODATE, &first_blk,
+			&nr_blks);
 	spin_lock_irqsave(&ifs->state_lock, flags);
 	bitmap_set(ifs->state, first_blk, nr_blks);
 	if (ifs_is_fully_uptodate(folio, ifs))
@@ -78,6 +102,55 @@ static void iomap_set_range_uptodate(struct folio *folio, size_t off,
 		folio_mark_uptodate(folio);
 }
 
+static inline bool ifs_block_is_dirty(struct folio *folio,
+		struct iomap_folio_state *ifs, int block)
+{
+	struct inode *inode = folio->mapping->host;
+	unsigned int blks_per_folio = i_blocks_per_folio(inode, folio);
+
+	return test_bit(block + IOMAP_ST_DIRTY * blks_per_folio, ifs->state);
+}
+
+static void ifs_clear_range_dirty(struct folio *folio,
+		struct iomap_folio_state *ifs, size_t off, size_t len)
+{
+	unsigned int first_blk, nr_blks;
+	unsigned long flags;
+
+	ifs_calc_range(folio, off, len, IOMAP_ST_DIRTY, &first_blk, &nr_blks);
+	spin_lock_irqsave(&ifs->state_lock, flags);
+	bitmap_clear(ifs->state, first_blk, nr_blks);
+	spin_unlock_irqrestore(&ifs->state_lock, flags);
+}
+
+static void iomap_clear_range_dirty(struct folio *folio, size_t off, size_t len)
+{
+	struct iomap_folio_state *ifs = folio->private;
+
+	if (ifs)
+		ifs_clear_range_dirty(folio, ifs, off, len);
+}
+
+static void ifs_set_range_dirty(struct folio *folio,
+		struct iomap_folio_state *ifs, size_t off, size_t len)
+{
+	unsigned int first_blk, nr_blks;
+	unsigned long flags;
+
+	ifs_calc_range(folio, off, len, IOMAP_ST_DIRTY, &first_blk, &nr_blks);
+	spin_lock_irqsave(&ifs->state_lock, flags);
+	bitmap_set(ifs->state, first_blk, nr_blks);
+	spin_unlock_irqrestore(&ifs->state_lock, flags);
+}
+
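Taken together, ifs->state is now a single bitmap with IOMAP_ST_MAX regions of blks_per_folio bits each: bit blk tracks uptodate and bit blk + blks_per_folio tracks dirty. A minimal userspace sketch of the index math (hypothetical names, mirroring what ifs_calc_range() computes):

#include <assert.h>

enum block_state { ST_UPTODATE, ST_DIRTY };	/* mirrors IOMAP_ST_* */

/* Same index math as ifs_calc_range(): first + (state * blks_per_folio). */
static unsigned int state_bit(unsigned int blk, enum block_state st,
			      unsigned int blks_per_folio)
{
	return blk + st * blks_per_folio;
}

int main(void)
{
	unsigned int blks_per_folio = 16;	/* e.g. 64K folio, 4K blocks */

	assert(state_bit(0, ST_UPTODATE, blks_per_folio) == 0);
	assert(state_bit(0, ST_DIRTY, blks_per_folio) == 16);
	assert(state_bit(15, ST_DIRTY, blks_per_folio) == 31);
	return 0;
}

This layout is also why ifs_is_fully_uptodate() now passes (IOMAP_ST_UPTODATE + 1) * blks_per_folio to bitmap_full(): it must stop at the end of the uptodate region rather than scan into the dirty bits.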
+static void iomap_set_range_dirty(struct folio *folio, size_t off, size_t len)
+{
+	struct iomap_folio_state *ifs = folio->private;
+
+	if (ifs)
+		ifs_set_range_dirty(folio, ifs, off, len);
+}
+
 static struct iomap_folio_state *ifs_alloc(struct inode *inode,
 		struct folio *folio, unsigned int flags)
 {
@@ -93,14 +166,24 @@ static struct iomap_folio_state *ifs_alloc(struct inode *inode,
 	else
 		gfp = GFP_NOFS | __GFP_NOFAIL;
 
-	ifs = kzalloc(struct_size(ifs, state, BITS_TO_LONGS(nr_blocks)),
-		      gfp);
-	if (ifs) {
-		spin_lock_init(&ifs->state_lock);
-		if (folio_test_uptodate(folio))
-			bitmap_fill(ifs->state, nr_blocks);
-		folio_attach_private(folio, ifs);
-	}
+	/*
+	 * ifs->state tracks two sets of state flags when the
+	 * filesystem block size is smaller than the folio size.
+	 * The first state tracks per-block uptodate and the
+	 * second tracks per-block dirty state.
+	 */
+	ifs = kzalloc(struct_size(ifs, state,
+			BITS_TO_LONGS(IOMAP_ST_MAX * nr_blocks)), gfp);
+	if (!ifs)
+		return ifs;
+
+	spin_lock_init(&ifs->state_lock);
+	if (folio_test_uptodate(folio))
+		bitmap_set(ifs->state, 0, nr_blocks);
+	if (folio_test_dirty(folio))
+		bitmap_set(ifs->state, nr_blocks, nr_blocks);
+	folio_attach_private(folio, ifs);
+
 	return ifs;
 }
 
@@ -143,7 +226,7 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
 
 	/* move forward for each leading block marked uptodate */
 	for (i = first; i <= last; i++) {
-		if (!ifs_block_is_uptodate(ifs, i))
+		if (!ifs_block_is_uptodate(folio, ifs, i))
 			break;
 		*pos += block_size;
 		poff += block_size;
@@ -153,7 +236,7 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
 
 	/* truncate len if we find any trailing uptodate block(s) */
 	for ( ; i <= last; i++) {
-		if (ifs_block_is_uptodate(ifs, i)) {
+		if (ifs_block_is_uptodate(folio, ifs, i)) {
 			plen -= (last - i + 1) * block_size;
 			last = i - 1;
 			break;
@@ -457,7 +540,7 @@ bool iomap_is_partially_uptodate(struct folio *folio, size_t from, size_t count)
 	last = (from + count - 1) >> inode->i_blkbits;
 
 	for (i = first; i <= last; i++)
-		if (!ifs_block_is_uptodate(ifs, i))
+		if (!ifs_block_is_uptodate(folio, ifs, i))
 			return false;
 	return true;
 }
@@ -523,6 +606,17 @@ void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len)
 }
 EXPORT_SYMBOL_GPL(iomap_invalidate_folio);
 
+bool iomap_dirty_folio(struct address_space *mapping, struct folio *folio)
+{
+	struct inode *inode = mapping->host;
+	size_t len = folio_size(folio);
+
+	ifs_alloc(inode, folio, 0);
+	iomap_set_range_dirty(folio, 0, len);
+	return filemap_dirty_folio(mapping, folio);
+}
+EXPORT_SYMBOL_GPL(iomap_dirty_folio);
+
 static void
 iomap_write_failed(struct inode *inode, loff_t pos, unsigned len)
 {
@@ -727,6 +821,7 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 	if (unlikely(copied < len && !folio_test_uptodate(folio)))
 		return 0;
 	iomap_set_range_uptodate(folio, offset_in_folio(folio, pos), len);
+	iomap_set_range_dirty(folio, offset_in_folio(folio, pos), copied);
 	filemap_dirty_folio(inode->i_mapping, folio);
 	return copied;
 }
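With the hunks above, the buffered write path marks only the copied byte range dirty: iomap_dirty_folio() covers whole-folio dirtying through the ->dirty_folio address-space operation, while __iomap_write_end() narrows dirtiness to the blocks the write actually touched. A small userspace sketch of the block rounding this implies (hypothetical names; 4K blocks assumed):

#include <stdio.h>

#define BLKSIZE 4096UL

/* Set dirty bits for the blocks covered by [pos, pos + copied). */
static void mark_range_dirty(unsigned long *dirty, unsigned long folio_pos,
			     unsigned long pos, unsigned long copied)
{
	unsigned long off = pos - folio_pos;		/* offset_in_folio() */
	unsigned long first = off / BLKSIZE;
	unsigned long last = (off + copied - 1) / BLKSIZE;

	for (unsigned long i = first; i <= last; i++)
		*dirty |= 1UL << i;
}

int main(void)
{
	unsigned long dirty = 0;

	/* a 1-byte write at byte 8192 of a folio dirties exactly block 2 */
	mark_range_dirty(&dirty, 0, 8192, 1);
	printf("dirty bitmap: %#lx\n", dirty);	/* prints 0x4 */
	return 0;
}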
@@ -891,6 +986,43 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *i,
 }
 EXPORT_SYMBOL_GPL(iomap_file_buffered_write);
 
+static int iomap_write_delalloc_ifs_punch(struct inode *inode,
+		struct folio *folio, loff_t start_byte, loff_t end_byte,
+		iomap_punch_t punch)
+{
+	unsigned int first_blk, last_blk, i;
+	loff_t last_byte;
+	u8 blkbits = inode->i_blkbits;
+	struct iomap_folio_state *ifs;
+	int ret = 0;
+
+	/*
+	 * When we have per-block dirty tracking, there can be
+	 * blocks within a folio which are marked uptodate
+	 * but not dirty. In that case it is necessary to punch
+	 * out such blocks to avoid leaking any delalloc blocks.
+	 */
+	ifs = folio->private;
+	if (!ifs)
+		return ret;
+
+	last_byte = min_t(loff_t, end_byte - 1,
+			folio_pos(folio) + folio_size(folio) - 1);
+	first_blk = offset_in_folio(folio, start_byte) >> blkbits;
+	last_blk = offset_in_folio(folio, last_byte) >> blkbits;
+	for (i = first_blk; i <= last_blk; i++) {
+		if (!ifs_block_is_dirty(folio, ifs, i)) {
+			ret = punch(inode, folio_pos(folio) + (i << blkbits),
+				    1 << blkbits);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return ret;
+}
+
+
 static int iomap_write_delalloc_punch(struct inode *inode, struct folio *folio,
 		loff_t *punch_start_byte, loff_t start_byte, loff_t end_byte,
 		iomap_punch_t punch)
@@ -907,6 +1039,13 @@ static int iomap_write_delalloc_punch(struct inode *inode, struct folio *folio,
 		if (ret)
 			return ret;
 	}
+
+	/* Punch non-dirty blocks within folio */
+	ret = iomap_write_delalloc_ifs_punch(inode, folio, start_byte,
+			end_byte, punch);
+	if (ret)
+		return ret;
+
 	/*
 	 * Make sure the next punch start is correctly bound to
 	 * the end of this data range, not the end of the folio.
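The punch-out step matters because a folio can contain blocks that are uptodate (read in or zeroed) but were never dirtied; when a buffered write fails or short-copies, any delalloc reservation backing those clean blocks would otherwise be leaked. A toy userspace model of the per-block decision (hypothetical, single-word bitmap):

#include <stdio.h>

int main(void)
{
	unsigned long dirty = 0x4;	/* only block 2 was actually dirtied */
	unsigned int first_blk = 0, last_blk = 3;

	/* every block in range that is NOT dirty gets its delalloc punched */
	for (unsigned int i = first_blk; i <= last_blk; i++)
		if (!(dirty & (1UL << i)))
			printf("punch block %u\n", i);	/* prints 0, 1, 3 */
	return 0;
}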
@@ -1637,7 +1776,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct inode *inode,
 		struct folio *folio, u64 end_pos)
 {
-	struct iomap_folio_state *ifs = ifs_alloc(inode, folio, 0);
+	struct iomap_folio_state *ifs = folio->private;
 	struct iomap_ioend *ioend, *next;
 	unsigned len = i_blocksize(inode);
 	unsigned nblocks = i_blocks_per_folio(inode, folio);
@@ -1645,6 +1784,11 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	int error = 0, count = 0, i;
 	LIST_HEAD(submit_list);
 
+	if (!ifs && nblocks > 1) {
+		ifs = ifs_alloc(inode, folio, 0);
+		iomap_set_range_dirty(folio, 0, folio_size(folio));
+	}
+
 	WARN_ON_ONCE(ifs && atomic_read(&ifs->write_bytes_pending) != 0);
 
 	/*
@@ -1653,7 +1797,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	 * invalid, grab a new one.
 	 */
 	for (i = 0; i < nblocks && pos < end_pos; i++, pos += len) {
-		if (ifs && !ifs_block_is_uptodate(ifs, i))
+		if (ifs && !ifs_block_is_dirty(folio, ifs, i))
 			continue;
 
 		error = wpc->ops->map_blocks(wpc, inode, pos);
@@ -1697,6 +1841,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		}
 	}
 
+	iomap_clear_range_dirty(folio, 0, end_pos - folio_pos(folio));
 	folio_start_writeback(folio);
 	folio_unlock(folio);
 
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 451942fb38ec..2fca4b4e7fd8 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -578,7 +578,7 @@ const struct address_space_operations xfs_address_space_operations = {
 	.read_folio = xfs_vm_read_folio,
 	.readahead = xfs_vm_readahead,
 	.writepages = xfs_vm_writepages,
-	.dirty_folio = filemap_dirty_folio,
+	.dirty_folio = iomap_dirty_folio,
 	.release_folio = iomap_release_folio,
 	.invalidate_folio = iomap_invalidate_folio,
 	.bmap = xfs_vm_bmap,
diff --git a/fs/zonefs/file.c b/fs/zonefs/file.c
index 132f01d3461f..e508c8e97372 100644
--- a/fs/zonefs/file.c
+++ b/fs/zonefs/file.c
@@ -175,7 +175,7 @@ const struct address_space_operations zonefs_file_aops = {
 	.read_folio = zonefs_read_folio,
 	.readahead = zonefs_readahead,
 	.writepages = zonefs_writepages,
-	.dirty_folio = filemap_dirty_folio,
+	.dirty_folio = iomap_dirty_folio,
 	.release_folio = iomap_release_folio,
 	.invalidate_folio = iomap_invalidate_folio,
 	.migrate_folio = filemap_migrate_folio,
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index e2b836c2e119..eb9335c46bf3 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -264,6 +264,7 @@ bool iomap_is_partially_uptodate(struct folio *, size_t from, size_t count);
 struct folio *iomap_get_folio(struct iomap_iter *iter, loff_t pos);
 bool iomap_release_folio(struct folio *folio, gfp_t gfp_flags);
 void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len);
+bool iomap_dirty_folio(struct address_space *mapping, struct folio *folio);
 int iomap_file_unshare(struct inode *inode, loff_t pos, loff_t len,
 		const struct iomap_ops *ops);
 int iomap_zero_range(struct inode *inode, loff_t pos, loff_t len,