From patchwork Mon Nov 8 04:05:46 2021
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12607813
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig
Subject: [PATCH v2 23/28] iomap: Simplify iomap_writepage_map()
Date: Mon, 8 Nov 2021 04:05:46 +0000
Message-Id: <20211108040551.1942823-24-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20211108040551.1942823-1-willy@infradead.org>
References: <20211108040551.1942823-1-willy@infradead.org>

Rename end_offset to end_pos and file_offset to pos to match the rest
of the file.  Simplify the loop by calculating nblocks up front instead
of each time around the loop.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 20610b1364d6..87190b86ef1f 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1293,37 +1293,36 @@ iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
 static int
 iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct inode *inode,
-		struct page *page, u64 end_offset)
+		struct page *page, u64 end_pos)
 {
 	struct folio *folio = page_folio(page);
 	struct iomap_page *iop = iomap_page_create(inode, folio);
 	struct iomap_ioend *ioend, *next;
 	unsigned len = i_blocksize(inode);
-	u64 file_offset; /* file offset of page */
+	unsigned nblocks = i_blocks_per_folio(inode, folio);
+	u64 pos = folio_pos(folio);
 	int error = 0, count = 0, i;
 	LIST_HEAD(submit_list);
 
 	WARN_ON_ONCE(iop && atomic_read(&iop->write_bytes_pending) != 0);
 
 	/*
-	 * Walk through the page to find areas to write back. If we run off the
-	 * end of the current map or find the current map invalid, grab a new
-	 * one.
+	 * Walk through the folio to find areas to write back. If we
+	 * run off the end of the current map or find the current map
+	 * invalid, grab a new one.
 	 */
-	for (i = 0, file_offset = page_offset(page);
-	     i < (PAGE_SIZE >> inode->i_blkbits) && file_offset < end_offset;
-	     i++, file_offset += len) {
+	for (i = 0; i < nblocks && pos < end_pos; i++, pos += len) {
 		if (iop && !test_bit(i, iop->uptodate))
 			continue;
 
-		error = wpc->ops->map_blocks(wpc, inode, file_offset);
+		error = wpc->ops->map_blocks(wpc, inode, pos);
 		if (error)
 			break;
 		if (WARN_ON_ONCE(wpc->iomap.type == IOMAP_INLINE))
 			continue;
 		if (wpc->iomap.type == IOMAP_HOLE)
 			continue;
-		iomap_add_to_ioend(inode, file_offset, page, iop, wpc, wbc,
+		iomap_add_to_ioend(inode, pos, page, iop, wpc, wbc,
 				&submit_list);
 		count++;
 	}
@@ -1347,7 +1346,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		 * now.
 		 */
 		if (wpc->ops->discard_folio)
-			wpc->ops->discard_folio(folio, file_offset);
+			wpc->ops->discard_folio(folio, pos);
 		if (!count) {
 			ClearPageUptodate(page);
 			unlock_page(page);
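
For anyone skimming the series, the core of the change is the loop
bound: the old loop recomputed PAGE_SIZE >> inode->i_blkbits in its
termination test each iteration, while the new loop derives the block
count from the folio once, up front, which also works when a folio is
larger than a page.  Below is a minimal userspace sketch of the new
loop shape; it is not part of the patch, the kernel types are stubbed
out with plain integers, and the folio size, block size, and offsets
are made-up values for illustration only.

	/* Standalone sketch of the reworked loop; values are invented. */
	#include <stdio.h>

	#define FOLIO_SIZE	16384	/* assume a 4-page folio */
	#define BLKBITS		12	/* assume 4096-byte blocks */

	int main(void)
	{
		unsigned long long folio_pos = 32768;	/* byte offset of folio in file */
		unsigned long long end_pos = 40960;	/* end of writeback range */
		unsigned len = 1u << BLKBITS;		/* i_blocksize() equivalent */

		/*
		 * Like i_blocks_per_folio(): the block count is computed
		 * once before the loop, not on every iteration.
		 */
		unsigned nblocks = FOLIO_SIZE >> BLKBITS;
		unsigned long long pos = folio_pos;

		/* Same shape as the patched loop in iomap_writepage_map(). */
		for (unsigned i = 0; i < nblocks && pos < end_pos; i++, pos += len)
			printf("block %u -> pos %llu\n", i, pos);

		return 0;
	}

With the values above the loop visits blocks 0 and 1 and then stops at
end_pos, even though the folio holds four blocks, mirroring how the
kernel loop stops at the end of the writeback range.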