From patchwork Thu Jan 25 08:57:48 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13530264
From: Christoph Hellwig
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Jan Kara, David Howells, Brian Foster, Christian Brauner,
    "Darrick J. Wong", linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, Jan Kara, Dave Chinner
Subject: [PATCH 09/19] writeback: Simplify the loops in write_cache_pages()
Date: Thu, 25 Jan 2024 09:57:48 +0100
Message-Id: <20240125085758.2393327-10-hch@lst.de>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240125085758.2393327-1-hch@lst.de>
References: <20240125085758.2393327-1-hch@lst.de>
MIME-Version: 1.0

From: "Matthew Wilcox (Oracle)"

Collapse the two nested loops into one.  This is needed as a step
towards turning this into an iterator.

Note that this drops the "index <= end" check in the previous outer loop
and just relies on filemap_get_folios_tag() to return 0 entries when
index > end.
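[Editor's sketch, not part of the patch: a minimal, self-contained C
illustration of the collapsed loop shape described above.  All names here
(struct batch, get_next_batch, BATCH_SIZE) are invented for the example.
The point is that a single loop refills the batch whenever it has been
consumed and stops when the lookup returns zero entries, which is what
happens once the start index has moved past 'end'.]

#include <stdio.h>

#define BATCH_SIZE 4

struct batch {
        unsigned long nr;                       /* entries from the last lookup */
        unsigned long entries[BATCH_SIZE];      /* stand-ins for folio indices */
};

/* toy lookup: returns up to BATCH_SIZE indices in [*index, end], none past end */
static void get_next_batch(struct batch *b, unsigned long *index, unsigned long end)
{
        b->nr = 0;
        while (b->nr < BATCH_SIZE && *index <= end)
                b->entries[b->nr++] = (*index)++;
}

int main(void)
{
        unsigned long index = 0, end = 9;
        struct batch b = { .nr = 0 };
        unsigned long i = 0;

        for (;;) {
                if (i == b.nr) {                /* batch consumed, refill it */
                        get_next_batch(&b, &index, end);
                        i = 0;
                }
                if (b.nr == 0)                  /* empty lookup: we are past 'end' */
                        break;
                printf("processing index %lu\n", b.entries[i++]);
        }
        return 0;
}

[The diff below introduces the same shape in write_cache_pages(), with
wbc->fbatch as the batch and writeback_get_batch() as the refill step.]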
This reliance actually has a subtle implication when end == -1, because
then the returned index will be -1 as well, and thus if there is a page
present at index -1, we could be looping indefinitely (see the standalone
sketch after the diff).  But as the comment in filemap_get_folios_tag()
documents, this is already broken anyway, so we should not worry about it
here either.  The fix for that would probably be a change to the
filemap_get_folios_tag() calling convention.

Signed-off-by: Matthew Wilcox (Oracle)
[hch: updated the commit log based on feedback from Jan Kara]
Signed-off-by: Christoph Hellwig
Reviewed-by: Jan Kara
Acked-by: Dave Chinner
---
 mm/page-writeback.c | 94 ++++++++++++++++++++++-----------------------
 1 file changed, 46 insertions(+), 48 deletions(-)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index cec683c7217d2e..d6ac414ddce9ca 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2475,6 +2475,7 @@ int write_cache_pages(struct address_space *mapping,
 {
         int error;
         pgoff_t end;            /* Inclusive */
+        int i = 0;

         if (wbc->range_cyclic) {
                 wbc->index = mapping->writeback_index; /* prev offset */
@@ -2489,63 +2490,60 @@ int write_cache_pages(struct address_space *mapping,
         folio_batch_init(&wbc->fbatch);
         wbc->err = 0;

-        while (wbc->index <= end) {
-                int i;
-
-                writeback_get_batch(mapping, wbc);
+        for (;;) {
+                struct folio *folio;
+                unsigned long nr;

+                if (i == wbc->fbatch.nr) {
+                        writeback_get_batch(mapping, wbc);
+                        i = 0;
+                }
                 if (wbc->fbatch.nr == 0)
                         break;

-                for (i = 0; i < wbc->fbatch.nr; i++) {
-                        struct folio *folio = wbc->fbatch.folios[i];
-                        unsigned long nr;
+                folio = wbc->fbatch.folios[i++];

-                        folio_lock(folio);
-                        if (!folio_prepare_writeback(mapping, wbc, folio)) {
-                                folio_unlock(folio);
-                                continue;
-                        }
+                folio_lock(folio);
+                if (!folio_prepare_writeback(mapping, wbc, folio)) {
+                        folio_unlock(folio);
+                        continue;
+                }

-                        trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
+                trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));

-                        error = writepage(folio, wbc, data);
-                        nr = folio_nr_pages(folio);
-                        wbc->nr_to_write -= nr;
+                error = writepage(folio, wbc, data);
+                nr = folio_nr_pages(folio);
+                wbc->nr_to_write -= nr;

-                        /*
-                         * Handle the legacy AOP_WRITEPAGE_ACTIVATE magic return
-                         * value. Eventually all instances should just unlock
-                         * the folio themselves and return 0;
-                         */
-                        if (error == AOP_WRITEPAGE_ACTIVATE) {
-                                folio_unlock(folio);
-                                error = 0;
-                        }
-
-                        if (error && !wbc->err)
-                                wbc->err = error;
+                /*
+                 * Handle the legacy AOP_WRITEPAGE_ACTIVATE magic return value.
+                 * Eventually all instances should just unlock the folio
+                 * themselves and return 0;
+                 */
+                if (error == AOP_WRITEPAGE_ACTIVATE) {
+                        folio_unlock(folio);
+                        error = 0;
+                }

-                        /*
-                         * For integrity sync we have to keep going until we
-                         * have written all the folios we tagged for writeback
-                         * prior to entering this loop, even if we run past
-                         * wbc->nr_to_write or encounter errors. This is
-                         * because the file system may still have state to clear
-                         * for each folio. We'll eventually return the first
-                         * error encountered.
-                         *
-                         * For background writeback just push done_index past
-                         * this folio so that we can just restart where we left
-                         * off and media errors won't choke writeout for the
-                         * entire file.
-                         */
-                        if (wbc->sync_mode == WB_SYNC_NONE &&
-                            (wbc->err || wbc->nr_to_write <= 0)) {
-                                writeback_finish(mapping, wbc,
-                                                folio->index + nr);
-                                return error;
-                        }
+                if (error && !wbc->err)
+                        wbc->err = error;
+
+                /*
+                 * For integrity sync we have to keep going until we have
+                 * written all the folios we tagged for writeback prior to
+                 * entering this loop, even if we run past wbc->nr_to_write or
+                 * encounter errors. This is because the file system may still
+                 * have state to clear for each folio. We'll eventually return
+                 * the first error encountered.
+                 *
+                 * For background writeback just push done_index past this folio
+                 * so that we can just restart where we left off and media
+                 * errors won't choke writeout for the entire file.
+                 */
+                if (wbc->sync_mode == WB_SYNC_NONE &&
+                    (wbc->err || wbc->nr_to_write <= 0)) {
+                        writeback_finish(mapping, wbc, folio->index + nr);
+                        return error;
                 }
         }
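[Editor's sketch, not part of the patch: the standalone illustration
referenced in the commit log above, making the end == -1 caveat concrete.
pgoff_t is an unsigned long in the kernel, so advancing the next start
index past the largest representable index wraps back to 0 instead of
terminating the scan.]

#include <stdio.h>

typedef unsigned long pgoff_t;          /* as in the kernel */

int main(void)
{
        pgoff_t end = (pgoff_t)-1;      /* "no upper bound", i.e. the whole file */
        pgoff_t last = end;             /* imagine a page cached at this index */
        pgoff_t next = last + 1;        /* unsigned arithmetic wraps to 0 */

        printf("end  = %lu\n", end);
        printf("next = %lu\n", next);   /* prints 0: the scan would start over */
        return 0;
}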