From patchwork Mon Dec 18 15:35:42 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13497160
From: Christoph Hellwig
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", Jan Kara, David Howells, Brian Foster,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 06/17] writeback: Factor out writeback_finish()
Date: Mon, 18 Dec 2023 16:35:42 +0100
Message-Id: <20231218153553.807799-7-hch@lst.de>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20231218153553.807799-1-hch@lst.de>
References: <20231218153553.807799-1-hch@lst.de>

From: "Matthew Wilcox (Oracle)"

Instead of having a 'done' variable that controls the nested loops, have
a writeback_finish() helper that performs the loop epilogue so that each
exit path can return directly.  This involves keeping more state in
struct writeback_control, but that just moves data allocated on the
stack to being allocated slightly earlier on the stack.
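As an aside for readers (illustration only, not part of the patch): the
shape of the change is that, instead of setting a 'done' flag, breaking
out of the nested loops and running the epilogue at the bottom of the
function, every exit path calls a small finish helper and returns on the
spot.  The stand-alone sketch below mirrors that pattern; all names
(struct ctx, walk, walk_finish) are made up for the example and are not
kernel APIs, and the resume logic is simplified from the range_cyclic
handling in the patch.

#include <stdio.h>

struct ctx {
	int err;		/* first error seen, like wbc->err */
	long resume_index;	/* where to restart next time, like mapping->writeback_index */
};

/* Epilogue run on every exit path, in the spirit of writeback_finish(). */
static void walk_finish(struct ctx *c, long done_index)
{
	/* restart from where we stopped on error, else from the beginning */
	c->resume_index = c->err ? done_index : 0;
}

static int walk(struct ctx *c, int nbatches, int per_batch)
{
	for (int b = 0; b < nbatches; b++) {
		for (int i = 0; i < per_batch; i++) {
			/* pretend the second item of the second batch fails */
			int error = (b == 1 && i == 1) ? -5 : 0;

			if (error && !c->err)
				c->err = error;
			if (c->err) {
				/* no 'done' flag: finish and return directly */
				walk_finish(c, (long)b * per_batch + i + 1);
				return error;
			}
		}
	}
	walk_finish(c, 0);
	return 0;
}

int main(void)
{
	struct ctx c = { 0, 0 };
	int ret = walk(&c, 3, 4);

	printf("ret=%d, resume_index=%ld\n", ret, c.resume_index);
	return 0;
}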
Signed-off-by: Matthew Wilcox (Oracle)
[hch: heavily rebased, reordered and commented struct writeback_control]
Signed-off-by: Christoph Hellwig
Reviewed-by: Jan Kara
---
 include/linux/writeback.h |  6 +++
 mm/page-writeback.c       | 79 ++++++++++++++++++++-------------------
 2 files changed, 47 insertions(+), 38 deletions(-)

diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 833ec38fc3e0c9..390f2dd03cf27e 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -11,6 +11,7 @@
 #include <linux/flex_proportions.h>
 #include <linux/backing-dev-defs.h>
 #include <linux/blk_types.h>
+#include <linux/pagevec.h>
 
 struct bio;
 
@@ -40,6 +41,7 @@ enum writeback_sync_modes {
  * in a manner such that unspecified fields are set to zero.
  */
 struct writeback_control {
+	/* public fields that can be set and/or consumed by the caller: */
 	long nr_to_write;		/* Write this many pages, and decrement
 					   this for each page written */
 	long pages_skipped;		/* Pages which were not written */
@@ -77,6 +79,10 @@ struct writeback_control {
 	 */
 	struct swap_iocb **swap_plug;
 
+	/* internal fields used by the ->writepages implementation: */
+	struct folio_batch fbatch;
+	int err;
+
 #ifdef CONFIG_CGROUP_WRITEBACK
 	struct bdi_writeback *wb;	/* wb this writeback is issued under */
 	struct inode *inode;		/* inode being written out */
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index c798c0d6d0abb4..564d5faf562ba7 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2360,6 +2360,29 @@ void tag_pages_for_writeback(struct address_space *mapping,
 }
 EXPORT_SYMBOL(tag_pages_for_writeback);
 
+static void writeback_finish(struct address_space *mapping,
+		struct writeback_control *wbc, pgoff_t done_index)
+{
+	folio_batch_release(&wbc->fbatch);
+
+	/*
+	 * For range cyclic writeback we need to remember where we stopped so
+	 * that we can continue there next time we are called. If we hit the
+	 * last page and there is more work to be done, wrap back to the start
+	 * of the file.
+	 *
+	 * For non-cyclic writeback we always start looking up at the beginning
+	 * of the file if we are called again, which can only happen due to
+	 * -ENOMEM from the file system.
+	 */
+	if (wbc->range_cyclic) {
+		if (wbc->err || wbc->nr_to_write <= 0)
+			mapping->writeback_index = done_index;
+		else
+			mapping->writeback_index = 0;
+	}
+}
+
 /**
  * write_cache_pages - walk the list of dirty pages of the given address space and write all of them.
 * @mapping: address space structure to write
@@ -2395,17 +2418,12 @@ int write_cache_pages(struct address_space *mapping,
 		      struct writeback_control *wbc, writepage_t writepage,
 		      void *data)
 {
-	int ret = 0;
-	int done = 0;
 	int error;
-	struct folio_batch fbatch;
 	int nr_folios;
 	pgoff_t index;
 	pgoff_t end;		/* Inclusive */
-	pgoff_t done_index;
 	xa_mark_t tag;
 
-	folio_batch_init(&fbatch);
 	if (wbc->range_cyclic) {
 		index = mapping->writeback_index; /* prev offset */
 		end = -1;
@@ -2419,22 +2437,23 @@ int write_cache_pages(struct address_space *mapping,
 	} else {
 		tag = PAGECACHE_TAG_DIRTY;
 	}
-	done_index = index;
-	while (!done && (index <= end)) {
+
+	folio_batch_init(&wbc->fbatch);
+	wbc->err = 0;
+
+	while (index <= end) {
 		int i;
 
 		nr_folios = filemap_get_folios_tag(mapping, &index, end,
-				tag, &fbatch);
+				tag, &wbc->fbatch);
 
 		if (nr_folios == 0)
 			break;
 
 		for (i = 0; i < nr_folios; i++) {
-			struct folio *folio = fbatch.folios[i];
+			struct folio *folio = wbc->fbatch.folios[i];
 			unsigned long nr;
 
-			done_index = folio->index;
-
 			folio_lock(folio);
 
 			/*
@@ -2481,6 +2500,9 @@ int write_cache_pages(struct address_space *mapping,
 				folio_unlock(folio);
 				error = 0;
 			}
+
+			if (error && !wbc->err)
+				wbc->err = error;
 
 			/*
 			 * For integrity sync we have to keep going until we
@@ -2496,38 +2518,19 @@ int write_cache_pages(struct address_space *mapping,
 			 * off and media errors won't choke writeout for the
 			 * entire file.
 			 */
-			if (error && !ret)
-				ret = error;
-			if (wbc->sync_mode == WB_SYNC_NONE) {
-				if (ret || wbc->nr_to_write <= 0) {
-					done_index = folio->index + nr;
-					done = 1;
-					break;
-				}
+			if (wbc->sync_mode == WB_SYNC_NONE &&
+			    (wbc->err || wbc->nr_to_write <= 0)) {
+				writeback_finish(mapping, wbc,
+						folio->index + nr);
+				return error;
 			}
 		}
-		folio_batch_release(&fbatch);
+		folio_batch_release(&wbc->fbatch);
 		cond_resched();
 	}
 
-	/*
-	 * For range cyclic writeback we need to remember where we stopped so
-	 * that we can continue there next time we are called. If we hit the
-	 * last page and there is more work to be done, wrap back to the start
-	 * of the file.
-	 *
-	 * For non-cyclic writeback we always start looking up at the beginning
-	 * of the file if we are called again, which can only happen due to
-	 * -ENOMEM from the file system.
-	 */
-	if (wbc->range_cyclic) {
-		if (done)
-			mapping->writeback_index = done_index;
-		else
-			mapping->writeback_index = 0;
-	}
-
-	return ret;
+	writeback_finish(mapping, wbc, 0);
+	return 0;
 }
 EXPORT_SYMBOL(write_cache_pages);
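
For context only (not part of this patch): write_cache_pages() is the loop
that a filesystem's ->writepages method drives by passing in a per-folio
callback.  The fragment below sketches that typical calling pattern; the
myfs_* names are hypothetical and only illustrate where the loop above is
used.

#include <linux/pagemap.h>
#include <linux/writeback.h>

/* Hypothetical per-folio callback, invoked once for each dirty folio found. */
static int myfs_writepage_cb(struct folio *folio, struct writeback_control *wbc,
		void *data)
{
	/* a real filesystem would start I/O for this folio here */
	return 0;
}

/* Hypothetical ->writepages implementation built on write_cache_pages(). */
static int myfs_writepages(struct address_space *mapping,
		struct writeback_control *wbc)
{
	return write_cache_pages(mapping, wbc, myfs_writepage_cb, NULL);
}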