From patchwork Fri Dec 22 15:08:16 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13503414
From: Christoph Hellwig
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", Jan Kara, David Howells, Brian Foster,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara,
    Dave Chinner
Subject: [PATCH 06/17] writeback: Factor out writeback_finish()
Date: Fri, 22 Dec 2023 16:08:16 +0100
Message-Id: <20231222150827.1329938-7-hch@lst.de>
In-Reply-To: <20231222150827.1329938-1-hch@lst.de>
References: <20231222150827.1329938-1-hch@lst.de>

From: "Matthew Wilcox (Oracle)"

Instead of having a 'done' variable that controls the nested loops, have
a writeback_finish() that can be returned directly.  This involves
keeping more things in writeback_control, but it's just moving stuff
allocated on the stack to being allocated slightly earlier on the stack.
Signed-off-by: Matthew Wilcox (Oracle)
[hch: heavily rebased, reordered and commented struct writeback_control]
Signed-off-by: Christoph Hellwig
Reviewed-by: Jan Kara
Acked-by: Dave Chinner
---
 include/linux/writeback.h |  6 +++
 mm/page-writeback.c       | 79 ++++++++++++++++++++-------------------
 2 files changed, 47 insertions(+), 38 deletions(-)

diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 833ec38fc3e0c9..390f2dd03cf27e 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -11,6 +11,7 @@
 #include <linux/flex_proportions.h>
 #include <linux/backing-dev-defs.h>
 #include <linux/blk_types.h>
+#include <linux/pagevec.h>
 
 struct bio;
 
@@ -40,6 +41,7 @@ enum writeback_sync_modes {
  * in a manner such that unspecified fields are set to zero.
  */
 struct writeback_control {
+	/* public fields that can be set and/or consumed by the caller: */
 	long nr_to_write;		/* Write this many pages, and decrement
 					   this for each page written */
 	long pages_skipped;		/* Pages which were not written */
@@ -77,6 +79,10 @@ struct writeback_control {
 	 */
 	struct swap_iocb **swap_plug;
 
+	/* internal fields used by the ->writepages implementation: */
+	struct folio_batch fbatch;
+	int err;
+
 #ifdef CONFIG_CGROUP_WRITEBACK
 	struct bdi_writeback *wb;	/* wb this writeback is issued under */
 	struct inode *inode;		/* inode being written out */
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index c798c0d6d0abb4..564d5faf562ba7 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2360,6 +2360,29 @@ void tag_pages_for_writeback(struct address_space *mapping,
 }
 EXPORT_SYMBOL(tag_pages_for_writeback);
 
+static void writeback_finish(struct address_space *mapping,
+		struct writeback_control *wbc, pgoff_t done_index)
+{
+	folio_batch_release(&wbc->fbatch);
+
+	/*
+	 * For range cyclic writeback we need to remember where we stopped so
+	 * that we can continue there next time we are called. If we hit the
+	 * last page and there is more work to be done, wrap back to the start
+	 * of the file.
+	 *
+	 * For non-cyclic writeback we always start looking up at the beginning
+	 * of the file if we are called again, which can only happen due to
+	 * -ENOMEM from the file system.
+	 */
+	if (wbc->range_cyclic) {
+		if (wbc->err || wbc->nr_to_write <= 0)
+			mapping->writeback_index = done_index;
+		else
+			mapping->writeback_index = 0;
+	}
+}
+
 /**
  * write_cache_pages - walk the list of dirty pages of the given address space and write all of them.
 * @mapping: address space structure to write
@@ -2395,17 +2418,12 @@ int write_cache_pages(struct address_space *mapping,
 		      struct writeback_control *wbc, writepage_t writepage,
 		      void *data)
 {
-	int ret = 0;
-	int done = 0;
 	int error;
-	struct folio_batch fbatch;
 	int nr_folios;
 	pgoff_t index;
 	pgoff_t end;		/* Inclusive */
-	pgoff_t done_index;
 	xa_mark_t tag;
 
-	folio_batch_init(&fbatch);
 	if (wbc->range_cyclic) {
 		index = mapping->writeback_index; /* prev offset */
 		end = -1;
@@ -2419,22 +2437,23 @@ int write_cache_pages(struct address_space *mapping,
 	} else {
 		tag = PAGECACHE_TAG_DIRTY;
 	}
-	done_index = index;
-	while (!done && (index <= end)) {
+
+	folio_batch_init(&wbc->fbatch);
+	wbc->err = 0;
+
+	while (index <= end) {
 		int i;
 
 		nr_folios = filemap_get_folios_tag(mapping, &index, end,
-				tag, &fbatch);
+				tag, &wbc->fbatch);
 
 		if (nr_folios == 0)
 			break;
 
 		for (i = 0; i < nr_folios; i++) {
-			struct folio *folio = fbatch.folios[i];
+			struct folio *folio = wbc->fbatch.folios[i];
 			unsigned long nr;
 
-			done_index = folio->index;
-
 			folio_lock(folio);
 
 			/*
@@ -2481,6 +2500,9 @@ int write_cache_pages(struct address_space *mapping,
 				folio_unlock(folio);
 				error = 0;
 			}
+
+			if (error && !wbc->err)
+				wbc->err = error;
 
 			/*
 			 * For integrity sync we have to keep going until we
@@ -2496,38 +2518,19 @@ int write_cache_pages(struct address_space *mapping,
 			 * off and media errors won't choke writeout for the
 			 * entire file.
 			 */
-			if (error && !ret)
-				ret = error;
-			if (wbc->sync_mode == WB_SYNC_NONE) {
-				if (ret || wbc->nr_to_write <= 0) {
-					done_index = folio->index + nr;
-					done = 1;
-					break;
-				}
+			if (wbc->sync_mode == WB_SYNC_NONE &&
+			    (wbc->err || wbc->nr_to_write <= 0)) {
+				writeback_finish(mapping, wbc,
+						folio->index + nr);
+				return error;
 			}
 		}
-		folio_batch_release(&fbatch);
+		folio_batch_release(&wbc->fbatch);
 		cond_resched();
 	}
 
-	/*
-	 * For range cyclic writeback we need to remember where we stopped so
-	 * that we can continue there next time we are called. If we hit the
-	 * last page and there is more work to be done, wrap back to the start
-	 * of the file.
-	 *
-	 * For non-cyclic writeback we always start looking up at the beginning
-	 * of the file if we are called again, which can only happen due to
-	 * -ENOMEM from the file system.
-	 */
-	if (wbc->range_cyclic) {
-		if (done)
-			mapping->writeback_index = done_index;
-		else
-			mapping->writeback_index = 0;
-	}
-
-	return ret;
+	writeback_finish(mapping, wbc, 0);
+	return 0;
 }
 EXPORT_SYMBOL(write_cache_pages);
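
As an aside for readers unfamiliar with the pattern the commit message describes, below is a minimal, self-contained sketch of the same control-flow change outside the kernel. It is not the patch's code: struct wctx, wctx_finish(), write_one() and write_range() are made-up stand-ins for writeback_control, writeback_finish(), the per-folio write and write_cache_pages(). The point it illustrates is that each exit path calls one finish helper and returns directly, instead of setting a 'done' flag that the nested loops must keep checking, and that the state the helper needs lives in a context struct rather than in locals.

/*
 * Standalone illustration (plain C, hypothetical names): factor the
 * "remember where we stopped" bookkeeping into a finish helper so the
 * loop can return from its exit path instead of carrying a done flag.
 */
#include <stdio.h>

struct wctx {
	long nr_to_write;	/* budget, decremented per item written */
	int err;		/* first error seen, sticky */
	long resume_index;	/* where a later call should resume */
};

/* Counterpart of the finish helper: one place to record where we stopped. */
static void wctx_finish(struct wctx *c, long done_index)
{
	if (c->err || c->nr_to_write <= 0)
		c->resume_index = done_index;
	else
		c->resume_index = 0;	/* all work done, wrap to the start */
}

/* Pretend to write one item; fail on index 7 to exercise the error path. */
static int write_one(long index)
{
	return index == 7 ? -5 /* pretend EIO */ : 0;
}

/* Nested loops with no 'done' flag: exit paths finish and return directly. */
static int write_range(struct wctx *c, long start, long end)
{
	for (long batch = start; batch <= end; batch += 4) {		/* outer "batch" loop */
		for (long i = batch; i < batch + 4 && i <= end; i++) {	/* inner loop */
			int error = write_one(i);

			if (error && !c->err)
				c->err = error;
			c->nr_to_write--;
			if (c->err || c->nr_to_write <= 0) {
				wctx_finish(c, i + 1);
				return error;
			}
		}
	}
	wctx_finish(c, 0);
	return 0;
}

int main(void)
{
	struct wctx c = { .nr_to_write = 100 };
	int ret = write_range(&c, 0, 20);

	printf("ret=%d err=%d resume_index=%ld\n", ret, c.err, c.resume_index);
	return 0;
}

Running the sketch prints ret=-5 err=-5 resume_index=8: the error is recorded once, the loop returns from the spot where it gave up, and the resume index is set in exactly one place, which is the simplification the patch makes to write_cache_pages().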