From patchwork Mon Feb 12 07:13:35 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13552728
From: Christoph Hellwig
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Jan Kara, David Howells, Brian Foster,
    Christian Brauner, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 01/14] writeback: don't call mapping_set_error in writepage_cb
Date: Mon, 12 Feb 2024 08:13:35 +0100
Message-Id: <20240212071348.1369918-2-hch@lst.de>
In-Reply-To: <20240212071348.1369918-1-hch@lst.de>
References: <20240212071348.1369918-1-hch@lst.de>
writepage_cb is the iterator callback for write_cache_pages, which already
tracks all errors and returns them to the caller.  There is no need to
additionally call mapping_set_error, which is intended for contexts where the
error can't be directly returned (e.g. I/O completion handlers).

Remove the mapping_set_error call in writepage_cb, which is not only
superfluous but also buggy as it can be called with the error argument set to
AOP_WRITEPAGE_ACTIVATE, which is not actually an error but a magic return
value asking the caller to unlock the page.

Signed-off-by: Christoph Hellwig
---
 mm/page-writeback.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 3f255534986a2f..62901fa905f01e 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2534,9 +2534,8 @@ static int writepage_cb(struct folio *folio,
 		struct writeback_control *wbc, void *data)
 {
 	struct address_space *mapping = data;
-	int ret = mapping->a_ops->writepage(&folio->page, wbc);
-	mapping_set_error(mapping, ret);
-	return ret;
+
+	return mapping->a_ops->writepage(&folio->page, wbc);
 }
 
 int do_writepages(struct address_space *mapping, struct writeback_control *wbc)
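The distinction the commit message draws can be made concrete: mapping_set_error()
belongs in asynchronous completion context, where there is no caller left to hand
the error back to.  A minimal sketch of such a context follows; the helper and its
name are hypothetical, only mapping_set_error() and folio_end_writeback() are
existing kernel APIs.

static void example_end_writeback(struct folio *folio, int error)
{
	/*
	 * A completion handler cannot return an error to anyone, so it latches
	 * the error on the mapping for a later filemap_check_errors().
	 */
	if (error)
		mapping_set_error(folio->mapping, error);
	folio_end_writeback(folio);
}

In the submission path that writepage_cb sits on, the error instead travels back
up the call chain to write_cache_pages() and its callers.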
From patchwork Mon Feb 12 07:13:36 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13552729
From: Christoph Hellwig
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Jan Kara, David Howells, Brian Foster,
    Christian Brauner, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, Jan Kara, Dave Chinner
Subject: [PATCH 02/14] writeback: remove a duplicate prototype for tag_pages_for_writeback
Date: Mon, 12 Feb 2024 08:13:36 +0100
Message-Id: <20240212071348.1369918-3-hch@lst.de>
In-Reply-To: <20240212071348.1369918-1-hch@lst.de>
References: <20240212071348.1369918-1-hch@lst.de>

From: "Matthew Wilcox (Oracle)"

Signed-off-by: Christoph Hellwig
[hch: split from a larger patch]
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Brian Foster
Reviewed-by: Jan Kara
Acked-by: Dave Chinner
---
 include/linux/writeback.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 453736fd1d23ce..4b8cf9e4810bad 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -363,8 +363,6 @@ bool wb_over_bg_thresh(struct bdi_writeback *wb);
 typedef int (*writepage_t)(struct folio *folio, struct writeback_control *wbc,
 				void *data);
 
-void tag_pages_for_writeback(struct address_space *mapping,
-			     pgoff_t start, pgoff_t end);
 int write_cache_pages(struct address_space *mapping,
 		      struct writeback_control *wbc, writepage_t writepage,
 		      void *data);
From patchwork Mon Feb 12 07:13:37 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13552730
From: Christoph Hellwig
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Jan Kara, David Howells, Brian Foster,
    Christian Brauner, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, Jan Kara, Dave Chinner
Subject: [PATCH 03/14] writeback: fix done_index when hitting the wbc->nr_to_write
Date: Mon, 12 Feb 2024 08:13:37 +0100
Message-Id: <20240212071348.1369918-4-hch@lst.de>
In-Reply-To: <20240212071348.1369918-1-hch@lst.de>
References: <20240212071348.1369918-1-hch@lst.de>

When write_cache_pages finishes writing out a folio, it fails to update
done_index to account for the number of pages in the folio just written.
That means when range_cyclic writeback is restarted, it will be restarted at
this folio instead of after it as it should.  Fix that by updating done_index
before breaking out of the loop.

Signed-off-by: Christoph Hellwig
Reviewed-by: Brian Foster
Reviewed-by: Jan Kara
Acked-by: Dave Chinner
---
 mm/page-writeback.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 62901fa905f01e..aa3b432f77e37a 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2505,6 +2505,7 @@ int write_cache_pages(struct address_space *mapping,
 			 * keep going until we have written all the pages
 			 * we tagged for writeback prior to entering this loop.
 			 */
+			done_index = folio->index + nr;
 			wbc->nr_to_write -= nr;
 			if (wbc->nr_to_write <= 0 &&
 			    wbc->sync_mode == WB_SYNC_NONE) {
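The recorded index matters because it is what a later range_cyclic pass resumes
from.  A rough illustration of that consumer, mirroring the lookup at the top of
write_cache_pages() and shown here for orientation only:

	if (wbc->range_cyclic) {
		index = mapping->writeback_index;	/* resume where the last pass stopped */
		end = -1;
	}

So done_index has to point past the folio that was just written, not at it.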
From patchwork Mon Feb 12 07:13:38 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13552731
From: Christoph Hellwig
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Jan Kara, David Howells, Brian Foster,
    Christian Brauner, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, Jan Kara, Dave Chinner
Subject: [PATCH 04/14] writeback: also update wbc->nr_to_write on writeback failure
Date: Mon, 12 Feb 2024 08:13:38 +0100
Message-Id: <20240212071348.1369918-5-hch@lst.de>
In-Reply-To: <20240212071348.1369918-1-hch@lst.de>
References: <20240212071348.1369918-1-hch@lst.de>

When exiting write_cache_pages early due to a non-integrity write failure,
wbc->nr_to_write currently doesn't account for the folio we just failed to
write.  This doesn't matter because the callers always ignore the value on a
failure, but moving the update to common code will allow us to simplify the
code, so do it.

Signed-off-by: Christoph Hellwig
Reviewed-by: Brian Foster
Reviewed-by: Jan Kara
Acked-by: Dave Chinner
---
 mm/page-writeback.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index aa3b432f77e37a..06afba8f078515 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2473,6 +2473,7 @@ int write_cache_pages(struct address_space *mapping,
 			trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
 			error = writepage(folio, wbc, data);
 			nr = folio_nr_pages(folio);
+			wbc->nr_to_write -= nr;
 			if (unlikely(error)) {
 				/*
 				 * Handle errors according to the type of
@@ -2506,7 +2507,6 @@ int write_cache_pages(struct address_space *mapping,
 			 * we tagged for writeback prior to entering this loop.
 			 */
 			done_index = folio->index + nr;
-			wbc->nr_to_write -= nr;
 			if (wbc->nr_to_write <= 0 &&
 			    wbc->sync_mode == WB_SYNC_NONE) {
 				done = 1;
From patchwork Mon Feb 12 07:13:39 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13552732
From: Christoph Hellwig
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Jan Kara, David Howells, Brian Foster,
    Christian Brauner, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, Jan Kara, Dave Chinner
Subject: [PATCH 05/14] writeback: only update ->writeback_index for range_cyclic writeback
Date: Mon, 12 Feb 2024 08:13:39 +0100
Message-Id: <20240212071348.1369918-6-hch@lst.de>
In-Reply-To: <20240212071348.1369918-1-hch@lst.de>
References: <20240212071348.1369918-1-hch@lst.de>

mapping->writeback_index is only [1] used as the starting point for
range_cyclic writeback, so there is no point in updating it for other types
of writeback.

[1] except for btrfs_defrag_file which does really odd things with
mapping->writeback_index.  But btrfs doesn't use write_cache_pages at all,
so this isn't relevant here.

Signed-off-by: Christoph Hellwig
Reviewed-by: Brian Foster
Reviewed-by: Jan Kara
Acked-by: Dave Chinner
---
 mm/page-writeback.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 06afba8f078515..4d862f196d1f05 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2403,7 +2403,6 @@ int write_cache_pages(struct address_space *mapping,
 	pgoff_t index;
 	pgoff_t end;		/* Inclusive */
 	pgoff_t done_index;
-	int range_whole = 0;
 	xa_mark_t tag;
 
 	folio_batch_init(&fbatch);
@@ -2413,8 +2412,6 @@ int write_cache_pages(struct address_space *mapping,
 	} else {
 		index = wbc->range_start >> PAGE_SHIFT;
 		end = wbc->range_end >> PAGE_SHIFT;
-		if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)
-			range_whole = 1;
 	}
 	if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages) {
 		tag_pages_for_writeback(mapping, index, end);
@@ -2518,14 +2515,21 @@ int write_cache_pages(struct address_space *mapping,
 	}
 
 	/*
-	 * If we hit the last page and there is more work to be done: wrap
-	 * back the index back to the start of the file for the next
-	 * time we are called.
+	 * For range cyclic writeback we need to remember where we stopped so
+	 * that we can continue there next time we are called.  If we hit the
+	 * last page and there is more work to be done, wrap back to the start
+	 * of the file.
+	 *
+	 * For non-cyclic writeback we always start looking up at the beginning
+	 * of the file if we are called again, which can only happen due to
+	 * -ENOMEM from the file system.
 	 */
-	if (wbc->range_cyclic && !done)
-		done_index = 0;
-	if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))
-		mapping->writeback_index = done_index;
+	if (wbc->range_cyclic) {
+		if (done)
+			mapping->writeback_index = done_index;
+		else
+			mapping->writeback_index = 0;
+	}
 
 	return ret;
 }
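To make the two cases concrete, here is a sketch of the two styles of
writeback_control a caller might pass in (field values are illustrative and not
taken from this series); only the range_cyclic one ever reads or updates
mapping->writeback_index:

	/* cyclic, background-style writeback: resumes at mapping->writeback_index */
	struct writeback_control background_wbc = {
		.sync_mode	= WB_SYNC_NONE,
		.nr_to_write	= LONG_MAX,
		.range_cyclic	= 1,
	};

	/* explicit range, e.g. fsync-style integrity writeback: never touches it */
	struct writeback_control range_wbc = {
		.sync_mode	= WB_SYNC_ALL,
		.range_start	= 0,
		.range_end	= LLONG_MAX,
	};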
From patchwork Mon Feb 12 07:13:40 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13552733
From: Christoph Hellwig
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Jan Kara, David Howells, Brian Foster,
    Christian Brauner, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, Jan Kara
Subject: [PATCH 06/14] writeback: rework the loop termination condition in write_cache_pages
Date: Mon, 12 Feb 2024 08:13:40 +0100
Message-Id: <20240212071348.1369918-7-hch@lst.de>
In-Reply-To: <20240212071348.1369918-1-hch@lst.de>
References: <20240212071348.1369918-1-hch@lst.de>

Rework the way we deal with the cleanup after the writepage call.

First handle the magic AOP_WRITEPAGE_ACTIVATE separately from real error
returns to get it out of the way of the actual error handling path.

Then split the handling into integrity vs non-integrity branches, and return
early using a goto for the non-integrity early loop termination.  This removes
the need for the done and done_index local variables, and for assigning the
error to ret when we can just return error directly.

Signed-off-by: Christoph Hellwig
Reviewed-by: Brian Foster
Reviewed-by: Jan Kara
---
 mm/page-writeback.c | 84 ++++++++++++++++++---------------------------
 1 file changed, 33 insertions(+), 51 deletions(-)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 4d862f196d1f05..b49ee15a863e99 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2396,13 +2396,12 @@ int write_cache_pages(struct address_space *mapping,
 		void *data)
 {
 	int ret = 0;
-	int done = 0;
 	int error;
 	struct folio_batch fbatch;
+	struct folio *folio;
 	int nr_folios;
 	pgoff_t index;
 	pgoff_t end;		/* Inclusive */
-	pgoff_t done_index;
 	xa_mark_t tag;
 
 	folio_batch_init(&fbatch);
@@ -2419,8 +2418,7 @@ int write_cache_pages(struct address_space *mapping,
 	} else {
 		tag = PAGECACHE_TAG_DIRTY;
 	}
-	done_index = index;
-	while (!done && (index <= end)) {
+	while (index <= end) {
 		int i;
 
 		nr_folios = filemap_get_folios_tag(mapping, &index, end,
@@ -2430,11 +2428,7 @@ int write_cache_pages(struct address_space *mapping,
 			break;
 
 		for (i = 0; i < nr_folios; i++) {
-			struct folio *folio = fbatch.folios[i];
-			unsigned long nr;
-
-			done_index = folio->index;
-
+			folio = fbatch.folios[i];
 			folio_lock(folio);
 
 			/*
@@ -2469,45 +2463,32 @@ int write_cache_pages(struct address_space *mapping,
 			trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
 			error = writepage(folio, wbc, data);
-			nr = folio_nr_pages(folio);
-			wbc->nr_to_write -= nr;
-			if (unlikely(error)) {
-				/*
-				 * Handle errors according to the type of
-				 * writeback. There's no need to continue for
-				 * background writeback. Just push done_index
-				 * past this page so media errors won't choke
-				 * writeout for the entire file. For integrity
-				 * writeback, we must process the entire dirty
-				 * set regardless of errors because the fs may
-				 * still have state to clear for each page. In
-				 * that case we continue processing and return
-				 * the first error.
-				 */
-				if (error == AOP_WRITEPAGE_ACTIVATE) {
-					folio_unlock(folio);
-					error = 0;
-				} else if (wbc->sync_mode != WB_SYNC_ALL) {
-					ret = error;
-					done_index = folio->index + nr;
-					done = 1;
-					break;
-				}
-				if (!ret)
-					ret = error;
+			wbc->nr_to_write -= folio_nr_pages(folio);
+
+			if (error == AOP_WRITEPAGE_ACTIVATE) {
+				folio_unlock(folio);
+				error = 0;
 			}
 
 			/*
-			 * We stop writing back only if we are not doing
-			 * integrity sync. In case of integrity sync we have to
-			 * keep going until we have written all the pages
-			 * we tagged for writeback prior to entering this loop.
+			 * For integrity writeback we have to keep going until
+			 * we have written all the folios we tagged for
+			 * writeback above, even if we run past wbc->nr_to_write
+			 * or encounter errors.
+			 * We stash away the first error we encounter in
+			 * wbc->saved_err so that it can be retrieved when we're
+			 * done. This is because the file system may still have
+			 * state to clear for each folio.
+			 *
+			 * For background writeback we exit as soon as we run
+			 * past wbc->nr_to_write or encounter the first error.
 			 */
-			done_index = folio->index + nr;
-			if (wbc->nr_to_write <= 0 &&
-			    wbc->sync_mode == WB_SYNC_NONE) {
-				done = 1;
-				break;
+			if (wbc->sync_mode == WB_SYNC_ALL) {
+				if (error && !ret)
+					ret = error;
+			} else {
+				if (error || wbc->nr_to_write <= 0)
+					goto done;
 			}
 		}
 		folio_batch_release(&fbatch);
@@ -2524,14 +2505,15 @@ int write_cache_pages(struct address_space *mapping,
 	 * of the file if we are called again, which can only happen due to
 	 * -ENOMEM from the file system.
 	 */
-	if (wbc->range_cyclic) {
-		if (done)
-			mapping->writeback_index = done_index;
-		else
-			mapping->writeback_index = 0;
-	}
-
+	if (wbc->range_cyclic)
+		mapping->writeback_index = 0;
 	return ret;
+
+done:
+	if (wbc->range_cyclic)
+		mapping->writeback_index = folio->index + folio_nr_pages(folio);
+	folio_batch_release(&fbatch);
+	return error;
 }
 EXPORT_SYMBOL(write_cache_pages);
From patchwork Mon Feb 12 07:13:41 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13552734
From: Christoph Hellwig
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Jan Kara, David Howells, Brian Foster,
    Christian Brauner, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, Jan Kara, Dave Chinner
Subject: [PATCH 07/14] writeback: Factor folio_prepare_writeback() out of write_cache_pages()
Date: Mon, 12 Feb 2024 08:13:41 +0100
Message-Id: <20240212071348.1369918-8-hch@lst.de>
In-Reply-To: <20240212071348.1369918-1-hch@lst.de>
References: <20240212071348.1369918-1-hch@lst.de>

From: "Matthew Wilcox (Oracle)"

Reduce write_cache_pages() by about 30 lines; much of it is commentary, but
it all bundles nicely into an obvious function.

Signed-off-by: Matthew Wilcox (Oracle)
[hch: rename should_writeback_folio to folio_prepare_writeback based on
 a comment from Jan Kara]
Signed-off-by: Christoph Hellwig
Reviewed-by: Brian Foster
Reviewed-by: Jan Kara
Acked-by: Dave Chinner
---
 mm/page-writeback.c | 61 +++++++++++++++++++++++++++------------------
 1 file changed, 34 insertions(+), 27 deletions(-)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index b49ee15a863e99..20ff00c8be9d90 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2360,6 +2360,38 @@ void tag_pages_for_writeback(struct address_space *mapping,
 }
 EXPORT_SYMBOL(tag_pages_for_writeback);
 
+static bool folio_prepare_writeback(struct address_space *mapping,
+		struct writeback_control *wbc, struct folio *folio)
+{
+	/*
+	 * Folio truncated or invalidated. We can freely skip it then,
+	 * even for data integrity operations: the folio has disappeared
+	 * concurrently, so there could be no real expectation of this
+	 * data integrity operation even if there is now a new, dirty
+	 * folio at the same pagecache index.
+	 */
+	if (unlikely(folio->mapping != mapping))
+		return false;
+
+	/*
+	 * Did somebody else write it for us?
+	 */
+	if (!folio_test_dirty(folio))
+		return false;
+
+	if (folio_test_writeback(folio)) {
+		if (wbc->sync_mode == WB_SYNC_NONE)
+			return false;
+		folio_wait_writeback(folio);
+	}
+	BUG_ON(folio_test_writeback(folio));
+
+	if (!folio_clear_dirty_for_io(folio))
+		return false;
+
+	return true;
+}
+
 /**
  * write_cache_pages - walk the list of dirty pages of the given address space and write all of them.
  * @mapping: address space structure to write
@@ -2430,38 +2462,13 @@ int write_cache_pages(struct address_space *mapping,
 		for (i = 0; i < nr_folios; i++) {
 			folio = fbatch.folios[i];
 			folio_lock(folio);
-
-			/*
-			 * Page truncated or invalidated. We can freely skip it
-			 * then, even for data integrity operations: the page
-			 * has disappeared concurrently, so there could be no
-			 * real expectation of this data integrity operation
-			 * even if there is now a new, dirty page at the same
-			 * pagecache address.
-			 */
-			if (unlikely(folio->mapping != mapping)) {
-continue_unlock:
+			if (!folio_prepare_writeback(mapping, wbc, folio)) {
 				folio_unlock(folio);
 				continue;
 			}
 
-			if (!folio_test_dirty(folio)) {
-				/* someone wrote it for us */
-				goto continue_unlock;
-			}
-
-			if (folio_test_writeback(folio)) {
-				if (wbc->sync_mode != WB_SYNC_NONE)
-					folio_wait_writeback(folio);
-				else
-					goto continue_unlock;
-			}
-
-			BUG_ON(folio_test_writeback(folio));
-			if (!folio_clear_dirty_for_io(folio))
-				goto continue_unlock;
-
 			trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
+
 			error = writepage(folio, wbc, data);
 			wbc->nr_to_write -= folio_nr_pages(folio);
From patchwork Mon Feb 12 07:13:42 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13552735
From: Christoph Hellwig
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Jan Kara, David Howells, Brian Foster,
    Christian Brauner, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, Jan Kara, Dave Chinner
Subject: [PATCH 08/14] writeback: Factor writeback_get_batch() out of write_cache_pages()
Date: Mon, 12 Feb 2024 08:13:42 +0100
Message-Id: <20240212071348.1369918-9-hch@lst.de>
In-Reply-To: <20240212071348.1369918-1-hch@lst.de>
References: <20240212071348.1369918-1-hch@lst.de>

From: "Matthew Wilcox (Oracle)"

This simple helper will be the basis of the writeback iterator.  To make
this work, we need to remember the current index and end positions in
writeback_control.

Signed-off-by: Matthew Wilcox (Oracle)
[hch: heavily rebased, add helpers to get the tag and end index,
 don't keep the end index in struct writeback_control]
Signed-off-by: Christoph Hellwig
Reviewed-by: Brian Foster
Reviewed-by: Jan Kara
Acked-by: Dave Chinner
---
 include/linux/writeback.h |  6 ++++
 mm/page-writeback.c       | 60 +++++++++++++++++++++++++--------------
 2 files changed, 44 insertions(+), 22 deletions(-)

diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 4b8cf9e4810bad..f67b3ea866a0fb 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 
 struct bio;
 
@@ -40,6 +41,7 @@ enum writeback_sync_modes {
  * in a manner such that unspecified fields are set to zero.
  */
 struct writeback_control {
+	/* public fields that can be set and/or consumed by the caller: */
 	long nr_to_write;	/* Write this many pages, and decrement this for each page written */
 	long pages_skipped;	/* Pages which were not written */
@@ -77,6 +79,10 @@ struct writeback_control {
 	 */
 	struct swap_iocb **swap_plug;
 
+	/* internal fields used by the ->writepages implementation: */
+	struct folio_batch fbatch;
+	pgoff_t index;
+
 #ifdef CONFIG_CGROUP_WRITEBACK
 	struct bdi_writeback *wb;	/* wb this writeback is issued under */
 	struct inode *inode;		/* inode being written out */

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 20ff00c8be9d90..045ca252c0423d 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2392,6 +2392,29 @@ static bool folio_prepare_writeback(struct address_space *mapping,
 	return true;
 }
 
+static xa_mark_t wbc_to_tag(struct writeback_control *wbc)
+{
+	if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
+		return PAGECACHE_TAG_TOWRITE;
+	return PAGECACHE_TAG_DIRTY;
+}
+
+static pgoff_t wbc_end(struct writeback_control *wbc)
+{
+	if (wbc->range_cyclic)
+		return -1;
+	return wbc->range_end >> PAGE_SHIFT;
+}
+
+static void writeback_get_batch(struct address_space *mapping,
+		struct writeback_control *wbc)
+{
+	folio_batch_release(&wbc->fbatch);
+	cond_resched();
+	filemap_get_folios_tag(mapping, &wbc->index, wbc_end(wbc),
+			wbc_to_tag(wbc), &wbc->fbatch);
+}
+
 /**
  * write_cache_pages - walk the list of dirty pages of the given address space and write all of them.
  * @mapping: address space structure to write
@@ -2429,38 +2452,32 @@ int write_cache_pages(struct address_space *mapping,
 {
 	int ret = 0;
 	int error;
-	struct folio_batch fbatch;
 	struct folio *folio;
-	int nr_folios;
-	pgoff_t index;
 	pgoff_t end;		/* Inclusive */
-	xa_mark_t tag;
 
-	folio_batch_init(&fbatch);
 	if (wbc->range_cyclic) {
-		index = mapping->writeback_index; /* prev offset */
+		wbc->index = mapping->writeback_index; /* prev offset */
 		end = -1;
 	} else {
-		index = wbc->range_start >> PAGE_SHIFT;
+		wbc->index = wbc->range_start >> PAGE_SHIFT;
 		end = wbc->range_end >> PAGE_SHIFT;
 	}
-	if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages) {
-		tag_pages_for_writeback(mapping, index, end);
-		tag = PAGECACHE_TAG_TOWRITE;
-	} else {
-		tag = PAGECACHE_TAG_DIRTY;
-	}
-	while (index <= end) {
+	if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
+		tag_pages_for_writeback(mapping, wbc->index, end);
+
+	folio_batch_init(&wbc->fbatch);
+
+	while (wbc->index <= end) {
 		int i;
 
-		nr_folios = filemap_get_folios_tag(mapping, &index, end,
-				tag, &fbatch);
+		writeback_get_batch(mapping, wbc);
 
-		if (nr_folios == 0)
+		if (wbc->fbatch.nr == 0)
 			break;
 
-		for (i = 0; i < nr_folios; i++) {
-			folio = fbatch.folios[i];
+		for (i = 0; i < wbc->fbatch.nr; i++) {
+			folio = wbc->fbatch.folios[i];
+
 			folio_lock(folio);
 			if (!folio_prepare_writeback(mapping, wbc, folio)) {
 				folio_unlock(folio);
@@ -2498,8 +2515,6 @@ int write_cache_pages(struct address_space *mapping,
 				goto done;
 			}
 		}
-		folio_batch_release(&fbatch);
-		cond_resched();
 	}
 
 	/*
@@ -2512,6 +2527,7 @@ int write_cache_pages(struct address_space *mapping,
 	 * of the file if we are called again, which can only happen due to
 	 * -ENOMEM from the file system.
 	 */
+	folio_batch_release(&wbc->fbatch);
 	if (wbc->range_cyclic)
 		mapping->writeback_index = 0;
 	return ret;
@@ -2519,7 +2535,7 @@ int write_cache_pages(struct address_space *mapping,
 done:
 	if (wbc->range_cyclic)
 		mapping->writeback_index = folio->index + folio_nr_pages(folio);
-	folio_batch_release(&fbatch);
+	folio_batch_release(&wbc->fbatch);
 	return error;
 }
 EXPORT_SYMBOL(write_cache_pages);
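For orientation, this is the call chain these helpers sit under: a filesystem's
->writepages implementation hands a callback to write_cache_pages().  A
hypothetical sketch of that glue (the examplefs_* names are made up; the
writepage_t and ->writepages signatures are the real ones used above):

static int examplefs_writepage_cb(struct folio *folio,
		struct writeback_control *wbc, void *data)
{
	/* write back one folio; return 0, an errno, or AOP_WRITEPAGE_ACTIVATE */
	return examplefs_write_folio(folio, wbc, data);
}

static int examplefs_writepages(struct address_space *mapping,
		struct writeback_control *wbc)
{
	return write_cache_pages(mapping, wbc, examplefs_writepage_cb, NULL);
}

With this patch, the batching state for that loop now lives in the
writeback_control rather than on the stack of write_cache_pages().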
From patchwork Mon Feb 12 07:13:43 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13552736
From: Christoph Hellwig
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Jan Kara, David Howells, Brian Foster,
    Christian Brauner, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, Jan Kara, Dave Chinner
Subject: [PATCH 09/14] writeback: Simplify the loops in write_cache_pages()
Date: Mon, 12 Feb 2024 08:13:43 +0100
Message-Id: <20240212071348.1369918-10-hch@lst.de>
In-Reply-To: <20240212071348.1369918-1-hch@lst.de>
References: <20240212071348.1369918-1-hch@lst.de>

From: "Matthew Wilcox (Oracle)"

Collapse the two nested loops into one.  This is needed as a step towards
turning this into an iterator.

Note that this drops the "index <= end" check in the previous outer loop and
just relies on filemap_get_folios_tag() to return 0 entries when index > end.
This actually has a subtle implication when end == -1 because then the
returned index will be -1 as well and thus if there is a page present at
index -1, we could be looping indefinitely.  But as the comment in
filemap_get_folios_tag() documents, this is already broken anyway, so we
should not worry about it here either.  The fix for that would probably be a
change to the filemap_get_folios_tag() calling convention.
Signed-off-by: Matthew Wilcox (Oracle)
[hch: updated the commit log based on feedback from Jan Kara]
Signed-off-by: Christoph Hellwig
Reviewed-by: Brian Foster
Reviewed-by: Jan Kara
Acked-by: Dave Chinner
---
 mm/page-writeback.c | 75 ++++++++++++++++++++++-----------------------
 1 file changed, 36 insertions(+), 39 deletions(-)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 045ca252c0423d..a94a77b1805969 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2454,6 +2454,7 @@ int write_cache_pages(struct address_space *mapping,
 	int error;
 	struct folio *folio;
 	pgoff_t end;		/* Inclusive */
+	int i = 0;
 
 	if (wbc->range_cyclic) {
 		wbc->index = mapping->writeback_index; /* prev offset */
@@ -2467,53 +2468,49 @@ int write_cache_pages(struct address_space *mapping,
 
 	folio_batch_init(&wbc->fbatch);
 
-	while (wbc->index <= end) {
-		int i;
-
-		writeback_get_batch(mapping, wbc);
-
+	for (;;) {
+		if (i == wbc->fbatch.nr) {
+			writeback_get_batch(mapping, wbc);
+			i = 0;
+		}
 		if (wbc->fbatch.nr == 0)
 			break;
 
-		for (i = 0; i < wbc->fbatch.nr; i++) {
-			folio = wbc->fbatch.folios[i];
+		folio = wbc->fbatch.folios[i++];
 
-			folio_lock(folio);
-			if (!folio_prepare_writeback(mapping, wbc, folio)) {
-				folio_unlock(folio);
-				continue;
-			}
+		folio_lock(folio);
+		if (!folio_prepare_writeback(mapping, wbc, folio)) {
+			folio_unlock(folio);
+			continue;
+		}
 
-			trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
+		trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
 
-			error = writepage(folio, wbc, data);
-			wbc->nr_to_write -= folio_nr_pages(folio);
+		error = writepage(folio, wbc, data);
+		wbc->nr_to_write -= folio_nr_pages(folio);
 
-			if (error == AOP_WRITEPAGE_ACTIVATE) {
-				folio_unlock(folio);
-				error = 0;
-			}
+		if (error == AOP_WRITEPAGE_ACTIVATE) {
+			folio_unlock(folio);
+			error = 0;
+		}
 
-			/*
-			 * For integrity writeback we have to keep going until
-			 * we have written all the folios we tagged for
-			 * writeback above, even if we run past wbc->nr_to_write
-			 * or encounter errors.
-			 * We stash away the first error we encounter in
-			 * wbc->saved_err so that it can be retrieved when we're
-			 * done. This is because the file system may still have
-			 * state to clear for each folio.
-			 *
-			 * For background writeback we exit as soon as we run
-			 * past wbc->nr_to_write or encounter the first error.
+		/*
+		 * For integrity writeback we have to keep going until we have
+		 * written all the folios we tagged for writeback above, even if
+		 * we run past wbc->nr_to_write or encounter errors.
+		 * We stash away the first error we encounter in wbc->saved_err
+		 * so that it can be retrieved when we're done. This is because
+		 * the file system may still have state to clear for each folio.
+		 *
+		 * For background writeback we exit as soon as we run past
+		 * wbc->nr_to_write or encounter the first error.
+		 */
+		if (wbc->sync_mode == WB_SYNC_ALL) {
+			if (error && !ret)
+				ret = error;
+		} else {
+			if (error || wbc->nr_to_write <= 0)
+				goto done;
 		}
 	}

From patchwork Mon Feb 12 07:13:44 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13552737
From: Christoph Hellwig
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Jan Kara, David Howells, Brian Foster,
    Christian Brauner, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, Jan Kara, Dave Chinner
Subject: [PATCH 10/14] pagevec: Add ability to iterate a queue
Date: Mon, 12 Feb 2024 08:13:44 +0100
Message-Id: <20240212071348.1369918-11-hch@lst.de>
In-Reply-To: <20240212071348.1369918-1-hch@lst.de>
References: <20240212071348.1369918-1-hch@lst.de>
reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html From: "Matthew Wilcox (Oracle)" Add a loop counter inside the folio_batch to let us iterate from 0-nr instead of decrementing nr and treating the batch as a stack. It would generate some very weird and suboptimal I/O patterns for page writeback to iterate over the batch as a stack. Signed-off-by: Matthew Wilcox (Oracle) Signed-off-by: Christoph Hellwig Reviewed-by: Brian Foster Reviewed-by: Jan Kara Acked-by: Dave Chinner --- include/linux/pagevec.h | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h index 87cc678adc850b..fcc06c300a72c3 100644 --- a/include/linux/pagevec.h +++ b/include/linux/pagevec.h @@ -27,6 +27,7 @@ struct folio; */ struct folio_batch { unsigned char nr; + unsigned char i; bool percpu_pvec_drained; struct folio *folios[PAGEVEC_SIZE]; }; @@ -40,12 +41,14 @@ struct folio_batch { static inline void folio_batch_init(struct folio_batch *fbatch) { fbatch->nr = 0; + fbatch->i = 0; fbatch->percpu_pvec_drained = false; } static inline void folio_batch_reinit(struct folio_batch *fbatch) { fbatch->nr = 0; + fbatch->i = 0; } static inline unsigned int folio_batch_count(struct folio_batch *fbatch) @@ -75,6 +78,21 @@ static inline unsigned folio_batch_add(struct folio_batch *fbatch, return folio_batch_space(fbatch); } +/** + * folio_batch_next - Return the next folio to process. + * @fbatch: The folio batch being processed. + * + * Use this function to implement a queue of folios. + * + * Return: The next folio in the queue, or NULL if the queue is empty. + */ +static inline struct folio *folio_batch_next(struct folio_batch *fbatch) +{ + if (fbatch->i == fbatch->nr) + return NULL; + return fbatch->folios[fbatch->i++]; +} + void __folio_batch_release(struct folio_batch *pvec); static inline void folio_batch_release(struct folio_batch *fbatch) From patchwork Mon Feb 12 07:13:45 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13552738 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E51611CABF; Mon, 12 Feb 2024 07:14:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.137.202.133 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707722072; cv=none; b=TzTtmngfftpbWB7aWJ1WQjR7NGgpq4X7jqvJPafekSrVIeb2+csyiIaBmdd37QHLpGyZXImY9L3FJDp2RU1RUfLmx0ZU0zMUvueDJEiD2Z/ebFZUvRFpgrD82euhjZsVYmIJgtpkdq0iVpgxpgNgvriVogCQbsVE0bDG1bEWftM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707722072; c=relaxed/simple; bh=2FNgx33uieAGGD7HB2Qd3F2JZ17pdGt3GQ4I7EQhx2k=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=EdwM1aeL5eeA9Joj36faTAXUhVnAwdnf5CjOA3SlivoBWtqaVfta6SXzWZyBIrOrF8t+agOsYkKlNXET9W7SJhSurMRIowzlqJJkMsJ1pI2laJpLUETVpRwkQ4gfCxh+Kp3spmEBkF+6qv61G2NZWPf3VHca2aEtbNITY+vQrSw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de; spf=none smtp.mailfrom=bombadil.srs.infradead.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b=PPFKGKFd; arc=none smtp.client-ip=198.137.202.133 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none 
dis=none) header.from=lst.de Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=bombadil.srs.infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="PPFKGKFd" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=+Lom8uD40NGvlqisQyOK8nFLP+lWD9RUXEqKKIbGSn4=; b=PPFKGKFdog+pRhtnYVX3WRltrB ikV+4bwKtUpdsZ8JyPWOpR/eCx2IumzP/5BsTgIlB2axxAYRuMgIlbeJvtoS9crUEwioqaAVLIEy9 qe68HPx2+QSYY3d4+4EaLCUNFTjf6H5EsJ+Pf6Aic0UDbxShZgMHL2WP23oVmHjAD1RQyq5rOpy8j f90sfKpTzUAuob0zsJ8hcHxrSwQIiE9b6alk7Q0pmRALR7/PqiH3FBss+f4aY6y6jZnvQs41cJoJv PVW8Uvg/YNsq8/zUd6e/H02sDTt5aQbaw1r/Wu4Y+ATBQVVma61LebMzf/d0G6ueSwnLc0neemdyr lgavjWNQ==; Received: from [2001:4bb8:190:6eab:75e9:7295:a6e3:c35d] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux)) id 1rZQWL-00000004T2h-0GbS; Mon, 12 Feb 2024 07:14:29 +0000 From: Christoph Hellwig To: linux-mm@kvack.org Cc: Matthew Wilcox , Jan Kara , David Howells , Brian Foster , Christian Brauner , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara , Dave Chinner Subject: [PATCH 11/14] writeback: Use the folio_batch queue iterator Date: Mon, 12 Feb 2024 08:13:45 +0100 Message-Id: <20240212071348.1369918-12-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240212071348.1369918-1-hch@lst.de> References: <20240212071348.1369918-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html From: "Matthew Wilcox (Oracle)" Instead of keeping our own local iterator variable, use the one just added to folio_batch. 
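As a rough sketch of the resulting pattern (not part of this patch), a consumer that walks a folio_batch as a queue with the folio_batch_next() helper added in the previous patch looks like this; process_folio() is a hypothetical stand-in for the per-folio work:

	#include <linux/pagevec.h>

	/*
	 * Sketch only: iterate a folio_batch as a queue instead of a stack.
	 * process_folio() is a made-up placeholder for the per-folio work.
	 */
	static void process_batch_as_queue(struct folio_batch *fbatch)
	{
		struct folio *folio;

		/* folio_batch_next() visits folios[0..nr-1] in ascending order. */
		while ((folio = folio_batch_next(fbatch)))
			process_folio(folio);
	}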
Signed-off-by: Matthew Wilcox (Oracle) Signed-off-by: Christoph Hellwig Reviewed-by: Brian Foster Reviewed-by: Jan Kara Acked-by: Dave Chinner --- mm/page-writeback.c | 28 +++++++++++++++------------- 1 file changed, 15 insertions(+), 13 deletions(-) diff --git a/mm/page-writeback.c b/mm/page-writeback.c index a94a77b1805969..62b663debe713b 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2406,13 +2406,21 @@ static pgoff_t wbc_end(struct writeback_control *wbc) return wbc->range_end >> PAGE_SHIFT; } -static void writeback_get_batch(struct address_space *mapping, +static struct folio *writeback_get_folio(struct address_space *mapping, struct writeback_control *wbc) { - folio_batch_release(&wbc->fbatch); - cond_resched(); - filemap_get_folios_tag(mapping, &wbc->index, wbc_end(wbc), - wbc_to_tag(wbc), &wbc->fbatch); + struct folio *folio; + + folio = folio_batch_next(&wbc->fbatch); + if (!folio) { + folio_batch_release(&wbc->fbatch); + cond_resched(); + filemap_get_folios_tag(mapping, &wbc->index, wbc_end(wbc), + wbc_to_tag(wbc), &wbc->fbatch); + folio = folio_batch_next(&wbc->fbatch); + } + + return folio; } /** @@ -2454,7 +2462,6 @@ int write_cache_pages(struct address_space *mapping, int error; struct folio *folio; pgoff_t end; /* Inclusive */ - int i = 0; if (wbc->range_cyclic) { wbc->index = mapping->writeback_index; /* prev offset */ @@ -2469,15 +2476,10 @@ int write_cache_pages(struct address_space *mapping, folio_batch_init(&wbc->fbatch); for (;;) { - if (i == wbc->fbatch.nr) { - writeback_get_batch(mapping, wbc); - i = 0; - } - if (wbc->fbatch.nr == 0) + folio = writeback_get_folio(mapping, wbc); + if (!folio) break; - folio = wbc->fbatch.folios[i++]; - folio_lock(folio); if (!folio_prepare_writeback(mapping, wbc, folio)) { folio_unlock(folio); From patchwork Mon Feb 12 07:13:46 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13552739 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B263E20DD2; Mon, 12 Feb 2024 07:14:34 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.137.202.133 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707722076; cv=none; b=Y0uQWi+DF2SCFfetYIQbO/1Jvpfzi7cMnHRs1aocqoEIDu743XNeSgMwsFdb2AL4rQnVptxlNl/9e1UFvmizAir/EqMtRfgzU31UyokGFgKH39p5YL3yw5eSJjSMvJ+RitjKg/ZcRPeL9tXhXsMR9ccfYzoxwb611wr3OIY0HIE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707722076; c=relaxed/simple; bh=R03670f0uJzq6tZ590Qnsjch+falD9+U0M7iGN8msuQ=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=naYJ6VG8Y/+K2t4GxmDTP+mp2+Cv5jv0mfwfSCudsPZXkK93MzH/QiffY0c/7ixGaHu3D+n7pXiPYPxAwg5NjeZVpyrZSoijLbvw4w/gDG+j/4efEXG/60Hq1DAYVr6OiroPV0cWKjhpIFa+LzQmm1+vnUg2/V9mGyBvVJdAxM8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de; spf=none smtp.mailfrom=bombadil.srs.infradead.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b=XbHX1mZC; arc=none smtp.client-ip=198.137.202.133 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=bombadil.srs.infradead.org 
Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="XbHX1mZC" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=hDwf/sJTGlPIwBTRyPln6Zb1s+XpptYYA785c3w7vC4=; b=XbHX1mZCM5Mo7rShucMVsPQybi QYfeqa4nFaBzcM/DsAKy72s4D9gnZOQ+eZo15yg3xEHI7qigOLDFMavU0ZQlm8qdIdV8CBLgIYtNM gHhvHVUsTtsNGL+zE1jQLwkYVCj1Ruag3QbLwJ6Ta838iRkgnS1Lg6mE0P8nW/ZpIDmOytDpBIi4n yCMmI5JnIJvd0PcT1ZiaClUiaUlPN6MlYprQswFXIjkP/LX0Uli5B5o6UES3k6xUM87eJrAHtae9O FQP2lx8hKziZv9DQcZadgvj3xcrZEzcT3dQju6kWLLYmqNYz0W+KfaI8gVzCyx93dbRKa1wcqdpG3 r9C4E0cQ==; Received: from [2001:4bb8:190:6eab:75e9:7295:a6e3:c35d] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux)) id 1rZQWN-00000004T3G-2raf; Mon, 12 Feb 2024 07:14:32 +0000 From: Christoph Hellwig To: linux-mm@kvack.org Cc: Matthew Wilcox , Jan Kara , David Howells , Brian Foster , Christian Brauner , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara , Dave Chinner Subject: [PATCH 12/14] writeback: Move the folio_prepare_writeback loop out of write_cache_pages() Date: Mon, 12 Feb 2024 08:13:46 +0100 Message-Id: <20240212071348.1369918-13-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240212071348.1369918-1-hch@lst.de> References: <20240212071348.1369918-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html From: "Matthew Wilcox (Oracle)" Move the loop for should-we-write-this-folio to writeback_get_folio. 
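For orientation, with that loop moved the core of write_cache_pages() reduces to roughly the following shape (a simplified sketch of the code after the change below, with the AOP_WRITEPAGE_ACTIVATE handling and error bookkeeping elided):

	for (;;) {
		folio = writeback_get_folio(mapping, wbc);
		if (!folio)
			break;	/* no more folios tagged for writeback */

		error = writepage(folio, wbc, data);
		wbc->nr_to_write -= folio_nr_pages(folio);
		/* ... AOP_WRITEPAGE_ACTIVATE handling and error bookkeeping ... */
	}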
Signed-off-by: Matthew Wilcox (Oracle) [hch: folded the loop into the existing helper instead of a separate one as suggested by Jan Kara] Signed-off-by: Christoph Hellwig Reviewed-by: Brian Foster Reviewed-by: Jan Kara Acked-by: Dave Chinner --- mm/page-writeback.c | 18 ++++++++++-------- 1 file changed, 10 insertions(+), 8 deletions(-) diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 62b663debe713b..01f076db4f2118 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2411,6 +2411,7 @@ static struct folio *writeback_get_folio(struct address_space *mapping, { struct folio *folio; +retry: folio = folio_batch_next(&wbc->fbatch); if (!folio) { folio_batch_release(&wbc->fbatch); @@ -2418,8 +2419,17 @@ static struct folio *writeback_get_folio(struct address_space *mapping, filemap_get_folios_tag(mapping, &wbc->index, wbc_end(wbc), wbc_to_tag(wbc), &wbc->fbatch); folio = folio_batch_next(&wbc->fbatch); + if (!folio) + return NULL; + } + + folio_lock(folio); + if (unlikely(!folio_prepare_writeback(mapping, wbc, folio))) { + folio_unlock(folio); + goto retry; } + trace_wbc_writepage(wbc, inode_to_bdi(mapping->host)); return folio; } @@ -2480,14 +2490,6 @@ int write_cache_pages(struct address_space *mapping, if (!folio) break; - folio_lock(folio); - if (!folio_prepare_writeback(mapping, wbc, folio)) { - folio_unlock(folio); - continue; - } - - trace_wbc_writepage(wbc, inode_to_bdi(mapping->host)); - error = writepage(folio, wbc, data); wbc->nr_to_write -= folio_nr_pages(folio); From patchwork Mon Feb 12 07:13:47 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13552740 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AE8FF2575F; Mon, 12 Feb 2024 07:14:37 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.137.202.133 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707722079; cv=none; b=D2SGOJc145RwO3UfbIeo3hfHFm+v0dVZ7sRnzJG9dJ1dx6+Oq31tKGYFaZBz/p12uD0d/Uk0t0epyPCgcPVVhxXBr/145drA9z4FIfRntxgoTjGn4gTFa5ycu8y60vO+LESmspS1CQKpgR+vshALpqJrtpCT9HRTg+qj7yAnm+0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707722079; c=relaxed/simple; bh=ugu/VNDBSLTVtxfz1uR3NS/DADqhQQ4H/r5IEtJ7+WY=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=qN+4xDbjMAlAvBMYQmRfkdK6A0/cT+Db3p4kuUsDS5qWPStq/kvMLMHsdByDAn9TjyRpVM708Z0uT0lAsMLhZ6DHS3ORfxGJoaw4BXBDlH4liDgKpzy6HbmYbu23QUY4Rt5NwMRFcyAiyiN9CZf94H8Rfs8WEnKXZmzeOF1hrpw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de; spf=none smtp.mailfrom=bombadil.srs.infradead.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b=J1Vpv/4r; arc=none smtp.client-ip=198.137.202.133 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=bombadil.srs.infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="J1Vpv/4r" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: 
MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=3ps6DrUYY6joeRoCV3vXTg3akgu44JW6S1va7OwaCYU=; b=J1Vpv/4rTAbXBDrWBFtqLiozux yAh8D/yoZ6Q6bYS+XpZDbi9mUzzPzdz0zwxkt6zjj3ZBIN/FGdqVhwr6m9FQlhuAXa+bjd5gGQ0Xa Ni5nBeA6xwjmt7cUVDQJ60JYP5aMyMRBxqAIBTSreW5r72YJwR15vcnGdNBqjFOhgSOOUP2HNCJJy s6Fev43gNqoM3p47gmRrQt6mPNG1/FWfU9C3r+KewuO96GEwceTkX6AMzLJKO2uOSUbAe4Ilck70Y P0csjlPWIzKIvDbEiqOg6EN6xUzB3jAjJhpVeg32F9S6QMFCHv7WiTaseM18URlbhD0IEOR7zA8/Y bb8zv9BQ==; Received: from [2001:4bb8:190:6eab:75e9:7295:a6e3:c35d] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux)) id 1rZQWQ-00000004T4B-1ZIv; Mon, 12 Feb 2024 07:14:34 +0000 From: Christoph Hellwig To: linux-mm@kvack.org Cc: Matthew Wilcox , Jan Kara , David Howells , Brian Foster , Christian Brauner , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara Subject: [PATCH 13/14] writeback: add a writeback iterator Date: Mon, 12 Feb 2024 08:13:47 +0100 Message-Id: <20240212071348.1369918-14-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240212071348.1369918-1-hch@lst.de> References: <20240212071348.1369918-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Refactor the code left in write_cache_pages into an iterator that the file system can call to get the next folio for a writeback operation: struct folio *folio = NULL; while ((folio = writeback_iter(mapping, wbc, folio, &error))) { error = <do per-folio writeback>; } The twist here is that the error value is passed by reference, so that the iterator can restore it when breaking out of the loop. Handling of the magic AOP_WRITEPAGE_ACTIVATE value stays outside the iterator and is just kept in the write_cache_pages legacy wrapper in preparation for eventually killing it off. Heavily based on a for_each* based iterator from Matthew Wilcox.
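As a sketch of how a file system's ->writepages implementation would consume the new iterator (my_writepages() and my_write_folio() are hypothetical names used for illustration, not part of this series):

	static int my_writepages(struct address_space *mapping,
			struct writeback_control *wbc)
	{
		struct folio *folio = NULL;
		int error = 0;

		/*
		 * Do not break out of the loop manually: writeback_iter()
		 * returns NULL once the writeback described by @wbc is done
		 * and reports the first error encountered through &error.
		 */
		while ((folio = writeback_iter(mapping, wbc, folio, &error)))
			error = my_write_folio(folio, wbc);

		return error;
	}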
Signed-off-by: Christoph Hellwig Reviewed-by: Brian Foster Reviewed-by: Jan Kara --- include/linux/writeback.h | 4 + mm/page-writeback.c | 192 ++++++++++++++++++++++---------------- 2 files changed, 118 insertions(+), 78 deletions(-) diff --git a/include/linux/writeback.h b/include/linux/writeback.h index f67b3ea866a0fb..9845cb62e40b2d 100644 --- a/include/linux/writeback.h +++ b/include/linux/writeback.h @@ -82,6 +82,7 @@ struct writeback_control { /* internal fields used by the ->writepages implementation: */ struct folio_batch fbatch; pgoff_t index; + int saved_err; #ifdef CONFIG_CGROUP_WRITEBACK struct bdi_writeback *wb; /* wb this writeback is issued under */ @@ -366,6 +367,9 @@ int balance_dirty_pages_ratelimited_flags(struct address_space *mapping, bool wb_over_bg_thresh(struct bdi_writeback *wb); +struct folio *writeback_iter(struct address_space *mapping, + struct writeback_control *wbc, struct folio *folio, int *error); + typedef int (*writepage_t)(struct folio *folio, struct writeback_control *wbc, void *data); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 01f076db4f2118..1996200849e577 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2325,18 +2325,18 @@ void __init page_writeback_init(void) } /** - * tag_pages_for_writeback - tag pages to be written by write_cache_pages + * tag_pages_for_writeback - tag pages to be written by writeback * @mapping: address space structure to write * @start: starting page index * @end: ending page index (inclusive) * * This function scans the page range from @start to @end (inclusive) and tags - * all pages that have DIRTY tag set with a special TOWRITE tag. The idea is - * that write_cache_pages (or whoever calls this function) will then use - * TOWRITE tag to identify pages eligible for writeback. This mechanism is - * used to avoid livelocking of writeback by a process steadily creating new - * dirty pages in the file (thus it is important for this function to be quick - * so that it can tag pages faster than a dirtying process can create them). + * all pages that have DIRTY tag set with a special TOWRITE tag. The caller + * can then use the TOWRITE tag to identify pages eligible for writeback. + * This mechanism is used to avoid livelocking of writeback by a process + * steadily creating new dirty pages in the file (thus it is important for this + * function to be quick so that it can tag pages faster than a dirtying process + * can create them). */ void tag_pages_for_writeback(struct address_space *mapping, pgoff_t start, pgoff_t end) @@ -2434,69 +2434,68 @@ static struct folio *writeback_get_folio(struct address_space *mapping, } /** - * write_cache_pages - walk the list of dirty pages of the given address space and write all of them. + * writeback_iter - iterate folio of a mapping for writeback * @mapping: address space structure to write - * @wbc: subtract the number of written pages from *@wbc->nr_to_write - * @writepage: function called for each page - * @data: data passed to writepage function + * @wbc: writeback context + * @folio: previously iterated folio (%NULL to start) + * @error: in-out pointer for writeback errors (see below) * - * If a page is already under I/O, write_cache_pages() skips it, even - * if it's dirty. This is desirable behaviour for memory-cleaning writeback, - * but it is INCORRECT for data-integrity system calls such as fsync(). fsync() - * and msync() need to guarantee that all the data which was dirty at the time - * the call was made get new I/O started against them. 
If wbc->sync_mode is - * WB_SYNC_ALL then we were called for data integrity and we must wait for - * existing IO to complete. - * - * To avoid livelocks (when other process dirties new pages), we first tag - * pages which should be written back with TOWRITE tag and only then start - * writing them. For data-integrity sync we have to be careful so that we do - * not miss some pages (e.g., because some other process has cleared TOWRITE - * tag we set). The rule we follow is that TOWRITE tag can be cleared only - * by the process clearing the DIRTY tag (and submitting the page for IO). - * - * To avoid deadlocks between range_cyclic writeback and callers that hold - * pages in PageWriteback to aggregate IO until write_cache_pages() returns, - * we do not loop back to the start of the file. Doing so causes a page - * lock/page writeback access order inversion - we should only ever lock - * multiple pages in ascending page->index order, and looping back to the start - * of the file violates that rule and causes deadlocks. + * This function returns the next folio for the writeback operation described by + * @wbc on @mapping and should be called in a while loop in the ->writepages + * implementation. * - * Return: %0 on success, negative error code otherwise + * To start the writeback operation, %NULL is passed in the @folio argument, and + * for every subsequent iteration the folio returned previously should be passed + * back in. + * + * If there was an error in the per-folio writeback inside the writeback_iter() + * loop, @error should be set to the error value. + * + * Once the writeback described in @wbc has finished, this function will return + * %NULL and if there was an error in any iteration restore it to @error. + * + * Note: callers should not manually break out of the loop using break or goto + * but must keep calling writeback_iter() until it returns %NULL. + * + * Return: the folio to write or %NULL if the loop is done. */ -int write_cache_pages(struct address_space *mapping, - struct writeback_control *wbc, writepage_t writepage, - void *data) +struct folio *writeback_iter(struct address_space *mapping, + struct writeback_control *wbc, struct folio *folio, int *error) { - int ret = 0; - int error; - struct folio *folio; - pgoff_t end; /* Inclusive */ - - if (wbc->range_cyclic) { - wbc->index = mapping->writeback_index; /* prev offset */ - end = -1; - } else { - wbc->index = wbc->range_start >> PAGE_SHIFT; - end = wbc->range_end >> PAGE_SHIFT; - } - if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages) - tag_pages_for_writeback(mapping, wbc->index, end); - - folio_batch_init(&wbc->fbatch); + if (!folio) { + folio_batch_init(&wbc->fbatch); + wbc->saved_err = *error = 0; - for (;;) { - folio = writeback_get_folio(mapping, wbc); - if (!folio) - break; + /* + * For range cyclic writeback we remember where we stopped so + * that we can continue where we stopped. + * + * For non-cyclic writeback we always start at the beginning of + * the passed in range. + */ + if (wbc->range_cyclic) + wbc->index = mapping->writeback_index; + else + wbc->index = wbc->range_start >> PAGE_SHIFT; - error = writepage(folio, wbc, data); + /* + * To avoid livelocks when other processes dirty new pages, we + * first tag pages which should be written back and only then + * start writing them. + * + * For data-integrity writeback we have to be careful so that we + * do not miss some pages (e.g., because some other process has + * cleared the TOWRITE tag we set). 
The rule we follow is that + * TOWRITE tag can be cleared only by the process clearing the + * DIRTY tag (and submitting the page for I/O). + */ + if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages) + tag_pages_for_writeback(mapping, wbc->index, + wbc_end(wbc)); + } else { wbc->nr_to_write -= folio_nr_pages(folio); - if (error == AOP_WRITEPAGE_ACTIVATE) { - folio_unlock(folio); - error = 0; - } + WARN_ON_ONCE(*error > 0); /* * For integrity writeback we have to keep going until we have @@ -2510,33 +2509,70 @@ int write_cache_pages(struct address_space *mapping, * wbc->nr_to_write or encounter the first error. */ if (wbc->sync_mode == WB_SYNC_ALL) { - if (error && !ret) - ret = error; + if (*error && !wbc->saved_err) + wbc->saved_err = *error; } else { - if (error || wbc->nr_to_write <= 0) + if (*error || wbc->nr_to_write <= 0) goto done; } } - /* - * For range cyclic writeback we need to remember where we stopped so - * that we can continue there next time we are called. If we hit the - * last page and there is more work to be done, wrap back to the start - * of the file. - * - * For non-cyclic writeback we always start looking up at the beginning - * of the file if we are called again, which can only happen due to - * -ENOMEM from the file system. - */ - folio_batch_release(&wbc->fbatch); - if (wbc->range_cyclic) - mapping->writeback_index = 0; - return ret; + folio = writeback_get_folio(mapping, wbc); + if (!folio) { + /* + * To avoid deadlocks between range_cyclic writeback and callers + * that hold pages in PageWriteback to aggregate I/O until + * the writeback iteration finishes, we do not loop back to the + * start of the file. Doing so causes a page lock/page + * writeback access order inversion - we should only ever lock + * multiple pages in ascending page->index order, and looping + * back to the start of the file violates that rule and causes + * deadlocks. + */ + if (wbc->range_cyclic) + mapping->writeback_index = 0; + + /* + * Return the first error we encountered (if there was any) to + * the caller. + */ + *error = wbc->saved_err; + } + return folio; done: if (wbc->range_cyclic) mapping->writeback_index = folio->index + folio_nr_pages(folio); folio_batch_release(&wbc->fbatch); + return NULL; +} + +/** + * write_cache_pages - walk the list of dirty pages of the given address space and write all of them. + * @mapping: address space structure to write + * @wbc: subtract the number of written pages from *@wbc->nr_to_write + * @writepage: function called for each page + * @data: data passed to writepage function + * + * Return: %0 on success, negative error code otherwise + * + * Note: please use writeback_iter() instead. 
+ */ +int write_cache_pages(struct address_space *mapping, + struct writeback_control *wbc, writepage_t writepage, + void *data) +{ + struct folio *folio = NULL; + int error; + + while ((folio = writeback_iter(mapping, wbc, folio, &error))) { + error = writepage(folio, wbc, data); + if (error == AOP_WRITEPAGE_ACTIVATE) { + folio_unlock(folio); + error = 0; + } + } + return error; } EXPORT_SYMBOL(write_cache_pages); From patchwork Mon Feb 12 07:13:48 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13552741 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D8792282F9; Mon, 12 Feb 2024 07:14:38 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.137.202.133 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707722080; cv=none; b=pY5dXmAfhKc/Tmyqz7pZcPnNzFGFP4xitXIsI0KLE+Ja2Q15Fh4EluEMByQ2JD/IWHFxPYgpUrkLV8NpBqhuGfJuD1fwDQ7vPgG5wilKXsdcoIRXIdPnkdt6McZ4Lv3j22SvyuDSZewuBSepPnayIBlsF0p5I+frVdka367d2WA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707722080; c=relaxed/simple; bh=MKMZqD2/w+QWED2+pauhKDqx2bzrHQVvjTQI06WBrSw=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=E9f/ihYx8mQgbqpUG2zxIvNtBeYLYvwshJabEK0RZWsFCCL0ryR0+7nxtrv+T+H7moA8CCAF5i64oPTB0egZ06Lamfh7t6XVbGhAkyY/nfT2ATFXEJp9L8d4Ga/jOIaL6jrPmNzToQyJRj9xrrGKjJEvkIBnlCfvqNspm/whsnI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de; spf=none smtp.mailfrom=bombadil.srs.infradead.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b=qGm4DgDh; arc=none smtp.client-ip=198.137.202.133 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=bombadil.srs.infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="qGm4DgDh" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=KFNTqb7k3Ymo30IuEwDaihs1yCQpUBV3RN+kFxOoV/I=; b=qGm4DgDhGagw28iocodC5iIhS6 xsiMjPN63KLn6sFjMYCPL7iseVbubE2wS8w6bXJ+dcbmYvOHEWzOAZUcrSN0z9s6qgqeA4zE+v25Z Ieo4AhffrF+snI/gQkbKzpqjnhFvronw/KWUEfMYR/mvFkRA7Rkarbpg0iWyZZlXzrZJHp4jqfsi9 q9boZv/wYg0kxsnUDqX9688gwSdlI9bqz14Z3r5AjizGcXXBELXp/TxTAsBR1zhWDROEl8xd9MQ6z QcXvVjnwYut7/ExXvXXIN3edVcs960N7aYgwjVh71uI9ph52Edo7WhetgJdrGm5ARyCfAEeyxZjds ynVt7v1g==; Received: from [2001:4bb8:190:6eab:75e9:7295:a6e3:c35d] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux)) id 1rZQWT-00000004T5c-0Fro; Mon, 12 Feb 2024 07:14:37 +0000 From: Christoph Hellwig To: linux-mm@kvack.org Cc: Matthew Wilcox , Jan Kara , David Howells , Brian Foster , Christian Brauner , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara Subject: [PATCH 14/14] writeback: Remove a use of write_cache_pages() from do_writepages() Date: Mon, 12 Feb 2024 08:13:48 +0100 Message-Id: 
<20240212071348.1369918-15-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240212071348.1369918-1-hch@lst.de> References: <20240212071348.1369918-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html From: "Matthew Wilcox (Oracle)" Use the new writeback_iter() directly instead of indirecting through a callback. Signed-off-by: Matthew Wilcox (Oracle) [hch: ported to the while based iter style] Signed-off-by: Christoph Hellwig Reviewed-by: Brian Foster Reviewed-by: Jan Kara --- mm/page-writeback.c | 27 +++++++++++++++++---------- 1 file changed, 17 insertions(+), 10 deletions(-) diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 1996200849e577..2fd83d438f92bd 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2577,12 +2577,24 @@ int write_cache_pages(struct address_space *mapping, } EXPORT_SYMBOL(write_cache_pages); -static int writepage_cb(struct folio *folio, struct writeback_control *wbc, - void *data) +static int writeback_use_writepage(struct address_space *mapping, + struct writeback_control *wbc) { - struct address_space *mapping = data; + struct folio *folio = NULL; + struct blk_plug plug; + int err; - return mapping->a_ops->writepage(&folio->page, wbc); + blk_start_plug(&plug); + while ((folio = writeback_iter(mapping, wbc, folio, &err))) { + err = mapping->a_ops->writepage(&folio->page, wbc); + if (err == AOP_WRITEPAGE_ACTIVATE) { + folio_unlock(folio); + err = 0; + } + } + blk_finish_plug(&plug); + + return err; } int do_writepages(struct address_space *mapping, struct writeback_control *wbc) @@ -2598,12 +2610,7 @@ int do_writepages(struct address_space *mapping, struct writeback_control *wbc) if (mapping->a_ops->writepages) { ret = mapping->a_ops->writepages(mapping, wbc); } else if (mapping->a_ops->writepage) { - struct blk_plug plug; - - blk_start_plug(&plug); - ret = write_cache_pages(mapping, wbc, writepage_cb, - mapping); - blk_finish_plug(&plug); + ret = writeback_use_writepage(mapping, wbc); } else { /* deal with chardevs and other special files */ ret = 0;