From patchwork Sat Feb 3 07:11:35 2024
X-Patchwork-Id: 13543887
From: Christoph Hellwig
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Jan Kara, David Howells, Brian Foster, Christian Brauner, "Darrick J. Wong", linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara, Dave Chinner
Subject: [PATCH 01/13] writeback: remove a duplicate prototype for tag_pages_for_writeback
Date: Sat, 3 Feb 2024 08:11:35 +0100
Message-Id: <20240203071147.862076-2-hch@lst.de>
From: "Matthew Wilcox (Oracle)"

Signed-off-by: Christoph Hellwig
[hch: split from a larger patch]
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Brian Foster
Reviewed-by: Jan Kara
Acked-by: Dave Chinner
---
 include/linux/writeback.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 453736fd1d23ce..4b8cf9e4810bad 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -363,8 +363,6 @@ bool wb_over_bg_thresh(struct bdi_writeback *wb);
 typedef int (*writepage_t)(struct folio *folio, struct writeback_control *wbc,
                                 void *data);
 
-void tag_pages_for_writeback(struct address_space *mapping,
-                             pgoff_t start, pgoff_t end);
 int write_cache_pages(struct address_space *mapping,
                       struct writeback_control *wbc, writepage_t writepage,
                       void *data);

From patchwork Sat Feb 3 07:11:36 2024
X-Patchwork-Id: 13543888
From: Christoph Hellwig
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Jan Kara, David Howells, Brian Foster, Christian Brauner, "Darrick J. Wong", linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara, Dave Chinner
Subject: [PATCH 02/13] writeback: fix done_index when hitting the wbc->nr_to_write
Date: Sat, 3 Feb 2024 08:11:36 +0100
Message-Id: <20240203071147.862076-3-hch@lst.de>

When write_cache_pages finishes writing out a folio, it fails to update
done_index to account for the number of pages in the folio just written.
That means when range_cyclic writeback is restarted, it will be restarted
at this folio instead of after it as it should.  Fix that by updating
done_index before breaking out of the loop.

Signed-off-by: Christoph Hellwig
Reviewed-by: Brian Foster
Reviewed-by: Jan Kara
Acked-by: Dave Chinner
---
 mm/page-writeback.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 02147b61712bc9..b4d978f77b0b69 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2505,6 +2505,7 @@ int write_cache_pages(struct address_space *mapping,
                          * keep going until we have written all the pages
                          * we tagged for writeback prior to entering this loop.
                          */
+                       done_index = folio->index + nr;
                        wbc->nr_to_write -= nr;
                        if (wbc->nr_to_write <= 0 &&
                            wbc->sync_mode == WB_SYNC_NONE) {
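
A quick aside for readers skimming the series: the arithmetic the fix relies
on is easy to model outside the kernel.  The sketch below is plain user-space
C with made-up names (it is not the kernel code); it only shows why the
resume point has to be folio->index + nr rather than folio->index once a
folio can span several pages.

#include <stdio.h>

/* Toy model: a "folio" covers nr pages starting at index.  Resuming a
 * cyclic pass at index would rewrite the folio we just wrote; resuming
 * at index + nr continues cleanly after it. */
struct folio_model {
        unsigned long index;
        unsigned long nr;
};

int main(void)
{
        struct folio_model just_written = { .index = 8, .nr = 4 };

        printf("wrote pages %lu..%lu, next cyclic pass resumes at %lu\n",
               just_written.index,
               just_written.index + just_written.nr - 1,
               just_written.index + just_written.nr);
        return 0;
}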

From patchwork Sat Feb 3 07:11:37 2024
X-Patchwork-Id: 13543889
From: Christoph Hellwig
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Jan Kara, David Howells, Brian Foster, Christian Brauner, "Darrick J. Wong", linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara, Dave Chinner
Subject: [PATCH 03/13] writeback: also update wbc->nr_to_write on writeback failure
Date: Sat, 3 Feb 2024 08:11:37 +0100
Message-Id: <20240203071147.862076-4-hch@lst.de>

When exiting write_cache_pages early due to a non-integrity write
failure, wbc->nr_to_write currently doesn't account for the folio we
just failed to write.  This doesn't matter because the callers always
ignore the value on a failure, but moving the update to common code
will allow us to simplify the code, so do it.

Signed-off-by: Christoph Hellwig
Reviewed-by: Brian Foster
Reviewed-by: Jan Kara
Acked-by: Dave Chinner
---
 mm/page-writeback.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index b4d978f77b0b69..ee9eb347890cd3 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2473,6 +2473,7 @@ int write_cache_pages(struct address_space *mapping,
                        trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
                        error = writepage(folio, wbc, data);
                        nr = folio_nr_pages(folio);
+                       wbc->nr_to_write -= nr;
                        if (unlikely(error)) {
                                /*
                                 * Handle errors according to the type of
@@ -2506,7 +2507,6 @@ int write_cache_pages(struct address_space *mapping,
                         * we tagged for writeback prior to entering this loop.
                         */
                        done_index = folio->index + nr;
-                       wbc->nr_to_write -= nr;
                        if (wbc->nr_to_write <= 0 &&
                            wbc->sync_mode == WB_SYNC_NONE) {
                                done = 1;

From patchwork Sat Feb 3 07:11:38 2024
X-Patchwork-Id: 13543890
Wong" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara , Dave Chinner Subject: [PATCH 04/13] writeback: only update ->writeback_index for range_cyclic writeback Date: Sat, 3 Feb 2024 08:11:38 +0100 Message-Id: <20240203071147.862076-5-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240203071147.862076-1-hch@lst.de> References: <20240203071147.862076-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html mapping->writeback_index is only [1] used as the starting point for range_cyclic writeback, so there is no point in updating it for other types of writeback. [1] except for btrfs_defrag_file which does really odd things with mapping->writeback_index. But btrfs doesn't use write_cache_pages at all, so this isn't relevant here. Signed-off-by: Christoph Hellwig Reviewed-by: Brian Foster Reviewed-by: Jan Kara Acked-by: Dave Chinner --- mm/page-writeback.c | 24 ++++++++++++++---------- 1 file changed, 14 insertions(+), 10 deletions(-) diff --git a/mm/page-writeback.c b/mm/page-writeback.c index ee9eb347890cd3..c7c494526bc650 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2403,7 +2403,6 @@ int write_cache_pages(struct address_space *mapping, pgoff_t index; pgoff_t end; /* Inclusive */ pgoff_t done_index; - int range_whole = 0; xa_mark_t tag; folio_batch_init(&fbatch); @@ -2413,8 +2412,6 @@ int write_cache_pages(struct address_space *mapping, } else { index = wbc->range_start >> PAGE_SHIFT; end = wbc->range_end >> PAGE_SHIFT; - if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX) - range_whole = 1; } if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages) { tag_pages_for_writeback(mapping, index, end); @@ -2518,14 +2515,21 @@ int write_cache_pages(struct address_space *mapping, } /* - * If we hit the last page and there is more work to be done: wrap - * back the index back to the start of the file for the next - * time we are called. + * For range cyclic writeback we need to remember where we stopped so + * that we can continue there next time we are called. If we hit the + * last page and there is more work to be done, wrap back to the start + * of the file. + * + * For non-cyclic writeback we always start looking up at the beginning + * of the file if we are called again, which can only happen due to + * -ENOMEM from the file system. 
         */
-       if (wbc->range_cyclic && !done)
-               done_index = 0;
-       if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))
-               mapping->writeback_index = done_index;
+       if (wbc->range_cyclic) {
+               if (done)
+                       mapping->writeback_index = done_index;
+               else
+                       mapping->writeback_index = 0;
+       }
        return ret;
 }
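
The rule this hunk implements can be stated as a tiny self-contained
function.  The following is an illustrative user-space sketch with
hypothetical names, not the kernel implementation: only cyclic writeback
records a resume point, a completed cyclic pass wraps to the start, and
ranged writeback leaves the saved index untouched.

#include <stdbool.h>
#include <stdio.h>

/* Toy model of the resume rule described above (made-up names). */
static unsigned long next_writeback_start(bool range_cyclic, bool stopped_early,
                                          unsigned long stop_index,
                                          unsigned long saved_index)
{
        if (!range_cyclic)
                return saved_index;     /* ranged writeback: leave it alone */
        return stopped_early ? stop_index : 0;
}

int main(void)
{
        printf("cyclic, stopped early at 42 -> %lu\n",
               next_writeback_start(true, true, 42, 7));
        printf("cyclic, pass completed      -> %lu\n",
               next_writeback_start(true, false, 42, 7));
        printf("ranged writeback            -> %lu (unchanged)\n",
               next_writeback_start(false, true, 42, 7));
        return 0;
}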
Wong" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 05/13] writeback: rework the loop termination condition in write_cache_pages Date: Sat, 3 Feb 2024 08:11:39 +0100 Message-Id: <20240203071147.862076-6-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240203071147.862076-1-hch@lst.de> References: <20240203071147.862076-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Rework the way we deal with the cleanup after the writepage call. First handle the magic AOP_WRITEPAGE_ACTIVATE separately from real error returns to get it out of the way of the actual error handling path. The split the handling on intgrity vs non-integrity branches first, and return early using a goto for the non-ingegrity early loop condition to remove the need for the done and done_index local variables, and for assigning the error to ret when we can just return error directly. Signed-off-by: Christoph Hellwig Reviewed-by: Jan Kara Reviewed-by: Brian Foster --- mm/page-writeback.c | 84 ++++++++++++++++++--------------------------- 1 file changed, 33 insertions(+), 51 deletions(-) diff --git a/mm/page-writeback.c b/mm/page-writeback.c index c7c494526bc650..88b2c4c111c01b 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2396,13 +2396,12 @@ int write_cache_pages(struct address_space *mapping, void *data) { int ret = 0; - int done = 0; int error; struct folio_batch fbatch; + struct folio *folio; int nr_folios; pgoff_t index; pgoff_t end; /* Inclusive */ - pgoff_t done_index; xa_mark_t tag; folio_batch_init(&fbatch); @@ -2419,8 +2418,7 @@ int write_cache_pages(struct address_space *mapping, } else { tag = PAGECACHE_TAG_DIRTY; } - done_index = index; - while (!done && (index <= end)) { + while (index <= end) { int i; nr_folios = filemap_get_folios_tag(mapping, &index, end, @@ -2430,11 +2428,7 @@ int write_cache_pages(struct address_space *mapping, break; for (i = 0; i < nr_folios; i++) { - struct folio *folio = fbatch.folios[i]; - unsigned long nr; - - done_index = folio->index; - + folio = fbatch.folios[i]; folio_lock(folio); /* @@ -2469,45 +2463,32 @@ int write_cache_pages(struct address_space *mapping, trace_wbc_writepage(wbc, inode_to_bdi(mapping->host)); error = writepage(folio, wbc, data); - nr = folio_nr_pages(folio); - wbc->nr_to_write -= nr; - if (unlikely(error)) { - /* - * Handle errors according to the type of - * writeback. There's no need to continue for - * background writeback. Just push done_index - * past this page so media errors won't choke - * writeout for the entire file. For integrity - * writeback, we must process the entire dirty - * set regardless of errors because the fs may - * still have state to clear for each page. In - * that case we continue processing and return - * the first error. - */ - if (error == AOP_WRITEPAGE_ACTIVATE) { - folio_unlock(folio); - error = 0; - } else if (wbc->sync_mode != WB_SYNC_ALL) { - ret = error; - done_index = folio->index + nr; - done = 1; - break; - } - if (!ret) - ret = error; + wbc->nr_to_write -= folio_nr_pages(folio); + + if (error == AOP_WRITEPAGE_ACTIVATE) { + folio_unlock(folio); + error = 0; } /* - * We stop writing back only if we are not doing - * integrity sync. 
-                        * We stop writing back only if we are not doing
-                        * integrity sync. In case of integrity sync we have to
-                        * keep going until we have written all the pages
-                        * we tagged for writeback prior to entering this loop.
+                        * For integrity writeback we have to keep going until
+                        * we have written all the folios we tagged for
+                        * writeback above, even if we run past wbc->nr_to_write
+                        * or encounter errors.
+                        * We stash away the first error we encounter in
+                        * wbc->saved_err so that it can be retrieved when we're
+                        * done. This is because the file system may still have
+                        * state to clear for each folio.
+                        *
+                        * For background writeback we exit as soon as we run
+                        * past wbc->nr_to_write or encounter the first error.
                         */
-                       done_index = folio->index + nr;
-                       if (wbc->nr_to_write <= 0 &&
-                           wbc->sync_mode == WB_SYNC_NONE) {
-                               done = 1;
-                               break;
+                       if (wbc->sync_mode == WB_SYNC_ALL) {
+                               if (error && !ret)
+                                       ret = error;
+                       } else {
+                               if (error || wbc->nr_to_write <= 0)
+                                       goto done;
                        }
                }
                folio_batch_release(&fbatch);
@@ -2524,14 +2505,15 @@
         * of the file if we are called again, which can only happen due to
         * -ENOMEM from the file system.
         */
-       if (wbc->range_cyclic) {
-               if (done)
-                       mapping->writeback_index = done_index;
-               else
-                       mapping->writeback_index = 0;
-       }
-
+       if (wbc->range_cyclic)
+               mapping->writeback_index = 0;
        return ret;
+
+done:
+       folio_batch_release(&fbatch);
+       if (wbc->range_cyclic)
+               mapping->writeback_index = folio->index + folio_nr_pages(folio);
+       return error;
 }
 EXPORT_SYMBOL(write_cache_pages);

From patchwork Sat Feb 3 07:11:40 2024
X-Patchwork-Id: 13543892
From: Christoph Hellwig
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Jan Kara, David Howells, Brian Foster, Christian Brauner, "Darrick J. Wong", linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara, Dave Chinner
Subject: [PATCH 06/13] writeback: Factor folio_prepare_writeback() out of write_cache_pages()
Date: Sat, 3 Feb 2024 08:11:40 +0100
Message-Id: <20240203071147.862076-7-hch@lst.de>

From: "Matthew Wilcox (Oracle)"

Reduce write_cache_pages() by about 30 lines; much of it is commentary,
but it all bundles nicely into an obvious function.

Signed-off-by: Matthew Wilcox (Oracle)
[hch: rename should_writeback_folio to folio_prepare_writeback based on
 a comment from Jan Kara]
Signed-off-by: Christoph Hellwig
Reviewed-by: Brian Foster
Reviewed-by: Jan Kara
Acked-by: Dave Chinner
---
 mm/page-writeback.c | 61 +++++++++++++++++++++++++--------------------
 1 file changed, 34 insertions(+), 27 deletions(-)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 88b2c4c111c01b..949193624baa38 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2360,6 +2360,38 @@ void tag_pages_for_writeback(struct address_space *mapping,
 }
 EXPORT_SYMBOL(tag_pages_for_writeback);
 
+static bool folio_prepare_writeback(struct address_space *mapping,
+               struct writeback_control *wbc, struct folio *folio)
+{
+       /*
+        * Folio truncated or invalidated. We can freely skip it then,
+        * even for data integrity operations: the folio has disappeared
+        * concurrently, so there could be no real expectation of this
+        * data integrity operation even if there is now a new, dirty
+        * folio at the same pagecache index.
+        */
+       if (unlikely(folio->mapping != mapping))
+               return false;
+
+       /*
+        * Did somebody else write it for us?
+        */
+       if (!folio_test_dirty(folio))
+               return false;
+
+       if (folio_test_writeback(folio)) {
+               if (wbc->sync_mode == WB_SYNC_NONE)
+                       return false;
+               folio_wait_writeback(folio);
+       }
+       BUG_ON(folio_test_writeback(folio));
+
+       if (!folio_clear_dirty_for_io(folio))
+               return false;
+
+       return true;
+}
+
 /**
  * write_cache_pages - walk the list of dirty pages of the given address space and write all of them.
  * @mapping: address space structure to write
@@ -2430,38 +2462,13 @@ int write_cache_pages(struct address_space *mapping,
                for (i = 0; i < nr_folios; i++) {
                        folio = fbatch.folios[i];
                        folio_lock(folio);
-
-                       /*
-                        * Page truncated or invalidated. We can freely skip it
-                        * then, even for data integrity operations: the page
-                        * has disappeared concurrently, so there could be no
-                        * real expectation of this data integrity operation
-                        * even if there is now a new, dirty page at the same
-                        * pagecache address.
-                        */
-                       if (unlikely(folio->mapping != mapping)) {
-continue_unlock:
+                       if (!folio_prepare_writeback(mapping, wbc, folio)) {
                                folio_unlock(folio);
                                continue;
                        }
-                       if (!folio_test_dirty(folio)) {
-                               /* someone wrote it for us */
-                               goto continue_unlock;
-                       }
-
-                       if (folio_test_writeback(folio)) {
-                               if (wbc->sync_mode != WB_SYNC_NONE)
-                                       folio_wait_writeback(folio);
-                               else
-                                       goto continue_unlock;
-                       }
-
-                       BUG_ON(folio_test_writeback(folio));
-                       if (!folio_clear_dirty_for_io(folio))
-                               goto continue_unlock;
-
                        trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
+
                        error = writepage(folio, wbc, data);
                        wbc->nr_to_write -= folio_nr_pages(folio);

From patchwork Sat Feb 3 07:11:41 2024
X-Patchwork-Id: 13543893
From: Christoph Hellwig
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Jan Kara, David Howells, Brian Foster, Christian Brauner, "Darrick J. Wong", linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara, Dave Chinner
Subject: [PATCH 07/13] writeback: Factor writeback_get_batch() out of write_cache_pages()
Date: Sat, 3 Feb 2024 08:11:41 +0100
Message-Id: <20240203071147.862076-8-hch@lst.de>

From: "Matthew Wilcox (Oracle)"

This simple helper will be the basis of the writeback iterator.
To make this work, we need to remember the current index and end
positions in writeback_control.

Signed-off-by: Matthew Wilcox (Oracle)
[hch: heavily rebased, add helpers to get the tag and end index,
 don't keep the end index in struct writeback_control]
Signed-off-by: Christoph Hellwig
Reviewed-by: Brian Foster
Reviewed-by: Jan Kara
Acked-by: Dave Chinner
---
 include/linux/writeback.h |  6 ++++++
 mm/page-writeback.c       | 60 +++++++++++++++++++++++++--------------
 2 files changed, 44 insertions(+), 22 deletions(-)

diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 4b8cf9e4810bad..f67b3ea866a0fb 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -11,6 +11,7 @@
 #include <linux/flex_proportions.h>
 #include <linux/backing-dev-defs.h>
 #include <linux/blk_types.h>
+#include <linux/pagevec.h>
 
 struct bio;
 
@@ -40,6 +41,7 @@ enum writeback_sync_modes {
  * in a manner such that unspecified fields are set to zero.
  */
 struct writeback_control {
+       /* public fields that can be set and/or consumed by the caller: */
        long nr_to_write;               /* Write this many pages, and decrement
                                           this for each page written */
        long pages_skipped;             /* Pages which were not written */
@@ -77,6 +79,10 @@ struct writeback_control {
         */
        struct swap_iocb **swap_plug;
 
+       /* internal fields used by the ->writepages implementation: */
+       struct folio_batch fbatch;
+       pgoff_t index;
+
 #ifdef CONFIG_CGROUP_WRITEBACK
        struct bdi_writeback *wb;       /* wb this writeback is issued under */
        struct inode *inode;            /* inode being written out */
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 949193624baa38..23363ed712f646 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2392,6 +2392,29 @@ static bool folio_prepare_writeback(struct address_space *mapping,
        return true;
 }
 
+static xa_mark_t wbc_to_tag(struct writeback_control *wbc)
+{
+       if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
+               return PAGECACHE_TAG_TOWRITE;
+       return PAGECACHE_TAG_DIRTY;
+}
+
+static pgoff_t wbc_end(struct writeback_control *wbc)
+{
+       if (wbc->range_cyclic)
+               return -1;
+       return wbc->range_end >> PAGE_SHIFT;
+}
+
+static void writeback_get_batch(struct address_space *mapping,
+               struct writeback_control *wbc)
+{
+       folio_batch_release(&wbc->fbatch);
+       cond_resched();
+       filemap_get_folios_tag(mapping, &wbc->index, wbc_end(wbc),
+                       wbc_to_tag(wbc), &wbc->fbatch);
+}
+
 /**
  * write_cache_pages - walk the list of dirty pages of the given address space and write all of them.
  * @mapping: address space structure to write
@@ -2429,38 +2452,32 @@ int write_cache_pages(struct address_space *mapping,
 {
        int ret = 0;
        int error;
-       struct folio_batch fbatch;
        struct folio *folio;
-       int nr_folios;
-       pgoff_t index;
        pgoff_t end;            /* Inclusive */
-       xa_mark_t tag;
 
-       folio_batch_init(&fbatch);
        if (wbc->range_cyclic) {
-               index = mapping->writeback_index; /* prev offset */
+               wbc->index = mapping->writeback_index; /* prev offset */
                end = -1;
        } else {
-               index = wbc->range_start >> PAGE_SHIFT;
+               wbc->index = wbc->range_start >> PAGE_SHIFT;
                end = wbc->range_end >> PAGE_SHIFT;
        }
-       if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages) {
-               tag_pages_for_writeback(mapping, index, end);
-               tag = PAGECACHE_TAG_TOWRITE;
-       } else {
-               tag = PAGECACHE_TAG_DIRTY;
-       }
-       while (index <= end) {
+       if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
+               tag_pages_for_writeback(mapping, wbc->index, end);
+
+       folio_batch_init(&wbc->fbatch);
+
+       while (wbc->index <= end) {
                int i;
 
-               nr_folios = filemap_get_folios_tag(mapping, &index, end,
-                               tag, &fbatch);
+               writeback_get_batch(mapping, wbc);
 
-               if (nr_folios == 0)
+               if (wbc->fbatch.nr == 0)
                        break;
 
-               for (i = 0; i < nr_folios; i++) {
-                       folio = fbatch.folios[i];
+               for (i = 0; i < wbc->fbatch.nr; i++) {
+                       folio = wbc->fbatch.folios[i];
+
                        folio_lock(folio);
                        if (!folio_prepare_writeback(mapping, wbc, folio)) {
                                folio_unlock(folio);
@@ -2498,8 +2515,6 @@ int write_cache_pages(struct address_space *mapping,
                                goto done;
                        }
                }
-               folio_batch_release(&fbatch);
-               cond_resched();
        }
 
        /*
@@ -2512,12 +2527,13 @@ int write_cache_pages(struct address_space *mapping,
         * of the file if we are called again, which can only happen due to
         * -ENOMEM from the file system.
         */
+       folio_batch_release(&wbc->fbatch);
        if (wbc->range_cyclic)
                mapping->writeback_index = 0;
        return ret;
 
 done:
-       folio_batch_release(&fbatch);
+       folio_batch_release(&wbc->fbatch);
        if (wbc->range_cyclic)
                mapping->writeback_index = folio->index + folio_nr_pages(folio);
        return error;

From patchwork Sat Feb 3 07:11:42 2024
X-Patchwork-Id: 13543894
From: Christoph Hellwig
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Jan Kara, David Howells, Brian Foster, Christian Brauner, "Darrick J. Wong", linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara, Dave Chinner
Subject: [PATCH 08/13] writeback: Simplify the loops in write_cache_pages()
Date: Sat, 3 Feb 2024 08:11:42 +0100
Message-Id: <20240203071147.862076-9-hch@lst.de>

From: "Matthew Wilcox (Oracle)"

Collapse the two nested loops into one.  This is needed as a step
towards turning this into an iterator.

Note that this drops the "index <= end" check in the previous outer loop
and just relies on filemap_get_folios_tag() to return 0 entries when
index > end.  This actually has a subtle implication when end == -1,
because then the returned index will be -1 as well, and thus if there is
a page present at index -1 we could be looping indefinitely.  But as the
comment in filemap_get_folios_tag() documents this as already broken
anyway, we should not worry about it here either.  The fix for that would
probably be a change to the filemap_get_folios_tag() calling convention.
Signed-off-by: Matthew Wilcox (Oracle)
[hch: updated the commit log based on feedback from Jan Kara]
Signed-off-by: Christoph Hellwig
Reviewed-by: Brian Foster
Reviewed-by: Jan Kara
Acked-by: Dave Chinner
---
 mm/page-writeback.c | 75 ++++++++++++++++++++++-----------------------
 1 file changed, 36 insertions(+), 39 deletions(-)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 23363ed712f646..d7ab42def43035 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2454,6 +2454,7 @@ int write_cache_pages(struct address_space *mapping,
        int error;
        struct folio *folio;
        pgoff_t end;            /* Inclusive */
+       int i = 0;
 
        if (wbc->range_cyclic) {
                wbc->index = mapping->writeback_index; /* prev offset */
@@ -2467,53 +2468,49 @@ int write_cache_pages(struct address_space *mapping,
 
        folio_batch_init(&wbc->fbatch);
 
-       while (wbc->index <= end) {
-               int i;
-
-               writeback_get_batch(mapping, wbc);
-
+       for (;;) {
+               if (i == wbc->fbatch.nr) {
+                       writeback_get_batch(mapping, wbc);
+                       i = 0;
+               }
                if (wbc->fbatch.nr == 0)
                        break;
 
-               for (i = 0; i < wbc->fbatch.nr; i++) {
-                       folio = wbc->fbatch.folios[i];
+               folio = wbc->fbatch.folios[i++];
 
-                       folio_lock(folio);
-                       if (!folio_prepare_writeback(mapping, wbc, folio)) {
-                               folio_unlock(folio);
-                               continue;
-                       }
+               folio_lock(folio);
+               if (!folio_prepare_writeback(mapping, wbc, folio)) {
+                       folio_unlock(folio);
+                       continue;
+               }
 
-                       trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
+               trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
 
-                       error = writepage(folio, wbc, data);
-                       wbc->nr_to_write -= folio_nr_pages(folio);
+               error = writepage(folio, wbc, data);
+               wbc->nr_to_write -= folio_nr_pages(folio);
 
-                       if (error == AOP_WRITEPAGE_ACTIVATE) {
-                               folio_unlock(folio);
-                               error = 0;
-                       }
+               if (error == AOP_WRITEPAGE_ACTIVATE) {
+                       folio_unlock(folio);
+                       error = 0;
+               }
 
-                       /*
-                        * For integrity writeback we have to keep going until
-                        * we have written all the folios we tagged for
-                        * writeback above, even if we run past wbc->nr_to_write
-                        * or encounter errors.
-                        * We stash away the first error we encounter in
-                        * wbc->saved_err so that it can be retrieved when we're
-                        * done. This is because the file system may still have
-                        * state to clear for each folio.
-                        *
-                        * For background writeback we exit as soon as we run
-                        * past wbc->nr_to_write or encounter the first error.
-                        */
-                       if (wbc->sync_mode == WB_SYNC_ALL) {
-                               if (error && !ret)
-                                       ret = error;
-                       } else {
-                               if (error || wbc->nr_to_write <= 0)
-                                       goto done;
-                       }
+               /*
+                * For integrity writeback we have to keep going until we have
+                * written all the folios we tagged for writeback above, even if
+                * we run past wbc->nr_to_write or encounter errors.
+                * We stash away the first error we encounter in wbc->saved_err
+                * so that it can be retrieved when we're done. This is because
+                * the file system may still have state to clear for each folio.
+                *
+                * For background writeback we exit as soon as we run past
+                * wbc->nr_to_write or encounter the first error.
+                */
+               if (wbc->sync_mode == WB_SYNC_ALL) {
+                       if (error && !ret)
+                               ret = error;
+               } else {
+                       if (error || wbc->nr_to_write <= 0)
+                               goto done;
                }
        }
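
The shape of this transformation is easy to see in miniature.  Below is a
self-contained user-space sketch (all names are made up, and the data source
is a stand-in for the tagged page-cache lookup): a nested batch/item loop
becomes a single loop that refills its batch on demand, which is the form an
iterator can later be carved out of.

#include <stdio.h>

#define BATCH_SIZE 4

/* Hypothetical data source: hands out items 0..total-1 in batches. */
static int fill_batch(int *batch, int *next, int total)
{
        int n = 0;

        while (n < BATCH_SIZE && *next < total)
                batch[n++] = (*next)++;
        return n;
}

int main(void)
{
        int batch[BATCH_SIZE], next = 0, nr = 0, i = 0;

        /* Single loop with on-demand refill, mirroring the for (;;) form. */
        for (;;) {
                if (i == nr) {          /* batch exhausted: refill it */
                        nr = fill_batch(batch, &next, 10);
                        i = 0;
                }
                if (nr == 0)            /* nothing left to process */
                        break;
                printf("processing item %d\n", batch[i++]);
        }
        return 0;
}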
Wong" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara , Dave Chinner Subject: [PATCH 09/13] pagevec: Add ability to iterate a queue Date: Sat, 3 Feb 2024 08:11:43 +0100 Message-Id: <20240203071147.862076-10-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240203071147.862076-1-hch@lst.de> References: <20240203071147.862076-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html From: "Matthew Wilcox (Oracle)" Add a loop counter inside the folio_batch to let us iterate from 0-nr instead of decrementing nr and treating the batch as a stack. It would generate some very weird and suboptimal I/O patterns for page writeback to iterate over the batch as a stack. Signed-off-by: Matthew Wilcox (Oracle) Signed-off-by: Christoph Hellwig Reviewed-by: Brian Foster Reviewed-by: Jan Kara Acked-by: Dave Chinner --- include/linux/pagevec.h | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h index 87cc678adc850b..fcc06c300a72c3 100644 --- a/include/linux/pagevec.h +++ b/include/linux/pagevec.h @@ -27,6 +27,7 @@ struct folio; */ struct folio_batch { unsigned char nr; + unsigned char i; bool percpu_pvec_drained; struct folio *folios[PAGEVEC_SIZE]; }; @@ -40,12 +41,14 @@ struct folio_batch { static inline void folio_batch_init(struct folio_batch *fbatch) { fbatch->nr = 0; + fbatch->i = 0; fbatch->percpu_pvec_drained = false; } static inline void folio_batch_reinit(struct folio_batch *fbatch) { fbatch->nr = 0; + fbatch->i = 0; } static inline unsigned int folio_batch_count(struct folio_batch *fbatch) @@ -75,6 +78,21 @@ static inline unsigned folio_batch_add(struct folio_batch *fbatch, return folio_batch_space(fbatch); } +/** + * folio_batch_next - Return the next folio to process. + * @fbatch: The folio batch being processed. + * + * Use this function to implement a queue of folios. + * + * Return: The next folio in the queue, or NULL if the queue is empty. 
+ */
+static inline struct folio *folio_batch_next(struct folio_batch *fbatch)
+{
+       if (fbatch->i == fbatch->nr)
+               return NULL;
+       return fbatch->folios[fbatch->i++];
+}
+
 void __folio_batch_release(struct folio_batch *pvec);
 
 static inline void folio_batch_release(struct folio_batch *fbatch)
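
To illustrate the queue-versus-stack point in the commit message, here is a
small self-contained comparison in plain C.  The struct and helpers are
hypothetical stand-ins for folio_batch, folio_batch_next() and a stack-style
pop; the point is only that the new cursor preserves lookup order while
stack-style consumption reverses it.

#include <stdio.h>

#define BATCH_SIZE 4

struct batch_model {
        unsigned char nr;
        unsigned char i;        /* the new queue cursor */
        int items[BATCH_SIZE];
};

/* Queue-style iteration: 0 .. nr-1, preserving the lookup order. */
static int batch_next(struct batch_model *b)
{
        return b->i == b->nr ? -1 : b->items[b->i++];
}

/* Stack-style iteration: consumes from the tail, reversing the order. */
static int batch_pop(struct batch_model *b)
{
        return b->nr == 0 ? -1 : b->items[--b->nr];
}

int main(void)
{
        struct batch_model q = { .nr = 4, .items = { 10, 11, 12, 13 } };
        struct batch_model s = q;
        int v;

        printf("queue order:");
        while ((v = batch_next(&q)) >= 0)
                printf(" %d", v);
        printf("\nstack order:");
        while ((v = batch_pop(&s)) >= 0)
                printf(" %d", v);
        printf("\n");
        return 0;
}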
Wong" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara , Dave Chinner Subject: [PATCH 10/13] writeback: Use the folio_batch queue iterator Date: Sat, 3 Feb 2024 08:11:44 +0100 Message-Id: <20240203071147.862076-11-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240203071147.862076-1-hch@lst.de> References: <20240203071147.862076-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html From: "Matthew Wilcox (Oracle)" Instead of keeping our own local iterator variable, use the one just added to folio_batch. Signed-off-by: Matthew Wilcox (Oracle) Signed-off-by: Christoph Hellwig Reviewed-by: Brian Foster Reviewed-by: Jan Kara Acked-by: Dave Chinner --- mm/page-writeback.c | 28 +++++++++++++++------------- 1 file changed, 15 insertions(+), 13 deletions(-) diff --git a/mm/page-writeback.c b/mm/page-writeback.c index d7ab42def43035..095ba4db9dcc17 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2406,13 +2406,21 @@ static pgoff_t wbc_end(struct writeback_control *wbc) return wbc->range_end >> PAGE_SHIFT; } -static void writeback_get_batch(struct address_space *mapping, +static struct folio *writeback_get_folio(struct address_space *mapping, struct writeback_control *wbc) { - folio_batch_release(&wbc->fbatch); - cond_resched(); - filemap_get_folios_tag(mapping, &wbc->index, wbc_end(wbc), - wbc_to_tag(wbc), &wbc->fbatch); + struct folio *folio; + + folio = folio_batch_next(&wbc->fbatch); + if (!folio) { + folio_batch_release(&wbc->fbatch); + cond_resched(); + filemap_get_folios_tag(mapping, &wbc->index, wbc_end(wbc), + wbc_to_tag(wbc), &wbc->fbatch); + folio = folio_batch_next(&wbc->fbatch); + } + + return folio; } /** @@ -2454,7 +2462,6 @@ int write_cache_pages(struct address_space *mapping, int error; struct folio *folio; pgoff_t end; /* Inclusive */ - int i = 0; if (wbc->range_cyclic) { wbc->index = mapping->writeback_index; /* prev offset */ @@ -2469,15 +2476,10 @@ int write_cache_pages(struct address_space *mapping, folio_batch_init(&wbc->fbatch); for (;;) { - if (i == wbc->fbatch.nr) { - writeback_get_batch(mapping, wbc); - i = 0; - } - if (wbc->fbatch.nr == 0) + folio = writeback_get_folio(mapping, wbc); + if (!folio) break; - folio = wbc->fbatch.folios[i++]; - folio_lock(folio); if (!folio_prepare_writeback(mapping, wbc, folio)) { folio_unlock(folio); From patchwork Sat Feb 3 07:11:45 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13543924 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 41C345674D; Sat, 3 Feb 2024 07:12:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.137.202.133 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706944375; cv=none; b=WgDO1cNXuU0mWvLO2zjZ7Y2Nc2abdsCUgXs61hCE0AQBcbSxaSMXOyUnt1G8asDqpSSiZDfJCOLkL6z+ZvXSokbvFc4hJ3EynG1PCiWST9hQnkT3cgkzbPb9rSUzYyCRIe0OwolJy0MXqdlphyRc1EzuC6phxnUXpdoM9FvzacA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706944375; c=relaxed/simple; 
bh=BEzYiGhNCPidNvLvQLhLK2U2cZcI03nhea82PoTH56s=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=qwTrdoNCgKMGcfy+qax7TpF0QaTvX8IW9Mm+MuGyEd1oNJAhUJwzyi6UaDOwiMqRYBjr0JZoNBe9QougMhh4v7NJSTGNoW8uLRh0CYDXN5HwOSG3OBzJsERNAJzMsbihP4/oaaVPsZVxZWyBDY4RlSU74ioZzQxFIkaoo0VGTUk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de; spf=none smtp.mailfrom=bombadil.srs.infradead.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b=vH0hdIAp; arc=none smtp.client-ip=198.137.202.133 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=bombadil.srs.infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="vH0hdIAp" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=AnG5GvoJfhWPvqWiitfZqai2sD2FuEVxQzhAIrm6KaA=; b=vH0hdIAp6gB22asQhHw8fV4fL7 t7SMPnQ6UOZ06eB6iUc6QRiRN2CHY9Hoa3/Qt6lKZZW+1cVc05CzNKKIfNtKfWXE9n/0DIqp9pM41 tUdkoZyp1A/VSO+XZuNI8/iD1snj2iILh49fiFClyMMSrDYvZwMScjIzwLgLCAtb95yYimLLC6kZs 3uYxfZ9w1waMc6giSV8N33kIOCeAyPZBqp55YBUwZiocSMqmbEkiaPrGEeU7UtVNwsyGwiGTQe5eA t3RXkIGbnjSxOrBklgbZ7FsjwDnYiDV06ODpVc/+M3iEChRzlwXXkIMS44L9k65uCSyfj+bQtgnTP OJ4GTRbA==; Received: from [89.144.222.32] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux)) id 1rWACn-0000000FkIF-1VWY; Sat, 03 Feb 2024 07:12:50 +0000 From: Christoph Hellwig To: linux-mm@kvack.org Cc: Matthew Wilcox , Jan Kara , David Howells , Brian Foster , Christian Brauner , "Darrick J. Wong" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Jan Kara , Dave Chinner Subject: [PATCH 11/13] writeback: Move the folio_prepare_writeback loop out of write_cache_pages() Date: Sat, 3 Feb 2024 08:11:45 +0100 Message-Id: <20240203071147.862076-12-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240203071147.862076-1-hch@lst.de> References: <20240203071147.862076-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html From: "Matthew Wilcox (Oracle)" Move the loop for should-we-write-this-folio to writeback_get_folio. 
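[Editor's note: the should-we-write-this-folio checks referred to above live in folio_prepare_writeback(), a helper factored out earlier in this series and not shown in this excerpt. The sketch below approximates what such a helper has to verify: skip folios that were truncated or already cleaned, skip or wait on folios still under writeback depending on the sync mode, and finally claim the folio for I/O. Treat it as an illustration of those checks under those assumptions, not the exact code from the earlier patch.]

	static bool folio_prepare_writeback(struct address_space *mapping,
			struct writeback_control *wbc, struct folio *folio)
	{
		/* The folio was truncated or invalidated under us: nothing to do. */
		if (unlikely(folio->mapping != mapping))
			return false;

		/* Somebody else already wrote the folio back for us. */
		if (!folio_test_dirty(folio))
			return false;

		if (folio_test_writeback(folio)) {
			/* Memory-cleaning writeback can simply skip it ... */
			if (wbc->sync_mode == WB_SYNC_NONE)
				return false;
			/* ... but data-integrity writeback has to wait for it. */
			folio_wait_writeback(folio);
		}

		/* Raced with someone who cleaned the folio in the meantime. */
		if (!folio_clear_dirty_for_io(folio))
			return false;

		return true;
	}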
Signed-off-by: Matthew Wilcox (Oracle) [hch: folded the loop into the existing helper instead of a separate one as suggested by Jan Kara] Signed-off-by: Christoph Hellwig Reviewed-by: Brian Foster Reviewed-by: Jan Kara Acked-by: Dave Chinner --- mm/page-writeback.c | 18 ++++++++++-------- 1 file changed, 10 insertions(+), 8 deletions(-) diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 095ba4db9dcc17..3abb053e70580e 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2411,6 +2411,7 @@ static struct folio *writeback_get_folio(struct address_space *mapping, { struct folio *folio; +retry: folio = folio_batch_next(&wbc->fbatch); if (!folio) { folio_batch_release(&wbc->fbatch); @@ -2418,8 +2419,17 @@ static struct folio *writeback_get_folio(struct address_space *mapping, filemap_get_folios_tag(mapping, &wbc->index, wbc_end(wbc), wbc_to_tag(wbc), &wbc->fbatch); folio = folio_batch_next(&wbc->fbatch); + if (!folio) + return NULL; + } + + folio_lock(folio); + if (unlikely(!folio_prepare_writeback(mapping, wbc, folio))) { + folio_unlock(folio); + goto retry; } + trace_wbc_writepage(wbc, inode_to_bdi(mapping->host)); return folio; } @@ -2480,14 +2490,6 @@ int write_cache_pages(struct address_space *mapping, if (!folio) break; - folio_lock(folio); - if (!folio_prepare_writeback(mapping, wbc, folio)) { - folio_unlock(folio); - continue; - } - - trace_wbc_writepage(wbc, inode_to_bdi(mapping->host)); - error = writepage(folio, wbc, data); wbc->nr_to_write -= folio_nr_pages(folio); From patchwork Sat Feb 3 07:11:46 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13543925 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F252D5677A; Sat, 3 Feb 2024 07:12:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.137.202.133 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706944378; cv=none; b=h92DZM9xyTDYt/QJHscNk3jBkskR0jvV/OPiYqGjzXH2+1ElcZjY890Hrpz74B1IAmojaAOld8F9lTEPsoa6Y/Nhh/hb9dlK/4dL790qqhTRQh7WD4eKaWbWwQwIYSplmeW+l00p2+jQ0b2M7P/F12hKN45Bst8041C4uLXAxwc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706944378; c=relaxed/simple; bh=m4giqPV84816CByeyXTh7r2sIblFSun4/FcDAG8GeJM=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=UPrJk4nMCYVctVv2+QA0ahm2+YFmr+A6TSL7JAsPwKIyvpss1Nrya2YmhaIcD4SUi/4jyLy2d0UyoCDeNZdDCrXV88W66y5hBnZESD6j6/AR5VNdsXF/sl441tbB8n37ZKUBuNT/GtBw6Rxk/RYBHb9l9cckmARFhvRqhDkKq4E= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de; spf=none smtp.mailfrom=bombadil.srs.infradead.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b=0fuITZBJ; arc=none smtp.client-ip=198.137.202.133 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=bombadil.srs.infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="0fuITZBJ" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: 
MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=GezxIWW+q+u2zC46RjSY4N2DxF7mdi1d7/13SlWu4e0=; b=0fuITZBJrufOpvkZIZVTWLsXDG Tb1yRAHTRSZM10wbA9mMQmPVY7Dc/QPRmF7qOsXh9cUUf9u8BZ4kCuh0PsgskRbJvPn1EXAWIHPo/ 2lUsRZffOvsZzX/H8/b9ym45gzAp3Xs0w0hhnCArnihPgDVLNKiYo/O+eRsH5bw/DH4TgV9psc3II Ri+1RmASwfnAIZaowi8VPt9u+QXKPsK7mH9qI+9Bnz/jojbY9mOU/Z1FWHmh4z/tvtaZZmu7dVmrK mTYhghZYwghLpblIlUFbA1bxNC9ycEhtE3cNU/inSmeuTiqmjjbnkEOFTaPl0XNmjuDmFeuE1ihUx NqEc+Dew==; Received: from [89.144.222.32] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux)) id 1rWACv-0000000FkMz-41QH; Sat, 03 Feb 2024 07:12:59 +0000 From: Christoph Hellwig To: linux-mm@kvack.org Cc: Matthew Wilcox , Jan Kara , David Howells , Brian Foster , Christian Brauner , "Darrick J. Wong" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 12/13] writeback: add a writeback iterator Date: Sat, 3 Feb 2024 08:11:46 +0100 Message-Id: <20240203071147.862076-13-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240203071147.862076-1-hch@lst.de> References: <20240203071147.862076-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html
Refactor the code left in write_cache_pages into an iterator that the file system can call to get the next folio for a writeback operation:

	struct folio *folio = NULL;

	while ((folio = writeback_iter(mapping, wbc, folio, &error))) {
		error = <do per-folio writeback>;
	}

The twist here is that the error value is passed by reference, so that the iterator can restore it when breaking out of the loop.

Handling of the magic AOP_WRITEPAGE_ACTIVATE value stays outside the iterator and is just kept in the write_cache_pages legacy wrapper, in preparation for eventually killing it off.

Heavily based on a for_each* based iterator from Matthew Wilcox.
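[Editor's note: to make the calling convention above concrete, a ->writepages implementation built on the new iterator could look roughly like the sketch below. example_writepages() and example_write_folio() are hypothetical names used only for illustration; the per-folio writer is assumed to unlock the folio and submit the I/O, as ->writepage callbacks do.]

	static int example_writepages(struct address_space *mapping,
			struct writeback_control *wbc)
	{
		struct folio *folio = NULL;
		int error = 0;

		while ((folio = writeback_iter(mapping, wbc, folio, &error))) {
			/*
			 * The iterator returns a locked folio that has already
			 * been prepared for writeback; write it out and pass
			 * the result back through @error so the iterator can
			 * act on it.
			 */
			error = example_write_folio(folio, wbc);
		}

		/*
		 * When the loop terminates, writeback_iter() has restored the
		 * first error encountered (if any) into @error, so it can be
		 * returned directly.
		 */
		return error;
	}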
Signed-off-by: Christoph Hellwig Reviewed-by: Jan Kara Reviewed-by: Brian Foster --- include/linux/writeback.h | 4 + mm/page-writeback.c | 192 ++++++++++++++++++++++---------------- 2 files changed, 118 insertions(+), 78 deletions(-) diff --git a/include/linux/writeback.h b/include/linux/writeback.h index f67b3ea866a0fb..9845cb62e40b2d 100644 --- a/include/linux/writeback.h +++ b/include/linux/writeback.h @@ -82,6 +82,7 @@ struct writeback_control { /* internal fields used by the ->writepages implementation: */ struct folio_batch fbatch; pgoff_t index; + int saved_err; #ifdef CONFIG_CGROUP_WRITEBACK struct bdi_writeback *wb; /* wb this writeback is issued under */ @@ -366,6 +367,9 @@ int balance_dirty_pages_ratelimited_flags(struct address_space *mapping, bool wb_over_bg_thresh(struct bdi_writeback *wb); +struct folio *writeback_iter(struct address_space *mapping, + struct writeback_control *wbc, struct folio *folio, int *error); + typedef int (*writepage_t)(struct folio *folio, struct writeback_control *wbc, void *data); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 3abb053e70580e..5fe4cdb7dbd61a 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2325,18 +2325,18 @@ void __init page_writeback_init(void) } /** - * tag_pages_for_writeback - tag pages to be written by write_cache_pages + * tag_pages_for_writeback - tag pages to be written by writeback * @mapping: address space structure to write * @start: starting page index * @end: ending page index (inclusive) * * This function scans the page range from @start to @end (inclusive) and tags - * all pages that have DIRTY tag set with a special TOWRITE tag. The idea is - * that write_cache_pages (or whoever calls this function) will then use - * TOWRITE tag to identify pages eligible for writeback. This mechanism is - * used to avoid livelocking of writeback by a process steadily creating new - * dirty pages in the file (thus it is important for this function to be quick - * so that it can tag pages faster than a dirtying process can create them). + * all pages that have DIRTY tag set with a special TOWRITE tag. The caller + * can then use the TOWRITE tag to identify pages eligible for writeback. + * This mechanism is used to avoid livelocking of writeback by a process + * steadily creating new dirty pages in the file (thus it is important for this + * function to be quick so that it can tag pages faster than a dirtying process + * can create them). */ void tag_pages_for_writeback(struct address_space *mapping, pgoff_t start, pgoff_t end) @@ -2434,69 +2434,68 @@ static struct folio *writeback_get_folio(struct address_space *mapping, } /** - * write_cache_pages - walk the list of dirty pages of the given address space and write all of them. + * writeback_iter - iterate folio of a mapping for writeback * @mapping: address space structure to write - * @wbc: subtract the number of written pages from *@wbc->nr_to_write - * @writepage: function called for each page - * @data: data passed to writepage function + * @wbc: writeback context + * @folio: previously iterated folio (%NULL to start) + * @error: in-out pointer for writeback errors (see below) * - * If a page is already under I/O, write_cache_pages() skips it, even - * if it's dirty. This is desirable behaviour for memory-cleaning writeback, - * but it is INCORRECT for data-integrity system calls such as fsync(). fsync() - * and msync() need to guarantee that all the data which was dirty at the time - * the call was made get new I/O started against them. 
If wbc->sync_mode is - * WB_SYNC_ALL then we were called for data integrity and we must wait for - * existing IO to complete. - * - * To avoid livelocks (when other process dirties new pages), we first tag - * pages which should be written back with TOWRITE tag and only then start - * writing them. For data-integrity sync we have to be careful so that we do - * not miss some pages (e.g., because some other process has cleared TOWRITE - * tag we set). The rule we follow is that TOWRITE tag can be cleared only - * by the process clearing the DIRTY tag (and submitting the page for IO). - * - * To avoid deadlocks between range_cyclic writeback and callers that hold - * pages in PageWriteback to aggregate IO until write_cache_pages() returns, - * we do not loop back to the start of the file. Doing so causes a page - * lock/page writeback access order inversion - we should only ever lock - * multiple pages in ascending page->index order, and looping back to the start - * of the file violates that rule and causes deadlocks. + * This function returns the next folio for the writeback operation described by + * @wbc on @mapping and should be called in a while loop in the ->writepages + * implementation. * - * Return: %0 on success, negative error code otherwise + * To start the writeback operation, %NULL is passed in the @folio argument, and + * for every subsequent iteration the folio returned previously should be passed + * back in. + * + * If there was an error in the per-folio writeback inside the writeback_iter() + * loop, @error should be set to the error value. + * + * Once the writeback described in @wbc has finished, this function will return + * %NULL and if there was an error in any iteration restore it to @error. + * + * Note: callers should not manually break out of the loop using break or goto + * but must keep calling writeback_iter() until it returns %NULL. + * + * Return: the folio to write or %NULL if the loop is done. */ -int write_cache_pages(struct address_space *mapping, - struct writeback_control *wbc, writepage_t writepage, - void *data) +struct folio *writeback_iter(struct address_space *mapping, + struct writeback_control *wbc, struct folio *folio, int *error) { - int ret = 0; - int error; - struct folio *folio; - pgoff_t end; /* Inclusive */ - - if (wbc->range_cyclic) { - wbc->index = mapping->writeback_index; /* prev offset */ - end = -1; - } else { - wbc->index = wbc->range_start >> PAGE_SHIFT; - end = wbc->range_end >> PAGE_SHIFT; - } - if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages) - tag_pages_for_writeback(mapping, wbc->index, end); - - folio_batch_init(&wbc->fbatch); + if (!folio) { + folio_batch_init(&wbc->fbatch); + wbc->saved_err = *error = 0; - for (;;) { - folio = writeback_get_folio(mapping, wbc); - if (!folio) - break; + /* + * For range cyclic writeback we remember where we stopped so + * that we can continue where we stopped. + * + * For non-cyclic writeback we always start at the beginning of + * the passed in range. + */ + if (wbc->range_cyclic) + wbc->index = mapping->writeback_index; + else + wbc->index = wbc->range_start >> PAGE_SHIFT; - error = writepage(folio, wbc, data); + /* + * To avoid livelocks when other processes dirty new pages, we + * first tag pages which should be written back and only then + * start writing them. + * + * For data-integrity writeback we have to be careful so that we + * do not miss some pages (e.g., because some other process has + * cleared the TOWRITE tag we set). 
The rule we follow is that + * TOWRITE tag can be cleared only by the process clearing the + * DIRTY tag (and submitting the page for I/O). + */ + if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages) + tag_pages_for_writeback(mapping, wbc->index, + wbc_end(wbc)); + } else { wbc->nr_to_write -= folio_nr_pages(folio); - if (error == AOP_WRITEPAGE_ACTIVATE) { - folio_unlock(folio); - error = 0; - } + WARN_ON_ONCE(*error > 0); /* * For integrity writeback we have to keep going until we have @@ -2510,33 +2509,70 @@ int write_cache_pages(struct address_space *mapping, * wbc->nr_to_write or encounter the first error. */ if (wbc->sync_mode == WB_SYNC_ALL) { - if (error && !ret) - ret = error; + if (*error && !wbc->saved_err) + wbc->saved_err = *error; } else { - if (error || wbc->nr_to_write <= 0) + if (*error || wbc->nr_to_write <= 0) goto done; } } - /* - * For range cyclic writeback we need to remember where we stopped so - * that we can continue there next time we are called. If we hit the - * last page and there is more work to be done, wrap back to the start - * of the file. - * - * For non-cyclic writeback we always start looking up at the beginning - * of the file if we are called again, which can only happen due to - * -ENOMEM from the file system. - */ - folio_batch_release(&wbc->fbatch); - if (wbc->range_cyclic) - mapping->writeback_index = 0; - return ret; + folio = writeback_get_folio(mapping, wbc); + if (!folio) { + /* + * To avoid deadlocks between range_cyclic writeback and callers + * that hold pages in PageWriteback to aggregate I/O until + * the writeback iteration finishes, we do not loop back to the + * start of the file. Doing so causes a page lock/page + * writeback access order inversion - we should only ever lock + * multiple pages in ascending page->index order, and looping + * back to the start of the file violates that rule and causes + * deadlocks. + */ + if (wbc->range_cyclic) + mapping->writeback_index = 0; + + /* + * Return the first error we encountered (if there was any) to + * the caller. + */ + *error = wbc->saved_err; + } + return folio; done: folio_batch_release(&wbc->fbatch); if (wbc->range_cyclic) mapping->writeback_index = folio->index + folio_nr_pages(folio); + return NULL; +} + +/** + * write_cache_pages - walk the list of dirty pages of the given address space and write all of them. + * @mapping: address space structure to write + * @wbc: subtract the number of written pages from *@wbc->nr_to_write + * @writepage: function called for each page + * @data: data passed to writepage function + * + * Return: %0 on success, negative error code otherwise + * + * Note: please use writeback_iter() instead. 
+ */ +int write_cache_pages(struct address_space *mapping, + struct writeback_control *wbc, writepage_t writepage, + void *data) +{ + struct folio *folio = NULL; + int error; + + while ((folio = writeback_iter(mapping, wbc, folio, &error))) { + error = writepage(folio, wbc, data); + if (error == AOP_WRITEPAGE_ACTIVATE) { + folio_unlock(folio); + error = 0; + } + } + return error; } EXPORT_SYMBOL(write_cache_pages); From patchwork Sat Feb 3 07:11:47 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 13543926 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F12DB57312; Sat, 3 Feb 2024 07:13:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.137.202.133 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706944383; cv=none; b=r5Oyem+7ZgE1WWROLkNvCGkgY4dZucBYmiRCsY4LIa5QoHf5zW/Y8wWEPAF59CKTlyTx80goUK7AaM7pQ8R/Ct/SvwZIZPQtwSUy8BkdKJ/hcsg5hO2c+LWq9+DyTD3snoB1Gw35yKgs50Ywaw5l+JmdxpAeaFU9dD8y+p33jRA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706944383; c=relaxed/simple; bh=tYS0fAOiFIhd5iBGGom3yizrN/cdEhJ6ynoZIQ4dwCQ=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=GszER4YpHFfc30E+kaH4bydwNtrmHGRKiNv8irYRfiAWJMamf/4gqbgu7HuaIZAcPqqdvk8seGpCXf9sMJGGFvG2I5oxyL5XGmcv5oxotL+zwbpdZyMPK5CUTu/nra9sgmMyXmtlCFq/3AqlT4SRV3Z4G+lC0JVaHqr4RYYbtYA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de; spf=none smtp.mailfrom=bombadil.srs.infradead.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b=FDgMugVo; arc=none smtp.client-ip=198.137.202.133 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=lst.de Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=bombadil.srs.infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="FDgMugVo" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=aBLx8GrPTTpnrFrYwat6koWXvCPglp7/mI2LOOfVaJQ=; b=FDgMugVoB7MSUNHernp01Fwg/x MO+pR9JFMoCL9rwCqYotraUCpxDX06NW4fyeZ6vX5JPS0X8tpbgo7moefxhXVI67QAr0Xxmzp5E37 psFmoZYeys27ykwrnviTUvz/SmbokAZR4N9bZEyFSFJOJlxuJ2QZbK1SuEFbb8Vdsk7p/XGeR7TL8 +XGOB5d0OGem8uOS4FN4sM1iYvLyTDpZ+wlMffmwZiv2yPxszrIZH53rlSPctCCQy5qXCcVdpqrYW 3kZ1BMlFOw32O9rX7U/qB7G4p6dLZExzOJuw2VLdjcwXSgL2j1i6HzWGmXtoS3nBfwUSXDi/FI0fc ndPdGBiw==; Received: from [89.144.222.32] (helo=localhost) by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux)) id 1rWACv-0000000FkMz-41QH; Sat, 03 Feb 2024 07:12:59 +0000 From: Christoph Hellwig To: linux-mm@kvack.org Cc: Matthew Wilcox , Jan Kara , David Howells , Brian Foster , Christian Brauner , "Darrick J. 
Wong" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 13/13] writeback: Remove a use of write_cache_pages() from do_writepages() Date: Sat, 3 Feb 2024 08:11:47 +0100 Message-Id: <20240203071147.862076-14-hch@lst.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240203071147.862076-1-hch@lst.de> References: <20240203071147.862076-1-hch@lst.de> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html From: "Matthew Wilcox (Oracle)" Use the new writeback_iter() directly instead of indirecting through a callback. Signed-off-by: Matthew Wilcox (Oracle) [hch: ported to the while based iter style] Signed-off-by: Christoph Hellwig Reviewed-by: Jan Kara Reviewed-by: Brian Foster --- mm/page-writeback.c | 31 +++++++++++++++++++------------ 1 file changed, 19 insertions(+), 12 deletions(-) diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 5fe4cdb7dbd61a..53ff2d8219ddb6 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2577,13 +2577,25 @@ int write_cache_pages(struct address_space *mapping, } EXPORT_SYMBOL(write_cache_pages); -static int writepage_cb(struct folio *folio, struct writeback_control *wbc, - void *data) +static int writeback_use_writepage(struct address_space *mapping, + struct writeback_control *wbc) { - struct address_space *mapping = data; - int ret = mapping->a_ops->writepage(&folio->page, wbc); - mapping_set_error(mapping, ret); - return ret; + struct folio *folio = NULL; + struct blk_plug plug; + int err; + + blk_start_plug(&plug); + while ((folio = writeback_iter(mapping, wbc, folio, &err))) { + err = mapping->a_ops->writepage(&folio->page, wbc); + mapping_set_error(mapping, err); + if (err == AOP_WRITEPAGE_ACTIVATE) { + folio_unlock(folio); + err = 0; + } + } + blk_finish_plug(&plug); + + return err; } int do_writepages(struct address_space *mapping, struct writeback_control *wbc) @@ -2599,12 +2611,7 @@ int do_writepages(struct address_space *mapping, struct writeback_control *wbc) if (mapping->a_ops->writepages) { ret = mapping->a_ops->writepages(mapping, wbc); } else if (mapping->a_ops->writepage) { - struct blk_plug plug; - - blk_start_plug(&plug); - ret = write_cache_pages(mapping, wbc, writepage_cb, - mapping); - blk_finish_plug(&plug); + ret = writeback_use_writepage(mapping, wbc); } else { /* deal with chardevs and other special files */ ret = 0;