From patchwork Mon Jun 12 21:01:28 2023
X-Patchwork-Id: 13277341
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", cluster-devel@redhat.com,
 Hannes Reinecke, Luis Chamberlain, Andrew Morton,
 Andreas Gruenbacher, Bob Peterson
Subject: [PATCH v3 01/14] gfs2: Use a folio inside gfs2_jdata_writepage()
Date: Mon, 12 Jun 2023 22:01:28 +0100
Message-Id: <20230612210141.730128-2-willy@infradead.org>
In-Reply-To: <20230612210141.730128-1-willy@infradead.org>
References: <20230612210141.730128-1-willy@infradead.org>

Replace a few implicit calls to compound_head() with one explicit one.

Signed-off-by: Matthew Wilcox (Oracle)
Tested-by: Bob Peterson
Reviewed-by: Bob Peterson
---
 fs/gfs2/aops.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index a5f4be6b9213..0518861df783 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -150,20 +150,21 @@ static int __gfs2_jdata_writepage(struct page *page, struct writeback_control *w
 static int gfs2_jdata_writepage(struct page *page, struct writeback_control *wbc)
 {
+	struct folio *folio = page_folio(page);
 	struct inode *inode = page->mapping->host;
 	struct gfs2_inode *ip = GFS2_I(inode);
 	struct gfs2_sbd *sdp = GFS2_SB(inode);
 
 	if (gfs2_assert_withdraw(sdp, gfs2_glock_is_held_excl(ip->i_gl)))
 		goto out;
-	if (PageChecked(page) || current->journal_info)
+	if (folio_test_checked(folio) || current->journal_info)
 		goto out_ignore;
-	return __gfs2_jdata_writepage(page, wbc);
+	return __gfs2_jdata_writepage(&folio->page, wbc);
 
 out_ignore:
-	redirty_page_for_writepage(wbc, page);
+	folio_redirty_for_writepage(wbc, folio);
 out:
-	unlock_page(page);
+	folio_unlock(folio);
 	return 0;
 }
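
An aside on why one explicit conversion beats several implicit ones (a
sketch, not part of the series; the function name is invented): page-based
helpers such as unlock_page() and redirty_page_for_writepage() each resolve
the folio from the page internally, so calling several of them repeats the
lookup that page_folio() performs once.

    /* Sketch of the conversion pattern used throughout this series. */
    static int example_writepage(struct page *page,
                                 struct writeback_control *wbc)
    {
            struct folio *folio = page_folio(page);  /* one explicit lookup */

            folio_redirty_for_writepage(wbc, folio); /* no hidden lookup */
            folio_unlock(folio);                     /* ditto */
            return 0;
    }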
From patchwork Mon Jun 12 21:01:29 2023
X-Patchwork-Id: 13277333
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", cluster-devel@redhat.com,
 Hannes Reinecke, Luis Chamberlain, Andrew Morton,
 Andreas Gruenbacher, Bob Peterson
Subject: [PATCH v3 02/14] gfs2: Pass a folio to __gfs2_jdata_write_folio()
Date: Mon, 12 Jun 2023 22:01:29 +0100
Message-Id: <20230612210141.730128-3-willy@infradead.org>
In-Reply-To: <20230612210141.730128-1-willy@infradead.org>
References: <20230612210141.730128-1-willy@infradead.org>

Remove a couple of folio->page conversions in the callers, and two
calls to compound_head() in the function itself.  Rename it from
__gfs2_jdata_writepage() to __gfs2_jdata_write_folio().

Signed-off-by: Matthew Wilcox (Oracle)
Tested-by: Bob Peterson
Reviewed-by: Bob Peterson
---
 fs/gfs2/aops.c | 31 ++++++++++++++++---------------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index 0518861df783..749135252d52 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -113,30 +113,31 @@ static int gfs2_write_jdata_page(struct page *page,
 }
 
 /**
- * __gfs2_jdata_writepage - The core of jdata writepage
- * @page: The page to write
+ * __gfs2_jdata_write_folio - The core of jdata writepage
+ * @folio: The folio to write
  * @wbc: The writeback control
  *
  * This is shared between writepage and writepages and implements the
  * core of the writepage operation. If a transaction is required then
- * PageChecked will have been set and the transaction will have
+ * the checked flag will have been set and the transaction will have
  * already been started before this is called.
  */
-
-static int __gfs2_jdata_writepage(struct page *page, struct writeback_control *wbc)
+static int __gfs2_jdata_write_folio(struct folio *folio,
+		struct writeback_control *wbc)
 {
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = folio->mapping->host;
 	struct gfs2_inode *ip = GFS2_I(inode);
 
-	if (PageChecked(page)) {
-		ClearPageChecked(page);
-		if (!page_has_buffers(page)) {
-			create_empty_buffers(page, inode->i_sb->s_blocksize,
-					     BIT(BH_Dirty)|BIT(BH_Uptodate));
+	if (folio_test_checked(folio)) {
+		folio_clear_checked(folio);
+		if (!folio_buffers(folio)) {
+			folio_create_empty_buffers(folio,
+					inode->i_sb->s_blocksize,
+					BIT(BH_Dirty)|BIT(BH_Uptodate));
 		}
-		gfs2_trans_add_databufs(ip, page_folio(page), 0, PAGE_SIZE);
+		gfs2_trans_add_databufs(ip, folio, 0, folio_size(folio));
 	}
-	return gfs2_write_jdata_page(page, wbc);
+	return gfs2_write_jdata_page(&folio->page, wbc);
 }
 
 /**
@@ -159,7 +160,7 @@ static int gfs2_jdata_writepage(struct page *page, struct writeback_control *wbc
 		goto out;
 	if (folio_test_checked(folio) || current->journal_info)
 		goto out_ignore;
-	return __gfs2_jdata_writepage(&folio->page, wbc);
+	return __gfs2_jdata_write_folio(folio, wbc);
 
 out_ignore:
 	folio_redirty_for_writepage(wbc, folio);
@@ -256,7 +257,7 @@ static int gfs2_write_jdata_batch(struct address_space *mapping,
 
 			trace_wbc_writepage(wbc, inode_to_bdi(inode));
 
-			ret = __gfs2_jdata_writepage(&folio->page, wbc);
+			ret = __gfs2_jdata_write_folio(folio, wbc);
 			if (unlikely(ret)) {
 				if (ret == AOP_WRITEPAGE_ACTIVATE) {
 					folio_unlock(folio);
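
The &folio->page seen above is the series' bridge for callees that still
take a page; it works because a folio's first struct page is its head page.
A sketch of the two directions (illustrative only):

    struct folio *folio = page_folio(page); /* page -> containing folio */
    struct page *head = &folio->page;       /* folio -> its head page   */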
From patchwork Mon Jun 12 21:01:30 2023
X-Patchwork-Id: 13277332
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", cluster-devel@redhat.com,
 Hannes Reinecke, Luis Chamberlain, Andrew Morton,
 Andreas Gruenbacher, Bob Peterson
Subject: [PATCH v3 03/14] gfs2: Convert gfs2_write_jdata_page() to gfs2_write_jdata_folio()
Date: Mon, 12 Jun 2023 22:01:30 +0100
Message-Id: <20230612210141.730128-4-willy@infradead.org>
In-Reply-To: <20230612210141.730128-1-willy@infradead.org>
References: <20230612210141.730128-1-willy@infradead.org>

Add support for large folios and remove some accesses to page->mapping
and page->index.

Signed-off-by: Matthew Wilcox (Oracle)
Tested-by: Bob Peterson
Reviewed-by: Bob Peterson
---
 fs/gfs2/aops.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index 749135252d52..ec5b5c1ea634 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -82,33 +82,33 @@ static int gfs2_get_block_noalloc(struct inode *inode, sector_t lblock,
 }
 
 /**
- * gfs2_write_jdata_page - gfs2 jdata-specific version of block_write_full_page
- * @page: The page to write
+ * gfs2_write_jdata_folio - gfs2 jdata-specific version of block_write_full_page
+ * @folio: The folio to write
  * @wbc: The writeback control
  *
  * This is the same as calling block_write_full_page, but it also
 * writes pages outside of i_size
 */
-static int gfs2_write_jdata_page(struct page *page,
+static int gfs2_write_jdata_folio(struct folio *folio,
				 struct writeback_control *wbc)
 {
-	struct inode * const inode = page->mapping->host;
+	struct inode * const inode = folio->mapping->host;
	loff_t i_size = i_size_read(inode);
-	const pgoff_t end_index = i_size >> PAGE_SHIFT;
-	unsigned offset;
 
	/*
-	 * The page straddles i_size. It must be zeroed out on each and every
+	 * The folio straddles i_size. It must be zeroed out on each and every
	 * writepage invocation because it may be mmapped. "A file is mapped
	 * in multiples of the page size. For a file that is not a multiple of
-	 * the page size, the remaining memory is zeroed when mapped, and
+	 * the page size, the remaining memory is zeroed when mapped, and
	 * writes to that region are not written out to the file."
	 */
-	offset = i_size & (PAGE_SIZE - 1);
-	if (page->index == end_index && offset)
-		zero_user_segment(page, offset, PAGE_SIZE);
+	if (folio_pos(folio) < i_size &&
+	    i_size < folio_pos(folio) + folio_size(folio))
+		folio_zero_segment(folio, offset_in_folio(folio, i_size),
+				folio_size(folio));
 
-	return __block_write_full_page(inode, page, gfs2_get_block_noalloc, wbc,
+	return __block_write_full_page(inode, &folio->page,
+			gfs2_get_block_noalloc, wbc,
			end_buffer_async_write);
 }
 
@@ -137,7 +137,7 @@ static int __gfs2_jdata_write_folio(struct folio *folio,
		}
		gfs2_trans_add_databufs(ip, folio, 0, folio_size(folio));
	}
-	return gfs2_write_jdata_page(&folio->page, wbc);
+	return gfs2_write_jdata_folio(folio, wbc);
 }
 
 /**
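
A worked example of the new straddle test (the numbers are mine, not the
patch's): with 4KiB pages, a four-page folio at file position 0 covers
bytes 0-16383; if i_size is 10000, then folio_pos() < i_size <
folio_pos() + folio_size(), so only bytes 10000-16383 are zeroed.

    /* Sketch: folio_pos() == 0, folio_size() == 16384, i_size == 10000. */
    if (folio_pos(folio) < i_size &&
        i_size < folio_pos(folio) + folio_size(folio))
            folio_zero_segment(folio, offset_in_folio(folio, i_size),
                               folio_size(folio)); /* zeroes 10000..16383 */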
From patchwork Mon Jun 12 21:01:31 2023
X-Patchwork-Id: 13277342
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", cluster-devel@redhat.com,
 Hannes Reinecke, Luis Chamberlain, Andrew Morton,
 Andreas Gruenbacher, Bob Peterson
Subject: [PATCH v3 04/14] buffer: Convert __block_write_full_page() to __block_write_full_folio()
Date: Mon, 12 Jun 2023 22:01:31 +0100
Message-Id: <20230612210141.730128-5-willy@infradead.org>
In-Reply-To: <20230612210141.730128-1-willy@infradead.org>
References: <20230612210141.730128-1-willy@infradead.org>

Remove nine hidden calls to compound_head() by using a folio instead
of a page.

Signed-off-by: Matthew Wilcox (Oracle)
Tested-by: Bob Peterson
Reviewed-by: Bob Peterson
---
 fs/buffer.c                 | 53 +++++++++++++++++++------------------
 fs/gfs2/aops.c              |  5 ++--
 fs/ntfs/aops.c              |  2 +-
 fs/reiserfs/inode.c         |  2 +-
 include/linux/buffer_head.h |  2 +-
 5 files changed, 32 insertions(+), 32 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index a7fc561758b1..4d518df50fab 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1764,7 +1764,7 @@ static struct buffer_head *folio_create_buffers(struct folio *folio,
 * WB_SYNC_ALL, the writes are posted using REQ_SYNC; this
 * causes the writes to be flagged as synchronous writes.
 */
-int __block_write_full_page(struct inode *inode, struct page *page,
+int __block_write_full_folio(struct inode *inode, struct folio *folio,
			get_block_t *get_block, struct writeback_control *wbc,
			bh_end_io_t *handler)
 {
@@ -1776,14 +1776,14 @@ int __block_write_full_page(struct inode *inode, struct page *page,
	int nr_underway = 0;
	blk_opf_t write_flags = wbc_to_write_flags(wbc);
 
-	head = folio_create_buffers(page_folio(page), inode,
+	head = folio_create_buffers(folio, inode,
				(1 << BH_Dirty) | (1 << BH_Uptodate));
 
	/*
	 * Be very careful. We have no exclusion from block_dirty_folio
	 * here, and the (potentially unmapped) buffers may become dirty at
	 * any time. If a buffer becomes dirty here after we've inspected it
-	 * then we just miss that fact, and the page stays dirty.
+	 * then we just miss that fact, and the folio stays dirty.
	 *
	 * Buffers outside i_size may be dirtied by block_dirty_folio;
	 * handle that here by just cleaning them.
@@ -1793,7 +1793,7 @@ int __block_write_full_page(struct inode *inode, struct page *page,
	blocksize = bh->b_size;
	bbits = block_size_bits(blocksize);
 
-	block = (sector_t)page->index << (PAGE_SHIFT - bbits);
+	block = (sector_t)folio->index << (PAGE_SHIFT - bbits);
	last_block = (i_size_read(inode) - 1) >> bbits;
 
	/*
@@ -1804,7 +1804,7 @@ int __block_write_full_page(struct inode *inode, struct page *page,
		if (block > last_block) {
			/*
			 * mapped buffers outside i_size will occur, because
-			 * this page can be outside i_size when there is a
+			 * this folio can be outside i_size when there is a
			 * truncate in progress.
			 */
			/*
@@ -1834,7 +1834,7 @@ int __block_write_full_page(struct inode *inode, struct page *page,
			continue;
		/*
		 * If it's a fully non-blocking write attempt and we cannot
-		 * lock the buffer then redirty the page.  Note that this can
+		 * lock the buffer then redirty the folio.  Note that this can
		 * potentially cause a busy-wait loop from writeback threads
		 * and kswapd activity, but those code paths have their own
		 * higher-level throttling.
@@ -1842,7 +1842,7 @@ int __block_write_full_page(struct inode *inode, struct page *page,
		if (wbc->sync_mode != WB_SYNC_NONE) {
			lock_buffer(bh);
		} else if (!trylock_buffer(bh)) {
-			redirty_page_for_writepage(wbc, page);
+			folio_redirty_for_writepage(wbc, folio);
			continue;
		}
		if (test_clear_buffer_dirty(bh)) {
@@ -1853,11 +1853,11 @@ int __block_write_full_page(struct inode *inode, struct page *page,
	} while ((bh = bh->b_this_page) != head);
 
	/*
-	 * The page and its buffers are protected by PageWriteback(), so we can
-	 * drop the bh refcounts early.
+	 * The folio and its buffers are protected by the writeback flag,
+	 * so we can drop the bh refcounts early.
	 */
-	BUG_ON(PageWriteback(page));
-	set_page_writeback(page);
+	BUG_ON(folio_test_writeback(folio));
+	folio_start_writeback(folio);
 
	do {
		struct buffer_head *next = bh->b_this_page;
@@ -1867,20 +1867,20 @@ int __block_write_full_page(struct inode *inode, struct page *page,
		}
		bh = next;
	} while (bh != head);
-	unlock_page(page);
+	folio_unlock(folio);
 
	err = 0;
done:
	if (nr_underway == 0) {
		/*
-		 * The page was marked dirty, but the buffers were
+		 * The folio was marked dirty, but the buffers were
		 * clean.  Someone wrote them back by hand with
		 * write_dirty_buffer/submit_bh.  A rare case.
		 */
-		end_page_writeback(page);
+		folio_end_writeback(folio);
 
		/*
-		 * The page and buffer_heads can be released at any time from
+		 * The folio and buffer_heads can be released at any time from
		 * here on.
		 */
	}
@@ -1891,7 +1891,7 @@ int __block_write_full_page(struct inode *inode, struct page *page,
	 * ENOSPC, or some other error.  We may already have added some
	 * blocks to the file, so we need to write these out to avoid
	 * exposing stale data.
-	 * The page is currently locked and not marked for writeback
+	 * The folio is currently locked and not marked for writeback
	 */
	bh = head;
	/* Recovery: lock and submit the mapped buffers */
@@ -1903,15 +1903,15 @@ int __block_write_full_page(struct inode *inode, struct page *page,
		} else {
			/*
			 * The buffer may have been set dirty during
-			 * attachment to a dirty page.
+			 * attachment to a dirty folio.
			 */
			clear_buffer_dirty(bh);
		}
	} while ((bh = bh->b_this_page) != head);
-	SetPageError(page);
-	BUG_ON(PageWriteback(page));
-	mapping_set_error(page->mapping, err);
-	set_page_writeback(page);
+	folio_set_error(folio);
+	BUG_ON(folio_test_writeback(folio));
+	mapping_set_error(folio->mapping, err);
+	folio_start_writeback(folio);
	do {
		struct buffer_head *next = bh->b_this_page;
		if (buffer_async_write(bh)) {
@@ -1921,10 +1921,10 @@ int __block_write_full_page(struct inode *inode, struct page *page,
		}
		bh = next;
	} while (bh != head);
-	unlock_page(page);
+	folio_unlock(folio);
	goto done;
 }
-EXPORT_SYMBOL(__block_write_full_page);
+EXPORT_SYMBOL(__block_write_full_folio);
 
 /*
 * If a page has any new buffers, zero them out here, and mark them uptodate
@@ -2677,6 +2677,7 @@ EXPORT_SYMBOL(block_truncate_page);
 int block_write_full_page(struct page *page, get_block_t *get_block,
			struct writeback_control *wbc)
 {
+	struct folio *folio = page_folio(page);
	struct inode * const inode = page->mapping->host;
	loff_t i_size = i_size_read(inode);
	const pgoff_t end_index = i_size >> PAGE_SHIFT;
@@ -2684,13 +2685,13 @@ int block_write_full_page(struct page *page, get_block_t *get_block,
 
	/* Is the page fully inside i_size? */
	if (page->index < end_index)
-		return __block_write_full_page(inode, page, get_block, wbc,
+		return __block_write_full_folio(inode, folio, get_block, wbc,
					       end_buffer_async_write);
 
	/* Is the page fully outside i_size? (truncate in progress) */
	offset = i_size & (PAGE_SIZE-1);
	if (page->index >= end_index+1 || !offset) {
-		unlock_page(page);
+		folio_unlock(folio);
		return 0; /* don't care */
	}
 
@@ -2702,7 +2703,7 @@ int block_write_full_page(struct page *page, get_block_t *get_block,
	 * writes to that region are not written out to the file."
	 */
	zero_user_segment(page, offset, PAGE_SIZE);
-	return __block_write_full_page(inode, page, get_block, wbc,
+	return __block_write_full_folio(inode, folio, get_block, wbc,
							end_buffer_async_write);
 }
 EXPORT_SYMBOL(block_write_full_page);
diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index ec5b5c1ea634..3a2be1901e1e 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -107,9 +107,8 @@ static int gfs2_write_jdata_folio(struct folio *folio,
		folio_zero_segment(folio, offset_in_folio(folio, i_size),
				folio_size(folio));
 
-	return __block_write_full_page(inode, &folio->page,
-			gfs2_get_block_noalloc, wbc,
-			end_buffer_async_write);
+	return __block_write_full_folio(inode, folio, gfs2_get_block_noalloc,
+			wbc, end_buffer_async_write);
 }
 
 /**
diff --git a/fs/ntfs/aops.c b/fs/ntfs/aops.c
index e8aeba124a95..4e158bce4192 100644
--- a/fs/ntfs/aops.c
+++ b/fs/ntfs/aops.c
@@ -526,7 +526,7 @@ static int ntfs_read_folio(struct file *file, struct folio *folio)
 *
 * Return 0 on success and -errno on error.
 *
- * Based on ntfs_read_block() and __block_write_full_page().
+ * Based on ntfs_read_block() and __block_write_full_folio().
 */
 static int ntfs_write_block(struct page *page, struct writeback_control *wbc)
 {
diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c
index d8debbb6105f..ff34ee49106f 100644
--- a/fs/reiserfs/inode.c
+++ b/fs/reiserfs/inode.c
@@ -2506,7 +2506,7 @@ static int map_block_for_writepage(struct inode *inode,
 
 /*
 * mason@suse.com: updated in 2.5.54 to follow the same general io
- * start/recovery path as __block_write_full_page, along with special
+ * start/recovery path as __block_write_full_folio, along with special
 * code to handle reiserfs tails.
 */
 static int reiserfs_write_full_page(struct page *page,
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index 1520793c72da..a366e01f8bd4 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -263,7 +263,7 @@ extern int buffer_heads_over_limit;
 void block_invalidate_folio(struct folio *folio, size_t offset, size_t length);
 int block_write_full_page(struct page *page, get_block_t *get_block,
				struct writeback_control *wbc);
-int __block_write_full_page(struct inode *inode, struct page *page,
+int __block_write_full_folio(struct inode *inode, struct folio *folio,
			get_block_t *get_block, struct writeback_control *wbc,
			bh_end_io_t *handler);
 int block_read_full_folio(struct folio *, get_block_t *);
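
One detail worth noting in the buffer.c hunks above (my gloss, not from the
patch): folio->index remains in PAGE_SIZE units whatever the folio's size,
so the first-block computation carries over from the page version unchanged.

    /* Sketch: first disk block covered by a folio; bbits is the
     * block-size shift, as in __block_write_full_folio(). */
    sector_t block = (sector_t)folio->index << (PAGE_SHIFT - bbits);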
From patchwork Mon Jun 12 21:01:32 2023
X-Patchwork-Id: 13277441
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", cluster-devel@redhat.com,
 Hannes Reinecke, Luis Chamberlain, Andrew Morton,
 Andreas Gruenbacher, Bob Peterson
Subject: [PATCH v3 05/14] gfs2: Support ludicrously large folios in gfs2_trans_add_databufs()
Date: Mon, 12 Jun 2023 22:01:32 +0100
Message-Id: <20230612210141.730128-6-willy@infradead.org>
In-Reply-To: <20230612210141.730128-1-willy@infradead.org>
References: <20230612210141.730128-1-willy@infradead.org>

We may someday support folios larger than 4GB, so use a size_t for
the byte count within a folio to prevent unpleasant truncations.

Signed-off-by: Matthew Wilcox (Oracle)
Tested-by: Bob Peterson
Reviewed-by: Bob Peterson
---
 fs/gfs2/aops.c | 6 +++---
 fs/gfs2/aops.h | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index 3a2be1901e1e..1c407eba1e30 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -38,13 +38,13 @@
 
 void gfs2_trans_add_databufs(struct gfs2_inode *ip, struct folio *folio,
-			     unsigned int from, unsigned int len)
+			     size_t from, size_t len)
 {
	struct buffer_head *head = folio_buffers(folio);
	unsigned int bsize = head->b_size;
	struct buffer_head *bh;
-	unsigned int to = from + len;
-	unsigned int start, end;
+	size_t to = from + len;
+	size_t start, end;
 
	for (bh = head, start = 0; bh != head || !start;
	     bh = bh->b_this_page, start = end) {
diff --git a/fs/gfs2/aops.h b/fs/gfs2/aops.h
index 09db1914425e..f08322ef41cf 100644
--- a/fs/gfs2/aops.h
+++ b/fs/gfs2/aops.h
@@ -10,6 +10,6 @@
 extern void adjust_fs_space(struct inode *inode);
 extern void gfs2_trans_add_databufs(struct gfs2_inode *ip, struct folio *folio,
-				    unsigned int from, unsigned int len);
+				    size_t from, size_t len);
 
 #endif /* __AOPS_DOT_H__ */
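
To illustrate the truncation the commit message guards against (example
values are mine): a 4GiB folio has folio_size() == 1ULL << 32, which does
not fit in 32 bits.

    size_t ok = folio_size(folio);         /* 0x100000000 is preserved */
    unsigned int bad = folio_size(folio);  /* truncates to 0 at 4GiB   */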
From patchwork Mon Jun 12 21:01:33 2023
X-Patchwork-Id: 13277335
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", cluster-devel@redhat.com,
 Hannes Reinecke, Luis Chamberlain, Andrew Morton,
 Andreas Gruenbacher, Bob Peterson
Subject: [PATCH v3 06/14] buffer: Make block_write_full_page() handle large folios correctly
Date: Mon, 12 Jun 2023 22:01:33 +0100
Message-Id: <20230612210141.730128-7-willy@infradead.org>
In-Reply-To: <20230612210141.730128-1-willy@infradead.org>
References: <20230612210141.730128-1-willy@infradead.org>

Keep the interface as struct page, but work entirely on the folio
internally.  Removes several PAGE_SIZE assumptions and removes some
references to page->index and page->mapping.

Signed-off-by: Matthew Wilcox (Oracle)
Tested-by: Bob Peterson
Reviewed-by: Bob Peterson
---
 fs/buffer.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 4d518df50fab..34ecf55d2f12 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2678,33 +2678,31 @@ int block_write_full_page(struct page *page, get_block_t *get_block,
			struct writeback_control *wbc)
 {
	struct folio *folio = page_folio(page);
-	struct inode * const inode = page->mapping->host;
+	struct inode * const inode = folio->mapping->host;
	loff_t i_size = i_size_read(inode);
-	const pgoff_t end_index = i_size >> PAGE_SHIFT;
-	unsigned offset;
 
-	/* Is the page fully inside i_size? */
-	if (page->index < end_index)
+	/* Is the folio fully inside i_size? */
+	if (folio_pos(folio) + folio_size(folio) <= i_size)
		return __block_write_full_folio(inode, folio, get_block, wbc,
					       end_buffer_async_write);
 
-	/* Is the page fully outside i_size? (truncate in progress) */
-	offset = i_size & (PAGE_SIZE-1);
-	if (page->index >= end_index+1 || !offset) {
+	/* Is the folio fully outside i_size? (truncate in progress) */
+	if (folio_pos(folio) >= i_size) {
		folio_unlock(folio);
		return 0; /* don't care */
	}
 
	/*
-	 * The page straddles i_size.  It must be zeroed out on each and every
+	 * The folio straddles i_size.  It must be zeroed out on each and every
	 * writepage invocation because it may be mmapped.  "A file is mapped
	 * in multiples of the page size.  For a file that is not a multiple of
-	 * the page size, the remaining memory is zeroed when mapped, and
+	 * the page size, the remaining memory is zeroed when mapped, and
	 * writes to that region are not written out to the file."
	 */
-	zero_user_segment(page, offset, PAGE_SIZE);
+	folio_zero_segment(folio, offset_in_folio(folio, i_size),
+			folio_size(folio));
	return __block_write_full_folio(inode, folio, get_block, wbc,
-							end_buffer_async_write);
+			end_buffer_async_write);
 }
 EXPORT_SYMBOL(block_write_full_page);
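
The rewritten function now distinguishes three cases purely by byte
arithmetic, so the logic holds for any folio size (a sketch of the
structure, not the patch's literal code):

    loff_t pos = folio_pos(folio);

    if (pos + folio_size(folio) <= i_size) {
            /* fully inside i_size: write the whole folio */
    } else if (pos >= i_size) {
            /* fully outside i_size: a truncate is racing; skip it */
    } else {
            /* straddles i_size: zero the tail beyond EOF, then write */
    }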
From patchwork Mon Jun 12 21:01:34 2023
X-Patchwork-Id: 13277340
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", cluster-devel@redhat.com,
 Hannes Reinecke, Luis Chamberlain, Andrew Morton,
 Andreas Gruenbacher
Subject: [PATCH v3 07/14] buffer: Convert block_page_mkwrite() to use a folio
Date: Mon, 12 Jun 2023 22:01:34 +0100
Message-Id: <20230612210141.730128-8-willy@infradead.org>
In-Reply-To: <20230612210141.730128-1-willy@infradead.org>
References: <20230612210141.730128-1-willy@infradead.org>

If any page in a folio is dirtied, dirty the entire folio.  Removes a
number of hidden calls to compound_head() and references to
page->mapping and page->index.  Fixes a pre-existing bug where we could
mark a folio as dirty if the file is truncated to a multiple of the
page size just as we take the page fault.  I don't believe this bug has
any bad effect, it's just inefficient.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/buffer.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 34ecf55d2f12..0af167e8a9c6 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2564,38 +2564,37 @@ EXPORT_SYMBOL(block_commit_write);
 int block_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf,
			 get_block_t get_block)
 {
-	struct page *page = vmf->page;
+	struct folio *folio = page_folio(vmf->page);
	struct inode *inode = file_inode(vma->vm_file);
	unsigned long end;
	loff_t size;
	int ret;
 
-	lock_page(page);
+	folio_lock(folio);
	size = i_size_read(inode);
-	if ((page->mapping != inode->i_mapping) ||
-	    (page_offset(page) > size)) {
+	if ((folio->mapping != inode->i_mapping) ||
+	    (folio_pos(folio) >= size)) {
		/* We overload EFAULT to mean page got truncated */
		ret = -EFAULT;
		goto out_unlock;
	}
 
-	/* page is wholly or partially inside EOF */
-	if (((page->index + 1) << PAGE_SHIFT) > size)
-		end = size & ~PAGE_MASK;
-	else
-		end = PAGE_SIZE;
+	end = folio_size(folio);
+	/* folio is wholly or partially inside EOF */
+	if (folio_pos(folio) + end > size)
+		end = size - folio_pos(folio);
 
-	ret = __block_write_begin(page, 0, end, get_block);
+	ret = __block_write_begin_int(folio, 0, end, get_block, NULL);
	if (!ret)
-		ret = block_commit_write(page, 0, end);
+		ret = block_commit_write(&folio->page, 0, end);
 
	if (unlikely(ret < 0))
		goto out_unlock;
-	set_page_dirty(page);
-	wait_for_stable_page(page);
+	folio_mark_dirty(folio);
+	folio_wait_stable(folio);
	return 0;
 out_unlock:
-	unlock_page(page);
+	folio_unlock(folio);
	return ret;
 }
 EXPORT_SYMBOL(block_page_mkwrite);
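
The pre-existing bug mentioned above is the EOF comparison: the old
page_offset(page) > size test let a folio starting exactly at a
page-aligned EOF through, where the new folio_pos(folio) >= size rejects
it.  The clamp then works out like this (example values are mine): with
size == 5000 and a 4KiB folio at position 4096, end starts at 4096 and is
clamped to 5000 - 4096 = 904, so only the bytes inside EOF are prepared
and committed.

    end = folio_size(folio);                /* 4096           */
    if (folio_pos(folio) + end > size)
            end = size - folio_pos(folio);  /* clamped to 904 */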
From patchwork Mon Jun 12 21:01:35 2023
X-Patchwork-Id: 13277440
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", cluster-devel@redhat.com,
 Hannes Reinecke, Luis Chamberlain, Andrew Morton,
 Andreas Gruenbacher
Subject: [PATCH v3 08/14] buffer: Convert __block_commit_write() to take a folio
Date: Mon, 12 Jun 2023 22:01:35 +0100
Message-Id: <20230612210141.730128-9-willy@infradead.org>
In-Reply-To: <20230612210141.730128-1-willy@infradead.org>
References: <20230612210141.730128-1-willy@infradead.org>

This removes a hidden call to compound_head() inside
__block_commit_write() and moves it to those callers which are still
page based.  Also make block_write_end() safe for large folios.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/buffer.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 0af167e8a9c6..97c64b05151f 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2116,15 +2116,15 @@ int __block_write_begin(struct page *page, loff_t pos, unsigned len,
 }
 EXPORT_SYMBOL(__block_write_begin);
 
-static int __block_commit_write(struct inode *inode, struct page *page,
-		unsigned from, unsigned to)
+static int __block_commit_write(struct inode *inode, struct folio *folio,
+		size_t from, size_t to)
 {
-	unsigned block_start, block_end;
-	int partial = 0;
+	size_t block_start, block_end;
+	bool partial = false;
	unsigned blocksize;
	struct buffer_head *bh, *head;
 
-	bh = head = page_buffers(page);
+	bh = head = folio_buffers(folio);
	blocksize = bh->b_size;
 
	block_start = 0;
@@ -2132,7 +2132,7 @@ static int __block_commit_write(struct inode *inode, struct page *page,
		block_end = block_start + blocksize;
		if (block_end <= from || block_start >= to) {
			if (!buffer_uptodate(bh))
-				partial = 1;
+				partial = true;
		} else {
			set_buffer_uptodate(bh);
			mark_buffer_dirty(bh);
@@ -2147,11 +2147,11 @@ static int __block_commit_write(struct inode *inode, struct page *page,
	/*
	 * If this is a partial write which happened to make all buffers
	 * uptodate then we can optimize away a bogus read_folio() for
-	 * the next read(). Here we 'discover' whether the page went
+	 * the next read(). Here we 'discover' whether the folio went
	 * uptodate as a result of this (potentially partial) write.
	 */
	if (!partial)
-		SetPageUptodate(page);
+		folio_mark_uptodate(folio);
	return 0;
 }
 
@@ -2188,10 +2188,9 @@ int block_write_end(struct file *file, struct address_space *mapping,
			loff_t pos, unsigned len, unsigned copied,
			struct page *page, void *fsdata)
 {
+	struct folio *folio = page_folio(page);
	struct inode *inode = mapping->host;
-	unsigned start;
-
-	start = pos & (PAGE_SIZE - 1);
+	size_t start = pos - folio_pos(folio);
 
	if (unlikely(copied < len)) {
		/*
@@ -2203,18 +2202,18 @@ int block_write_end(struct file *file, struct address_space *mapping,
		 * read_folio might come in and destroy our partial write.
		 *
		 * Do the simplest thing, and just treat any short write to a
-		 * non uptodate page as a zero-length write, and force the
+		 * non uptodate folio as a zero-length write, and force the
		 * caller to redo the whole thing.
		 */
-		if (!PageUptodate(page))
+		if (!folio_test_uptodate(folio))
			copied = 0;
 
-		page_zero_new_buffers(page, start+copied, start+len);
+		page_zero_new_buffers(&folio->page, start+copied, start+len);
	}
-	flush_dcache_page(page);
+	flush_dcache_folio(folio);
 
	/* This could be a short (even 0-length) commit */
-	__block_commit_write(inode, page, start, start+copied);
+	__block_commit_write(inode, folio, start, start + copied);
 
	return copied;
 }
@@ -2537,8 +2536,9 @@ EXPORT_SYMBOL(cont_write_begin);
 
 int block_commit_write(struct page *page, unsigned from, unsigned to)
 {
-	struct inode *inode = page->mapping->host;
-	__block_commit_write(inode,page,from,to);
+	struct folio *folio = page_folio(page);
+	struct inode *inode = folio->mapping->host;
+	__block_commit_write(inode, folio, from, to);
	return 0;
 }
 EXPORT_SYMBOL(block_commit_write);
@@ -2586,7 +2586,7 @@ int block_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf,
 
	ret = __block_write_begin_int(folio, 0, end, get_block, NULL);
	if (!ret)
-		ret = block_commit_write(&folio->page, 0, end);
+		ret = __block_commit_write(inode, folio, 0, end);
 
	if (unlikely(ret < 0))
		goto out_unlock;
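
Why block_write_end() needed the new start computation (example values are
mine): in a 16KiB folio at file position 0 with pos == 5000, the old mask
yields 5000 & 4095 == 904, pointing into the wrong page of the folio; the
subtraction yields 5000, the correct offset within the folio.

    size_t start_old = pos & (PAGE_SIZE - 1);  /* wrong past one page */
    size_t start_new = pos - folio_pos(folio); /* right for any size  */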
From patchwork Mon Jun 12 21:01:36 2023
X-Patchwork-Id: 13277339
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", cluster-devel@redhat.com,
 Hannes Reinecke, Luis Chamberlain, Andrew Morton,
 Andreas Gruenbacher
Subject: [PATCH v3 09/14] buffer: Convert page_zero_new_buffers() to folio_zero_new_buffers()
Date: Mon, 12 Jun 2023 22:01:36 +0100
Message-Id: <20230612210141.730128-10-willy@infradead.org>
In-Reply-To: <20230612210141.730128-1-willy@infradead.org>
References: <20230612210141.730128-1-willy@infradead.org>

Most of the callers already have a folio; convert reiserfs_write_end()
to have a folio.  Removes a couple of hidden calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/buffer.c                 | 27 ++++++++++++++-------------
 fs/ext4/inode.c             |  4 ++--
 fs/reiserfs/inode.c         |  7 ++++---
 include/linux/buffer_head.h |  2 +-
 4 files changed, 21 insertions(+), 19 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 97c64b05151f..e4bd465ecee8 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1927,33 +1927,34 @@ int __block_write_full_folio(struct inode *inode, struct folio *folio,
 EXPORT_SYMBOL(__block_write_full_folio);
 
 /*
- * If a page has any new buffers, zero them out here, and mark them uptodate
+ * If a folio has any new buffers, zero them out here, and mark them uptodate
 * and dirty so they'll be written out (in order to prevent uninitialised
 * block data from leaking). And clear the new bit.
 */
-void page_zero_new_buffers(struct page *page, unsigned from, unsigned to)
+void folio_zero_new_buffers(struct folio *folio, size_t from, size_t to)
 {
-	unsigned int block_start, block_end;
+	size_t block_start, block_end;
	struct buffer_head *head, *bh;
 
-	BUG_ON(!PageLocked(page));
-	if (!page_has_buffers(page))
+	BUG_ON(!folio_test_locked(folio));
+	head = folio_buffers(folio);
+	if (!head)
		return;
 
-	bh = head = page_buffers(page);
+	bh = head;
	block_start = 0;
	do {
		block_end = block_start + bh->b_size;
 
		if (buffer_new(bh)) {
			if (block_end > from && block_start < to) {
-				if (!PageUptodate(page)) {
-					unsigned start, size;
+				if (!folio_test_uptodate(folio)) {
+					size_t start, xend;
 
					start = max(from, block_start);
-					size = min(to, block_end) - start;
+					xend = min(to, block_end);
 
-					zero_user(page, start, size);
+					folio_zero_segment(folio, start, xend);
					set_buffer_uptodate(bh);
				}
 
@@ -1966,7 +1967,7 @@ void page_zero_new_buffers(struct page *page, unsigned from, unsigned to)
		bh = bh->b_this_page;
	} while (bh != head);
 }
-EXPORT_SYMBOL(page_zero_new_buffers);
+EXPORT_SYMBOL(folio_zero_new_buffers);
 
 static void
 iomap_to_bh(struct inode *inode, sector_t block, struct buffer_head *bh,
@@ -2104,7 +2105,7 @@ int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
			err = -EIO;
	}
	if (unlikely(err))
-		page_zero_new_buffers(&folio->page, from, to);
+		folio_zero_new_buffers(folio, from, to);
	return err;
 }
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 02de439bf1f0..9ca583360166 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1093,7 +1093,7 @@ static int ext4_block_write_begin(struct folio *folio, loff_t pos, unsigned len,
			err = -EIO;
	}
	if (unlikely(err)) {
-		page_zero_new_buffers(&folio->page, from, to);
+		folio_zero_new_buffers(folio, from, to);
	} else if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
		for (i = 0; i < nr_wait; i++) {
			int err2;
@@ -1339,7 +1339,7 @@ static int ext4_write_end(struct file *file,
 }
 
 /*
- * This is a private version of page_zero_new_buffers() which doesn't
+ * This is a private version of folio_zero_new_buffers() which doesn't
 * set the buffer to be dirty, since in data=journalled mode we need
 * to call ext4_dirty_journalled_data() instead.
 */
diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c
index ff34ee49106f..77bd3b27059f 100644
--- a/fs/reiserfs/inode.c
+++ b/fs/reiserfs/inode.c
@@ -2872,6 +2872,7 @@ static int reiserfs_write_end(struct file *file, struct address_space *mapping,
			loff_t pos, unsigned len, unsigned copied,
			struct page *page, void *fsdata)
 {
+	struct folio *folio = page_folio(page);
	struct inode *inode = page->mapping->host;
	int ret = 0;
	int update_sd = 0;
@@ -2887,12 +2888,12 @@ static int reiserfs_write_end(struct file *file, struct address_space *mapping,
 
	start = pos & (PAGE_SIZE - 1);
	if (unlikely(copied < len)) {
-		if (!PageUptodate(page))
+		if (!folio_test_uptodate(folio))
			copied = 0;
 
-		page_zero_new_buffers(page, start + copied, start + len);
+		folio_zero_new_buffers(folio, start + copied, start + len);
	}
-	flush_dcache_page(page);
+	flush_dcache_folio(folio);
 
	reiserfs_commit_page(inode, page, start, start + copied);
 
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index a366e01f8bd4..c794ea7096ba 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -278,7 +278,7 @@ int block_write_end(struct file *, struct address_space *,
 int generic_write_end(struct file *, struct address_space *,
				loff_t, unsigned, unsigned,
				struct page *, void *);
-void page_zero_new_buffers(struct page *page, unsigned from, unsigned to);
+void folio_zero_new_buffers(struct folio *folio, size_t from, size_t to);
 void clean_page_buffers(struct page *page);
 int cont_write_begin(struct file *, struct address_space *, loff_t,
			unsigned, struct page **, void **,
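
Note the start/size to start/xend switch in the buffer.c hunk above: the
two zeroing helpers express the same byte range differently (a sketch
using the patch's own variables):

    zero_user(&folio->page, start, xend - start); /* old: offset + length */
    folio_zero_segment(folio, start, xend);       /* new: offset + end    */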
From patchwork Mon Jun 12 21:01:37 2023
X-Patchwork-Id: 13277336
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", cluster-devel@redhat.com,
 Hannes Reinecke, Luis Chamberlain, Andrew Morton,
 Andreas Gruenbacher
Subject: [PATCH v3 10/14] buffer: Convert grow_dev_page() to use a folio
Date: Mon, 12 Jun 2023 22:01:37 +0100
Message-Id: <20230612210141.730128-11-willy@infradead.org>
In-Reply-To: <20230612210141.730128-1-willy@infradead.org>
References: <20230612210141.730128-1-willy@infradead.org>

Get a folio from the page cache instead of a page, then use the folio
API throughout.  Removes a few calls to compound_head() and may be
needed to support block size > PAGE_SIZE.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/buffer.c | 34 +++++++++++++++-------------------
 1 file changed, 15 insertions(+), 19 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index e4bd465ecee8..06d031e28bee 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -976,7 +976,7 @@ grow_dev_page(struct block_device *bdev, sector_t block,
	      pgoff_t index, int size, int sizebits, gfp_t gfp)
 {
	struct inode *inode = bdev->bd_inode;
-	struct page *page;
+	struct folio *folio;
	struct buffer_head *bh;
	sector_t end_block;
	int ret = 0;
@@ -992,42 +992,38 @@ grow_dev_page(struct block_device *bdev, sector_t block,
	 */
	gfp_mask |= __GFP_NOFAIL;
 
-	page = find_or_create_page(inode->i_mapping, index, gfp_mask);
-
-	BUG_ON(!PageLocked(page));
+	folio = __filemap_get_folio(inode->i_mapping, index,
+			FGP_LOCK | FGP_ACCESSED | FGP_CREAT, gfp_mask);
 
-	if (page_has_buffers(page)) {
-		bh = page_buffers(page);
+	bh = folio_buffers(folio);
+	if (bh) {
		if (bh->b_size == size) {
-			end_block = init_page_buffers(page, bdev,
+			end_block = init_page_buffers(&folio->page, bdev,
					(sector_t)index << sizebits, size);
			goto done;
		}
-		if (!try_to_free_buffers(page_folio(page)))
+		if (!try_to_free_buffers(folio))
			goto failed;
	}
 
-	/*
-	 * Allocate some buffers for this page
-	 */
-	bh = alloc_page_buffers(page, size, true);
+	bh = folio_alloc_buffers(folio, size, true);
 
	/*
-	 * Link the page to the buffers and initialise them.  Take the
+	 * Link the folio to the buffers and initialise them.  Take the
	 * lock to be atomic wrt __find_get_block(), which does not
-	 * run under the page lock.
+	 * run under the folio lock.
	 */
	spin_lock(&inode->i_mapping->private_lock);
-	link_dev_buffers(page, bh);
-	end_block = init_page_buffers(page, bdev, (sector_t)index << sizebits,
-			size);
+	link_dev_buffers(&folio->page, bh);
+	end_block = init_page_buffers(&folio->page, bdev,
+			(sector_t)index << sizebits, size);
	spin_unlock(&inode->i_mapping->private_lock);
done:
	ret = (block < end_block) ? 1 : -ENXIO;
failed:
-	unlock_page(page);
-	put_page(page);
+	folio_unlock(folio);
+	folio_put(folio);
	return ret;
 }
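
For context (my summary of pagemap.h, not part of the patch):
find_or_create_page() is already a thin wrapper that passes exactly these
FGP flags to the folio lookup, so the patch simply skips the page round
trip.

    /* Simplified sketch of the wrapper being bypassed. */
    static inline struct page *find_or_create_page(struct address_space *mapping,
                                                   pgoff_t index, gfp_t gfp)
    {
            return pagecache_get_page(mapping, index,
                            FGP_LOCK | FGP_ACCESSED | FGP_CREAT, gfp);
    }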
From patchwork Mon Jun 12 21:01:38 2023
X-Patchwork-Id: 13277337
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", cluster-devel@redhat.com,
 Hannes Reinecke, Luis Chamberlain, Andrew Morton,
 Andreas Gruenbacher
Subject: [PATCH v3 11/14] buffer: Convert init_page_buffers() to folio_init_buffers()
Date: Mon, 12 Jun 2023 22:01:38 +0100
Message-Id: <20230612210141.730128-12-willy@infradead.org>
In-Reply-To: <20230612210141.730128-1-willy@infradead.org>
References: <20230612210141.730128-1-willy@infradead.org>

Use the folio API and pass the folio from both callers.  Saves a hidden
call to compound_head().

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/buffer.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 06d031e28bee..9b9dee417467 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -934,15 +934,14 @@ static sector_t blkdev_max_block(struct block_device *bdev, unsigned int size)
 }
 
 /*
- * Initialise the state of a blockdev page's buffers.
+ * Initialise the state of a blockdev folio's buffers.
 */
-static sector_t
-init_page_buffers(struct page *page, struct block_device *bdev,
-			sector_t block, int size)
+static sector_t folio_init_buffers(struct folio *folio,
+		struct block_device *bdev, sector_t block, int size)
 {
-	struct buffer_head *head = page_buffers(page);
+	struct buffer_head *head = folio_buffers(folio);
	struct buffer_head *bh = head;
-	int uptodate = PageUptodate(page);
+	bool uptodate = folio_test_uptodate(folio);
	sector_t end_block = blkdev_max_block(bdev, size);
 
	do {
@@ -998,9 +997,8 @@ grow_dev_page(struct block_device *bdev, sector_t block,
	bh = folio_buffers(folio);
	if (bh) {
		if (bh->b_size == size) {
-			end_block = init_page_buffers(&folio->page, bdev,
-					(sector_t)index << sizebits,
-					size);
+			end_block = folio_init_buffers(folio, bdev,
+					(sector_t)index << sizebits, size);
			goto done;
		}
		if (!try_to_free_buffers(folio))
@@ -1016,7 +1014,7 @@ grow_dev_page(struct block_device *bdev, sector_t block,
	 */
	spin_lock(&inode->i_mapping->private_lock);
	link_dev_buffers(&folio->page, bh);
-	end_block = init_page_buffers(&folio->page, bdev,
+	end_block = folio_init_buffers(folio, bdev,
			(sector_t)index << sizebits, size);
	spin_unlock(&inode->i_mapping->private_lock);
done:
From patchwork Mon Jun 12 21:01:39 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13277334
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", cluster-devel@redhat.com, Hannes Reinecke,
    Luis Chamberlain, Andrew Morton, Andreas Gruenbacher
Subject: [PATCH v3 12/14] buffer: Convert link_dev_buffers to take a folio
Date: Mon, 12 Jun 2023 22:01:39 +0100
Message-Id: <20230612210141.730128-13-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230612210141.730128-1-willy@infradead.org>
References: <20230612210141.730128-1-willy@infradead.org>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Its one caller already has a folio, so switch it to use the folio API.
Removes a hidden call to compound_head().

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/buffer.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 9b9dee417467..4ca2eb2b3dca 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -907,8 +907,8 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
 }
 EXPORT_SYMBOL_GPL(alloc_page_buffers);
 
-static inline void
-link_dev_buffers(struct page *page, struct buffer_head *head)
+static inline void link_dev_buffers(struct folio *folio,
+		struct buffer_head *head)
 {
 	struct buffer_head *bh, *tail;
 
@@ -918,7 +918,7 @@ link_dev_buffers(struct page *page, struct buffer_head *head)
 		bh = bh->b_this_page;
 	} while (bh);
 	tail->b_this_page = head;
-	attach_page_private(page, head);
+	folio_attach_private(folio, head);
 }
 
 static sector_t blkdev_max_block(struct block_device *bdev, unsigned int size)
@@ -1013,7 +1013,7 @@ grow_dev_page(struct block_device *bdev, sector_t block,
 	 * run under the folio lock.
 	 */
 	spin_lock(&inode->i_mapping->private_lock);
-	link_dev_buffers(&folio->page, bh);
+	link_dev_buffers(folio, bh);
 	end_block = folio_init_buffers(folio, bdev,
 			(sector_t)index << sizebits, size);
 	spin_unlock(&inode->i_mapping->private_lock);
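The attach_page_private() to folio_attach_private() change at the end of
link_dev_buffers() is worth spelling out: the private pointer, the
private flag and the extra reference all now live on the folio rather
than an arbitrary subpage.  A sketch of the attach/detach pairing
(illustrative helper, not part of the patch):

#include <linux/pagemap.h>
#include <linux/buffer_head.h>

/* Illustrative only: the lifecycle of folio private data. */
static void private_data_example(struct folio *folio,
		struct buffer_head *head)
{
	/* Sets the private flag, stores @head, takes a folio reference. */
	folio_attach_private(folio, head);

	/* From here, folio_buffers(folio) returns @head again. */

	/* Clears the flag, drops the reference, returns the pointer. */
	head = folio_detach_private(folio);
}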
From patchwork Mon Jun 12 21:01:40 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13277338
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", cluster-devel@redhat.com, Hannes Reinecke,
    Luis Chamberlain, Andrew Morton, Andreas Gruenbacher
Subject: [PATCH v3 13/14] buffer: Use a folio in __find_get_block_slow()
Date: Mon, 12 Jun 2023 22:01:40 +0100
Message-Id: <20230612210141.730128-14-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230612210141.730128-1-willy@infradead.org>
References: <20230612210141.730128-1-willy@infradead.org>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Saves a call to compound_head() and may be needed to support block
size > PAGE_SIZE.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/buffer.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 4ca2eb2b3dca..c38fdcaa32ff 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -195,19 +195,19 @@ __find_get_block_slow(struct block_device *bdev, sector_t block)
 	pgoff_t index;
 	struct buffer_head *bh;
 	struct buffer_head *head;
-	struct page *page;
+	struct folio *folio;
 	int all_mapped = 1;
 	static DEFINE_RATELIMIT_STATE(last_warned, HZ, 1);
 
 	index = block >> (PAGE_SHIFT - bd_inode->i_blkbits);
-	page = find_get_page_flags(bd_mapping, index, FGP_ACCESSED);
-	if (!page)
+	folio = __filemap_get_folio(bd_mapping, index, FGP_ACCESSED, 0);
+	if (IS_ERR(folio))
 		goto out;
 
 	spin_lock(&bd_mapping->private_lock);
-	if (!page_has_buffers(page))
+	head = folio_buffers(folio);
+	if (!head)
 		goto out_unlock;
-	head = page_buffers(page);
 	bh = head;
 	do {
 		if (!buffer_mapped(bh))
@@ -237,7 +237,7 @@ __find_get_block_slow(struct block_device *bdev, sector_t block)
 	}
 out_unlock:
 	spin_unlock(&bd_mapping->private_lock);
-	put_page(page);
+	folio_put(folio);
 out:
 	return ret;
 }
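One behavioural detail in this conversion deserves a note:
find_get_page_flags() signalled a cache miss with NULL, whereas
__filemap_get_folio() without FGP_CREAT signals it with an error
pointer, hence the IS_ERR() test above.  A sketch of the lookup-only
pattern (the wrapper is invented for illustration):

#include <linux/pagemap.h>
#include <linux/err.h>

/* Illustrative only: a lookup that treats a miss as "not found". */
static struct folio *find_folio_example(struct address_space *mapping,
		pgoff_t index)
{
	struct folio *folio;

	/* No FGP_CREAT: a miss yields ERR_PTR(-ENOENT), never NULL. */
	folio = __filemap_get_folio(mapping, index, FGP_ACCESSED, 0);
	if (IS_ERR(folio))
		return NULL;
	return folio;	/* caller must folio_put() when done */
}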
From patchwork Mon Jun 12 21:01:41 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13277442
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", cluster-devel@redhat.com, Hannes Reinecke,
    Luis Chamberlain, Andrew Morton, Andreas Gruenbacher
Subject: [PATCH v3 14/14] buffer: Convert block_truncate_page() to use a folio
Date: Mon, 12 Jun 2023 22:01:41 +0100
Message-Id: <20230612210141.730128-15-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230612210141.730128-1-willy@infradead.org>
References: <20230612210141.730128-1-willy@infradead.org>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Support large folios in block_truncate_page() and avoid three hidden
calls to compound_head().  Note that filemap_grab_folio(), like
__filemap_get_folio(), reports failure with an error pointer rather
than NULL, so the new check tests IS_ERR().

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/buffer.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index c38fdcaa32ff..5a5b0c9d9769 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2598,17 +2598,16 @@ int block_truncate_page(struct address_space *mapping,
 		loff_t from, get_block_t *get_block)
 {
 	pgoff_t index = from >> PAGE_SHIFT;
-	unsigned offset = from & (PAGE_SIZE-1);
 	unsigned blocksize;
 	sector_t iblock;
-	unsigned length, pos;
+	size_t offset, length, pos;
 	struct inode *inode = mapping->host;
-	struct page *page;
+	struct folio *folio;
 	struct buffer_head *bh;
 	int err = 0;
 
 	blocksize = i_blocksize(inode);
-	length = offset & (blocksize - 1);
+	length = from & (blocksize - 1);
 
 	/* Block boundary? Nothing to do */
 	if (!length)
@@ -2617,15 +2616,18 @@ int block_truncate_page(struct address_space *mapping,
 	length = blocksize - length;
 	iblock = (sector_t)index << (PAGE_SHIFT - inode->i_blkbits);
 
-	page = grab_cache_page(mapping, index);
-	if (!page)
-		return -ENOMEM;
+	folio = filemap_grab_folio(mapping, index);
+	if (IS_ERR(folio))
+		return PTR_ERR(folio);
 
-	if (!page_has_buffers(page))
-		create_empty_buffers(page, blocksize, 0);
+	bh = folio_buffers(folio);
+	if (!bh) {
+		folio_create_empty_buffers(folio, blocksize, 0);
+		bh = folio_buffers(folio);
+	}
 
 	/* Find the buffer that contains "offset" */
-	bh = page_buffers(page);
+	offset = offset_in_folio(folio, from);
 	pos = blocksize;
 	while (offset >= pos) {
 		bh = bh->b_this_page;
@@ -2644,7 +2646,7 @@ int block_truncate_page(struct address_space *mapping,
 	}
 
 	/* Ok, it's mapped. Make sure it's up-to-date */
-	if (PageUptodate(page))
+	if (folio_test_uptodate(folio))
 		set_buffer_uptodate(bh);
 
 	if (!buffer_uptodate(bh) && !buffer_delay(bh) && !buffer_unwritten(bh)) {
@@ -2654,12 +2656,12 @@ int block_truncate_page(struct address_space *mapping,
 		goto unlock;
 	}
 
-	zero_user(page, offset, length);
+	folio_zero_range(folio, offset, length);
 	mark_buffer_dirty(bh);
 
 unlock:
-	unlock_page(page);
-	put_page(page);
+	folio_unlock(folio);
+	folio_put(folio);
 	return err;
 }
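The large folio support comes down to the offset arithmetic: with a
multi-page folio, the byte offset of "from" within the folio can exceed
PAGE_SIZE, which is why offset, length and pos widen to size_t and why
offset_in_folio() replaces the old from & (PAGE_SIZE - 1) mask.  A
worked sketch, assuming for illustration a 16KiB folio on a system with
4KiB pages:

#include <linux/mm.h>
#include <linux/highmem.h>

/* Illustrative only: page-relative vs folio-relative offsets. */
static void truncate_offset_example(struct folio *folio, loff_t from)
{
	size_t offset = offset_in_folio(folio, from);

	/*
	 * If the folio is 16KiB, starts at file offset 0, and from is
	 * 12800, then offset is 12800 -- larger than PAGE_SIZE (4096),
	 * so a page-relative 'unsigned' offset would pick the wrong
	 * buffer on a large folio.
	 */

	/* Zero everything from @offset to the end of the folio. */
	folio_zero_range(folio, offset, folio_size(folio) - offset);
}

The real function zeroes only to the end of the containing block, not to
the end of the folio; the sketch merely shows the call shape.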