From patchwork Wed Jan 4 21:14:42 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
X-Patchwork-Id: 13089010
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: linux-fsdevel@vger.kernel.org
Cc: linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-btrfs@vger.kernel.org, ceph-devel@vger.kernel.org,
	linux-cifs@vger.kernel.org, linux-ext4@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, cluster-devel@redhat.com,
	linux-nilfs@vger.kernel.org, linux-mm@kvack.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v5 17/23] gfs2: Convert gfs2_write_cache_jdata() to use
 filemap_get_folios_tag()
Date: Wed, 4 Jan 2023 13:14:42 -0800
Message-Id: <20230104211448.4804-18-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230104211448.4804-1-vishal.moola@gmail.com>
References: <20230104211448.4804-1-vishal.moola@gmail.com>

Converted function to use folios throughout. This is in preparation for
the removal of find_get_pages_range_tag(). This change removes 8 calls
to compound_head().

Also modified and renamed gfs2_write_jdata_pagevec() to take and use a
folio_batch rather than a pagevec, and to operate on folios rather than
pages. gfs2_write_jdata_batch() now supports large folios.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Reviewed-by: Andreas Gruenbacher
---
 fs/gfs2/aops.c | 64 +++++++++++++++++++++++++++-----------------------
 1 file changed, 35 insertions(+), 29 deletions(-)

diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index e782b4f1d104..0a47068f9acc 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -195,67 +195,71 @@ static int gfs2_writepages(struct address_space *mapping,
 }
 
 /**
- * gfs2_write_jdata_pagevec - Write back a pagevec's worth of pages
+ * gfs2_write_jdata_batch - Write back a folio batch's worth of folios
  * @mapping: The mapping
  * @wbc: The writeback control
- * @pvec: The vector of pages
- * @nr_pages: The number of pages to write
+ * @fbatch: The batch of folios
  * @done_index: Page index
  *
  * Returns: non-zero if loop should terminate, zero otherwise
  */
-static int gfs2_write_jdata_pagevec(struct address_space *mapping,
+static int gfs2_write_jdata_batch(struct address_space *mapping,
 				    struct writeback_control *wbc,
-				    struct pagevec *pvec,
-				    int nr_pages,
+				    struct folio_batch *fbatch,
 				    pgoff_t *done_index)
 {
 	struct inode *inode = mapping->host;
 	struct gfs2_sbd *sdp = GFS2_SB(inode);
-	unsigned nrblocks = nr_pages * (PAGE_SIZE >> inode->i_blkbits);
+	unsigned nrblocks;
 	int i;
 	int ret;
+	int nr_pages = 0;
+	int nr_folios = folio_batch_count(fbatch);
+
+	for (i = 0; i < nr_folios; i++)
+		nr_pages += folio_nr_pages(fbatch->folios[i]);
+	nrblocks = nr_pages * (PAGE_SIZE >> inode->i_blkbits);
 
 	ret = gfs2_trans_begin(sdp, nrblocks, nrblocks);
 	if (ret < 0)
 		return ret;
 
-	for(i = 0; i < nr_pages; i++) {
-		struct page *page = pvec->pages[i];
+	for (i = 0; i < nr_folios; i++) {
+		struct folio *folio = fbatch->folios[i];
 
-		*done_index = page->index;
+		*done_index = folio->index;
 
-		lock_page(page);
+		folio_lock(folio);
 
-		if (unlikely(page->mapping != mapping)) {
+		if (unlikely(folio->mapping != mapping)) {
 continue_unlock:
-			unlock_page(page);
+			folio_unlock(folio);
 			continue;
 		}
 
-		if (!PageDirty(page)) {
+		if (!folio_test_dirty(folio)) {
 			/* someone wrote it for us */
 			goto continue_unlock;
 		}
 
-		if (PageWriteback(page)) {
+		if (folio_test_writeback(folio)) {
 			if (wbc->sync_mode != WB_SYNC_NONE)
-				wait_on_page_writeback(page);
+				folio_wait_writeback(folio);
 			else
 				goto continue_unlock;
 		}
 
-		BUG_ON(PageWriteback(page));
-		if (!clear_page_dirty_for_io(page))
+		BUG_ON(folio_test_writeback(folio));
+		if (!folio_clear_dirty_for_io(folio))
 			goto continue_unlock;
 
 		trace_wbc_writepage(wbc, inode_to_bdi(inode));
-		ret = __gfs2_jdata_writepage(page, wbc);
+		ret = __gfs2_jdata_writepage(&folio->page, wbc);
 		if (unlikely(ret)) {
 			if (ret == AOP_WRITEPAGE_ACTIVATE) {
-				unlock_page(page);
+				folio_unlock(folio);
 				ret = 0;
 			} else {
 
@@ -268,7 +272,8 @@ static int gfs2_write_jdata_pagevec(struct address_space *mapping,
 				 * not be suitable for data integrity
 				 * writeout).
 				 */
-				*done_index = page->index + 1;
+				*done_index = folio->index +
+					folio_nr_pages(folio);
 				ret = 1;
 				break;
 			}
@@ -305,8 +310,8 @@ static int gfs2_write_cache_jdata(struct address_space *mapping,
 {
 	int ret = 0;
 	int done = 0;
-	struct pagevec pvec;
-	int nr_pages;
+	struct folio_batch fbatch;
+	int nr_folios;
 	pgoff_t writeback_index;
 	pgoff_t index;
 	pgoff_t end;
@@ -315,7 +320,7 @@ static int gfs2_write_cache_jdata(struct address_space *mapping,
 	int range_whole = 0;
 	xa_mark_t tag;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 	if (wbc->range_cyclic) {
 		writeback_index = mapping->writeback_index; /* prev offset */
 		index = writeback_index;
@@ -341,17 +346,18 @@ static int gfs2_write_cache_jdata(struct address_space *mapping,
 		tag_pages_for_writeback(mapping, index, end);
 	done_index = index;
 	while (!done && (index <= end)) {
-		nr_pages = pagevec_lookup_range_tag(&pvec, mapping, &index, end,
-				tag);
-		if (nr_pages == 0)
+		nr_folios = filemap_get_folios_tag(mapping, &index, end,
+				tag, &fbatch);
+		if (nr_folios == 0)
 			break;
 
-		ret = gfs2_write_jdata_pagevec(mapping, wbc, &pvec, nr_pages, &done_index);
+		ret = gfs2_write_jdata_batch(mapping, wbc, &fbatch,
+				&done_index);
 		if (ret)
 			done = 1;
 		if (ret > 0)
 			ret = 0;
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	}
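
For readers following the series, the caller-side pattern adopted above is
the standard filemap_get_folios_tag() lookup loop. The sketch below is
illustrative only, not gfs2 code: example_walk_tagged() is a hypothetical
helper, and the per-folio locking, dirty, and writeback checks that
gfs2_write_jdata_batch() performs are elided.

	#include <linux/pagemap.h>
	#include <linux/pagevec.h>
	#include <linux/sched.h>

	/* Hypothetical helper showing the folio_batch lookup loop. */
	static void example_walk_tagged(struct address_space *mapping,
					pgoff_t index, pgoff_t end,
					xa_mark_t tag)
	{
		struct folio_batch fbatch;
		unsigned int i, nr_folios;

		folio_batch_init(&fbatch);
		while ((nr_folios = filemap_get_folios_tag(mapping, &index,
						end, tag, &fbatch)) != 0) {
			for (i = 0; i < nr_folios; i++) {
				struct folio *folio = fbatch.folios[i];

				/*
				 * An entry may be a large folio spanning
				 * several pages; folio_nr_pages() reports
				 * how many. gfs2_write_jdata_batch() sums
				 * this to size its transaction and advances
				 * done_index by it rather than by 1.
				 */
				(void)folio_nr_pages(folio);
			}
			folio_batch_release(&fbatch);
			cond_resched();
		}
	}

The release after each round matters: filemap_get_folios_tag() takes a
reference on every folio it returns, and folio_batch_release() drops those
references before the next lookup refills the batch.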