From patchwork Thu Sep 1 22:01:20 2022
X-Patchwork-Submitter: "Vishal Moola (Oracle)"
X-Patchwork-Id: 12963316
From: "Vishal Moola (Oracle)"
To: linux-fsdevel@vger.kernel.org
Cc: linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-btrfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    linux-cifs@vger.kernel.org, linux-ext4@vger.kernel.org,
    linux-f2fs-devel@lists.sourceforge.net, cluster-devel@redhat.com,
    linux-nilfs@vger.kernel.org, linux-mm@kvack.org,
    "Vishal Moola (Oracle)"
Subject: [PATCH 05/23] afs: Convert afs_writepages_region() to use filemap_get_folios_tag()
Date: Thu, 1 Sep 2022 15:01:20 -0700
Message-Id: <20220901220138.182896-6-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220901220138.182896-1-vishal.moola@gmail.com>
References: <20220901220138.182896-1-vishal.moola@gmail.com>

Convert afs_writepages_region() to use folios throughout. This is in
preparation for the removal of find_get_pages_range_tag(). The function
now writes back every folio in a batch before looking up the next
batch, rather than fetching a new set for every single write.

Signed-off-by: Vishal Moola (Oracle)
Tested-by: David Howells
---
 fs/afs/write.c | 114 +++++++++++++++++++++++++------------------------
 1 file changed, 59 insertions(+), 55 deletions(-)

diff --git a/fs/afs/write.c b/fs/afs/write.c
index 9ebdd36eaf2f..c17dbd82a38c 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -699,82 +699,86 @@ static int afs_writepages_region(struct address_space *mapping,
                                  loff_t start, loff_t end, loff_t *_next)
 {
         struct folio *folio;
-        struct page *head_page;
+        struct folio_batch fbatch;
         ssize_t ret;
+        unsigned int i;
         int n, skips = 0;
 
         _enter("%llx,%llx,", start, end);
 
+        folio_batch_init(&fbatch);
         do {
                 pgoff_t index = start / PAGE_SIZE;
 
-                n = find_get_pages_range_tag(mapping, &index, end / PAGE_SIZE,
-                                             PAGECACHE_TAG_DIRTY, 1, &head_page);
+                n = filemap_get_folios_tag(mapping, &index, end / PAGE_SIZE,
+                                PAGECACHE_TAG_DIRTY, &fbatch);
+
                 if (!n)
                         break;
+                for (i = 0; i < n; i++) {
+                        folio = fbatch.folios[i];
+                        start = folio_pos(folio); /* May regress with THPs */
 
-                folio = page_folio(head_page);
-                start = folio_pos(folio); /* May regress with THPs */
-
-                _debug("wback %lx", folio_index(folio));
+                        _debug("wback %lx", folio_index(folio));
 
-                /* At this point we hold neither the i_pages lock nor the
-                 * page lock: the page may be truncated or invalidated
-                 * (changing page->mapping to NULL), or even swizzled
-                 * back from swapper_space to tmpfs file mapping
-                 */
-                if (wbc->sync_mode != WB_SYNC_NONE) {
-                        ret = folio_lock_killable(folio);
-                        if (ret < 0) {
-                                folio_put(folio);
-                                return ret;
-                        }
-                } else {
-                        if (!folio_trylock(folio)) {
-                                folio_put(folio);
-                                return 0;
+                        /* At this point we hold neither the i_pages lock nor the
+                         * page lock: the page may be truncated or invalidated
+                         * (changing page->mapping to NULL), or even swizzled
+                         * back from swapper_space to tmpfs file mapping
+                         */
+                        if (wbc->sync_mode != WB_SYNC_NONE) {
+                                ret = folio_lock_killable(folio);
+                                if (ret < 0) {
+                                        folio_batch_release(&fbatch);
+                                        return ret;
+                                }
+                        } else {
+                                if (!folio_trylock(folio))
+                                        continue;
                         }
-                }
 
-                if (folio_mapping(folio) != mapping ||
-                    !folio_test_dirty(folio)) {
-                        start += folio_size(folio);
-                        folio_unlock(folio);
-                        folio_put(folio);
-                        continue;
-                }
+                        if (folio->mapping != mapping ||
+                            !folio_test_dirty(folio)) {
+                                start += folio_size(folio);
+                                folio_unlock(folio);
+                                continue;
+                        }
 
-                if (folio_test_writeback(folio) ||
-                    folio_test_fscache(folio)) {
-                        folio_unlock(folio);
-                        if (wbc->sync_mode != WB_SYNC_NONE) {
-                                folio_wait_writeback(folio);
-#ifdef CONFIG_AFS_FSCACHE
-                                folio_wait_fscache(folio);
-#endif
-                        } else {
-                                start += folio_size(folio);
+                        if (folio_test_writeback(folio) ||
+                            folio_test_fscache(folio)) {
+                                folio_unlock(folio);
+                                if (wbc->sync_mode != WB_SYNC_NONE) {
+                                        folio_wait_writeback(folio);
+#ifdef CONFIG_AFS_FSCACHE
+                                        folio_wait_fscache(folio);
+#endif
+                                } else {
+                                        start += folio_size(folio);
+                                }
+                                if (wbc->sync_mode == WB_SYNC_NONE) {
+                                        if (skips >= 5 || need_resched()) {
+                                                *_next = start;
+                                                _leave(" = 0 [%llx]", *_next);
+                                                return 0;
+                                        }
+                                        skips++;
+                                }
+                                continue;
                         }
-                        folio_put(folio);
-                        if (wbc->sync_mode == WB_SYNC_NONE) {
-                                if (skips >= 5 || need_resched())
-                                        break;
-                                skips++;
+
+                        if (!folio_clear_dirty_for_io(folio))
+                                BUG();
+                        ret = afs_write_back_from_locked_folio(mapping, wbc,
+                                        folio, start, end);
+                        if (ret < 0) {
+                                _leave(" = %zd", ret);
+                                folio_batch_release(&fbatch);
+                                return ret;
                         }
-                        continue;
-                }
-                if (!folio_clear_dirty_for_io(folio))
-                        BUG();
-                ret = afs_write_back_from_locked_folio(mapping, wbc, folio, start, end);
-                folio_put(folio);
-                if (ret < 0) {
-                        _leave(" = %zd", ret);
-                        return ret;
+                        start += ret;
                 }
-
-                start += ret;
-
+                folio_batch_release(&fbatch);
                 cond_resched();
         } while (wbc->nr_to_write > 0);
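For reference, the shape of the batched tagged-lookup pattern this conversion
adopts is sketched below. This is an illustrative outline only, not part of
the patch: example_walk_dirty_folios() is a made-up name, the per-folio work
(locking, cleaning and writing the folio) is elided, and it assumes the
filemap_get_folios_tag() API introduced earlier in this series.

/*
 * Illustrative sketch, not part of this patch: walk the folios in a
 * mapping that are tagged dirty, one folio_batch at a time.  The
 * per-folio work is elided; afs_writepages_region() locks each folio,
 * clears its dirty bit and starts writeback at that point.
 */
#include <linux/pagemap.h>
#include <linux/pagevec.h>
#include <linux/sched.h>

static void example_walk_dirty_folios(struct address_space *mapping,
                                      pgoff_t index, pgoff_t end)
{
        struct folio_batch fbatch;
        unsigned int i, nr;

        folio_batch_init(&fbatch);
        do {
                /*
                 * Fill fbatch with up to PAGEVEC_SIZE dirty folios;
                 * index is advanced past the folios returned, so the
                 * next call continues where this one left off.
                 */
                nr = filemap_get_folios_tag(mapping, &index, end,
                                            PAGECACHE_TAG_DIRTY, &fbatch);
                if (!nr)
                        break;

                for (i = 0; i < nr; i++) {
                        struct folio *folio = fbatch.folios[i];

                        /* Per-folio work (lock, clean, write back) goes here. */
                        (void)folio;
                }

                /* Drop the references taken by filemap_get_folios_tag(). */
                folio_batch_release(&fbatch);
                cond_resched();
        } while (index <= end);
}

Compared with the old find_get_pages_range_tag(..., 1, &head_page) call, which
pinned a single head page per lookup, this amortises the tagged lookup over a
whole batch and replaces the per-page folio_put() calls with a single
folio_batch_release() once the batch has been processed.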