From patchwork Tue Oct 11 21:56:34 2022
X-Patchwork-Submitter: Vishal Moola <vishal.moola@gmail.com>
X-Patchwork-Id: 13004462
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: akpm@linux-foundation.org
Cc: willy@infradead.org, hughd@google.com, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 4/4] filemap: Remove indices argument from find_lock_entries() and find_get_entries()
Date: Tue, 11 Oct 2022 14:56:34 -0700
Message-Id: <20221011215634.478330-5-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20221011215634.478330-1-vishal.moola@gmail.com>
References: <20221011215634.478330-1-vishal.moola@gmail.com>

The indices array is unnecessary. Folios keep track of their xarray
indices in the folio->index field, which can simply be accessed as
needed. This change removes the indices argument from
find_lock_entries() and find_get_entries(). All of the callers are able
to remove their indices arrays as well.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/filemap.c  |  8 ++------
 mm/internal.h |  4 ++--
 mm/shmem.c    |  6 ++----
 mm/truncate.c | 12 ++++--------
 4 files changed, 10 insertions(+), 20 deletions(-)
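Illustration (a sketch, not part of the diff to apply): the caller-side
change amounts to swapping the parallel indices[] array for a direct
read of folio->index. A minimal before/after fragment, where process()
is a hypothetical stand-in for per-entry work:

	/* Before: a parallel array carried the xarray index of each entry. */
	pgoff_t indices[PAGEVEC_SIZE];

	find_lock_entries(mapping, &index, end, &fbatch, indices);
	for (i = 0; i < folio_batch_count(&fbatch); i++)
		process(indices[i], fbatch.folios[i]);

	/* After: each folio already knows its own index. */
	find_lock_entries(mapping, &index, end, &fbatch);
	for (i = 0; i < folio_batch_count(&fbatch); i++)
		process(fbatch.folios[i]->index, fbatch.folios[i]);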
diff --git a/mm/filemap.c b/mm/filemap.c
index 1b8022c18dc7..1f6be113a214 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2034,7 +2034,6 @@ static inline struct folio *find_get_entry(struct xa_state *xas, pgoff_t max,
  * @start: The starting page cache index
  * @end: The final page index (inclusive).
  * @fbatch: Where the resulting entries are placed.
- * @indices: The cache indices corresponding to the entries in @entries
  *
  * find_get_entries() will search for and return a batch of entries in
  * the mapping. The entries are placed in @fbatch. find_get_entries()
@@ -2050,7 +2049,7 @@ static inline struct folio *find_get_entry(struct xa_state *xas, pgoff_t max,
  * Also updates @start to be positioned after the last found entry
  */
 unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
-		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices)
+		pgoff_t end, struct folio_batch *fbatch)
 {
 	XA_STATE(xas, &mapping->i_pages, *start);
 	unsigned long nr;
@@ -2058,7 +2057,6 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
 
 	rcu_read_lock();
 	while ((folio = find_get_entry(&xas, end, XA_PRESENT)) != NULL) {
-		indices[fbatch->nr] = xas.xa_index;
 		if (!folio_batch_add(fbatch, folio))
 			break;
 	}
@@ -2082,7 +2080,6 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
  * @start: The starting page cache index.
  * @end: The final page index (inclusive).
  * @fbatch: Where the resulting entries are placed.
- * @indices: The cache indices of the entries in @fbatch.
  *
  * find_lock_entries() will return a batch of entries from @mapping.
  * Swap, shadow and DAX entries are included. Folios are returned
@@ -2098,7 +2095,7 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
  * Also updates @start to be positioned after the last found entry
  */
 unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
-		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices)
+		pgoff_t end, struct folio_batch *fbatch)
 {
 	XA_STATE(xas, &mapping->i_pages, *start);
 	unsigned long nr;
@@ -2119,7 +2116,6 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
 			VM_BUG_ON_FOLIO(!folio_contains(folio, xas.xa_index),
 					folio);
 		}
-		indices[fbatch->nr] = xas.xa_index;
 		if (!folio_batch_add(fbatch, folio))
 			break;
 		continue;
diff --git a/mm/internal.h b/mm/internal.h
index 68afdbe7106e..db8d5dfa6d68 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -107,9 +107,9 @@ static inline void force_page_cache_readahead(struct address_space *mapping,
 }
 
 unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
-		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
+		pgoff_t end, struct folio_batch *fbatch);
 unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
-		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
+		pgoff_t end, struct folio_batch *fbatch);
 void filemap_free_folio(struct address_space *mapping, struct folio *folio);
 int truncate_inode_folio(struct address_space *mapping, struct folio *folio);
 bool truncate_inode_partial_folio(struct folio *folio, loff_t start,
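As an aside (illustrative only, not part of the diff): the kernel-doc
above says find_lock_entries() returns folios locked and with a
reference held, while value entries (swap, shadow, DAX) carry neither.
A skeleton consumer honouring that contract, modelled loosely on the
mm/truncate.c callers below, might look like:

	struct folio_batch fbatch;
	pgoff_t index = start;
	int i;

	folio_batch_init(&fbatch);
	while (find_lock_entries(mapping, &index, end, &fbatch)) {
		for (i = 0; i < folio_batch_count(&fbatch); i++) {
			struct folio *folio = fbatch.folios[i];

			/* Value entries carry no lock and no reference. */
			if (xa_is_value(folio))
				continue;
			/* ... operate on the locked folio ... */
			folio_unlock(folio);
		}
		/* Strip value entries, then drop the refs on real folios. */
		folio_batch_remove_exceptionals(&fbatch);
		folio_batch_release(&fbatch);
		cond_resched();
	}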
diff --git a/mm/shmem.c b/mm/shmem.c
index 8240e066edfc..ad6b5adf04ac 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -907,7 +907,6 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	pgoff_t start = (lstart + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	pgoff_t end = (lend + 1) >> PAGE_SHIFT;
 	struct folio_batch fbatch;
-	pgoff_t indices[PAGEVEC_SIZE];
 	struct folio *folio;
 	bool same_folio;
 	long nr_swaps_freed = 0;
@@ -923,7 +922,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	folio_batch_init(&fbatch);
 	index = start;
 	while (index < end && find_lock_entries(mapping, &index, end - 1,
-			&fbatch, indices)) {
+			&fbatch)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			folio = fbatch.folios[i];
 
@@ -973,8 +972,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	while (index < end) {
 		cond_resched();
 
-		if (!find_get_entries(mapping, &index, end - 1, &fbatch,
-				indices)) {
+		if (!find_get_entries(mapping, &index, end - 1, &fbatch)) {
 			/* If all gone or hole-punch or unfalloc, we're done */
 			if (index == start || end != -1)
 				break;
diff --git a/mm/truncate.c b/mm/truncate.c
index 4e63d885498a..9db247a88483 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -332,7 +332,6 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	pgoff_t		start;		/* inclusive */
 	pgoff_t		end;		/* exclusive */
 	struct folio_batch fbatch;
-	pgoff_t		indices[PAGEVEC_SIZE];
 	pgoff_t		index;
 	int		i;
 	struct folio	*folio;
@@ -361,7 +360,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	folio_batch_init(&fbatch);
 	index = start;
 	while (index < end && find_lock_entries(mapping, &index, end - 1,
-			&fbatch, indices)) {
+			&fbatch)) {
 		truncate_folio_batch_exceptionals(mapping, &fbatch);
 		for (i = 0; i < folio_batch_count(&fbatch); i++)
 			truncate_cleanup_folio(fbatch.folios[i]);
@@ -399,8 +398,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	index = start;
 	while (index < end) {
 		cond_resched();
-		if (!find_get_entries(mapping, &index, end - 1, &fbatch,
-				indices)) {
+		if (!find_get_entries(mapping, &index, end - 1, &fbatch)) {
 			/* If all gone from start onwards, we're done */
 			if (index == start)
 				break;
@@ -497,7 +495,6 @@ EXPORT_SYMBOL(truncate_inode_pages_final);
 unsigned long invalidate_mapping_pagevec(struct address_space *mapping,
 		pgoff_t start, pgoff_t end, unsigned long *nr_pagevec)
 {
-	pgoff_t indices[PAGEVEC_SIZE];
 	struct folio_batch fbatch;
 	pgoff_t index = start;
 	unsigned long ret;
@@ -505,7 +502,7 @@ unsigned long invalidate_mapping_pagevec(struct address_space *mapping,
 	int i;
 
 	folio_batch_init(&fbatch);
-	while (find_lock_entries(mapping, &index, end, &fbatch, indices)) {
+	while (find_lock_entries(mapping, &index, end, &fbatch)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			struct folio *folio = fbatch.folios[i];
 
@@ -620,7 +617,6 @@ static int folio_launder(struct address_space *mapping, struct folio *folio)
 int invalidate_inode_pages2_range(struct address_space *mapping,
 		pgoff_t start, pgoff_t end)
 {
-	pgoff_t indices[PAGEVEC_SIZE];
 	struct folio_batch fbatch;
 	pgoff_t index;
 	int i;
@@ -633,7 +629,7 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 
 	folio_batch_init(&fbatch);
 	index = start;
-	while (find_get_entries(mapping, &index, end, &fbatch, indices)) {
+	while (find_get_entries(mapping, &index, end, &fbatch)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			struct folio *folio = fbatch.folios[i];
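For completeness, a sketch of the unlocked variant under the new
signature: find_get_entries() advances *index past the last entry it
returned, so repeated calls naturally walk the whole range, and each
entry's index now comes from folio->index instead of indices[i]. This
is a hypothetical fragment, with pr_debug() standing in for real
per-entry work:

	struct folio_batch fbatch;
	pgoff_t index = start;
	int i;

	folio_batch_init(&fbatch);
	while (find_get_entries(mapping, &index, end, &fbatch)) {
		for (i = 0; i < folio_batch_count(&fbatch); i++) {
			struct folio *folio = fbatch.folios[i];

			if (xa_is_value(folio))
				continue;	/* no struct folio behind a value entry */
			/* folio->index supplies what indices[i] used to. */
			pr_debug("entry at index %lu\n", folio->index);
		}
		folio_batch_remove_exceptionals(&fbatch);
		folio_batch_release(&fbatch);
		cond_resched();
	}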