From patchwork Mon Oct 17 16:17:59 2022
X-Patchwork-Submitter: Vishal Moola
X-Patchwork-Id: 13009020
From: "Vishal Moola (Oracle)"
To: akpm@linux-foundation.org
Cc: willy@infradead.org, hughd@google.com, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 "Vishal Moola (Oracle)"
Subject: [PATCH v3 1/2] filemap: find_lock_entries() now updates start offset
Date: Mon, 17 Oct 2022 09:17:59 -0700
Message-Id: <20221017161800.2003-2-vishal.moola@gmail.com>
In-Reply-To: <20221017161800.2003-1-vishal.moola@gmail.com>
References: <20221017161800.2003-1-vishal.moola@gmail.com>
Initially, find_lock_entries() was being passed the start offset as a
value. That left the calculation of the offset to the callers, and led
to complexity as each caller had to keep track of the index itself.

Now find_lock_entries() takes a pointer to the start offset and updates
the value to point directly after the last entry found. If no entry is
found, the offset is not changed. This gets rid of several ad-hoc
calculations the callers used to track the start offset.

Signed-off-by: Vishal Moola (Oracle)
---
 mm/filemap.c  | 15 ++++++++++++---
 mm/internal.h |  2 +-
 mm/shmem.c    |  8 ++------
 mm/truncate.c | 11 +++--------
 4 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index c943d1b90cc2..f1fec7bf5b15 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2090,16 +2090,16 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
  *
  * Return: The number of entries which were found.
  */
-unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
+unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
 		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices)
 {
-	XA_STATE(xas, &mapping->i_pages, start);
+	XA_STATE(xas, &mapping->i_pages, *start);
 	struct folio *folio;
 
 	rcu_read_lock();
 	while ((folio = find_get_entry(&xas, end, XA_PRESENT))) {
 		if (!xa_is_value(folio)) {
-			if (folio->index < start)
+			if (folio->index < *start)
 				goto put;
 			if (folio->index + folio_nr_pages(folio) - 1 > end)
 				goto put;
@@ -2122,6 +2122,15 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
 	}
 	rcu_read_unlock();
 
+	if (folio_batch_count(fbatch)) {
+		unsigned long nr = 1;
+		int idx = folio_batch_count(fbatch) - 1;
+
+		folio = fbatch->folios[idx];
+		if (!xa_is_value(folio) && !folio_test_hugetlb(folio))
+			nr = folio_nr_pages(folio);
+		*start = indices[idx] + nr;
+	}
 	return folio_batch_count(fbatch);
 }
 
diff --git a/mm/internal.h b/mm/internal.h
index 785409805ed7..14625de6714b 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -104,7 +104,7 @@ static inline void force_page_cache_readahead(struct address_space *mapping,
 	force_page_cache_ra(&ractl, nr_to_read);
 }
 
-unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
+unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
 		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
 unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
 		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
diff --git a/mm/shmem.c b/mm/shmem.c
index 42e5888bf84d..9e17a2b0dc43 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -932,21 +932,18 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 
 	folio_batch_init(&fbatch);
 	index = start;
-	while (index < end && find_lock_entries(mapping, index, end - 1,
+	while (index < end && find_lock_entries(mapping, &index, end - 1,
 			&fbatch, indices)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			folio = fbatch.folios[i];
 
-			index = indices[i];
-
 			if (xa_is_value(folio)) {
 				if (unfalloc)
 					continue;
 				nr_swaps_freed += !shmem_free_swap(mapping,
-								index, folio);
+							indices[i], folio);
 				continue;
 			}
-			index += folio_nr_pages(folio) - 1;
 
 			if (!unfalloc || !folio_test_uptodate(folio))
 				truncate_inode_folio(mapping, folio);
@@ -955,7 +952,6 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 		folio_batch_remove_exceptionals(&fbatch);
 		folio_batch_release(&fbatch);
 		cond_resched();
-		index++;
 	}
 
 	same_folio = (lstart >> PAGE_SHIFT) == (lend >> PAGE_SHIFT);
diff --git a/mm/truncate.c b/mm/truncate.c
index 0b0708bf935f..9fbe282e70ba 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -361,9 +361,8 @@ void truncate_inode_pages_range(struct address_space *mapping,
 
 	folio_batch_init(&fbatch);
 	index = start;
-	while (index < end && find_lock_entries(mapping, index, end - 1,
+	while (index < end && find_lock_entries(mapping, &index, end - 1,
 			&fbatch, indices)) {
-		index = indices[folio_batch_count(&fbatch) - 1] + 1;
 		truncate_folio_batch_exceptionals(mapping, &fbatch, indices);
 		for (i = 0; i < folio_batch_count(&fbatch); i++)
 			truncate_cleanup_folio(fbatch.folios[i]);
@@ -510,20 +509,17 @@ unsigned long invalidate_mapping_pagevec(struct address_space *mapping,
 	int i;
 
 	folio_batch_init(&fbatch);
-	while (find_lock_entries(mapping, index, end, &fbatch, indices)) {
+	while (find_lock_entries(mapping, &index, end, &fbatch, indices)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			struct folio *folio = fbatch.folios[i];
 
 			/* We rely upon deletion not changing folio->index */
-			index = indices[i];
-
 			if (xa_is_value(folio)) {
 				count += invalidate_exceptional_entry(mapping,
-								      index,
-								      folio);
+							indices[i], folio);
 				continue;
 			}
-			index += folio_nr_pages(folio) - 1;
 
 			ret = mapping_evict_folio(mapping, folio);
 			folio_unlock(folio);
@@ -542,7 +538,6 @@ unsigned long invalidate_mapping_pagevec(struct address_space *mapping,
 		folio_batch_remove_exceptionals(&fbatch);
 		folio_batch_release(&fbatch);
 		cond_resched();
-		index++;
 	}
 	return count;
 }

From patchwork Mon Oct 17 16:18:00 2022
X-Patchwork-Submitter: Vishal Moola
X-Patchwork-Id: 13009021
From: "Vishal Moola (Oracle)"
To: akpm@linux-foundation.org
Cc: willy@infradead.org, hughd@google.com, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 "Vishal Moola (Oracle)"
Subject: [PATCH v3 2/2] filemap: find_get_entries() now updates start offset
Date: Mon, 17 Oct 2022 09:18:00 -0700
Message-Id: <20221017161800.2003-3-vishal.moola@gmail.com>
In-Reply-To: <20221017161800.2003-1-vishal.moola@gmail.com>
References: <20221017161800.2003-1-vishal.moola@gmail.com>
Initially, find_get_entries() was being passed the start offset as a
value. That left the calculation of the offset to the callers, and led
to complexity as each caller had to keep track of the index itself.

Now find_get_entries() takes a pointer to the start offset and updates
the value to point directly after the last entry found. If no entry is
found, the offset is not changed. This gets rid of several ad-hoc
calculations the callers used to track the start offset.

Signed-off-by: Vishal Moola (Oracle)
---
 mm/filemap.c  | 13 +++++++++++--
 mm/internal.h |  2 +-
 mm/shmem.c    | 11 ++++-------
 mm/truncate.c | 19 +++++++------------
 4 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index f1fec7bf5b15..804d335504f0 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2053,10 +2053,10 @@ static inline struct folio *find_get_entry(struct xa_state *xas, pgoff_t max,
  *
  * Return: The number of entries which were found.
  */
-unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
+unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
 		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices)
 {
-	XA_STATE(xas, &mapping->i_pages, start);
+	XA_STATE(xas, &mapping->i_pages, *start);
 	struct folio *folio;
 
 	rcu_read_lock();
@@ -2067,6 +2067,15 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
 	}
 	rcu_read_unlock();
 
+	if (folio_batch_count(fbatch)) {
+		unsigned long nr = 1;
+		int idx = folio_batch_count(fbatch) - 1;
+
+		folio = fbatch->folios[idx];
+		if (!xa_is_value(folio) && !folio_test_hugetlb(folio))
+			nr = folio_nr_pages(folio);
+		*start = indices[idx] + nr;
+	}
 	return folio_batch_count(fbatch);
 }
 
diff --git a/mm/internal.h b/mm/internal.h
index 14625de6714b..e87982cf1d48 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -106,7 +106,7 @@ static inline void force_page_cache_readahead(struct address_space *mapping,
 unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
 		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
-unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
+unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
 		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
 void filemap_free_folio(struct address_space *mapping, struct folio *folio);
 int truncate_inode_folio(struct address_space *mapping, struct folio *folio);
diff --git a/mm/shmem.c b/mm/shmem.c
index 9e17a2b0dc43..8c3c2ac15759 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -983,7 +983,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	while (index < end) {
 		cond_resched();
 
-		if (!find_get_entries(mapping, index, end - 1, &fbatch,
+		if (!find_get_entries(mapping, &index, end - 1, &fbatch,
 				indices)) {
 			/* If all gone or hole-punch or unfalloc, we're done */
 			if (index == start || end != -1)
@@ -995,13 +995,12 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			folio = fbatch.folios[i];
 
-			index = indices[i];
 			if (xa_is_value(folio)) {
 				if (unfalloc)
 					continue;
-				if (shmem_free_swap(mapping, index, folio)) {
+				if (shmem_free_swap(mapping, indices[i], folio)) {
 					/* Swap was replaced by page: retry */
-					index--;
+					index = indices[i];
 					break;
 				}
 				nr_swaps_freed++;
@@ -1014,19 +1013,17 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 				if (folio_mapping(folio) != mapping) {
 					/* Page was replaced by swap: retry */
 					folio_unlock(folio);
-					index--;
+					index = indices[i];
 					break;
 				}
 				VM_BUG_ON_FOLIO(folio_test_writeback(folio),
 						folio);
 				truncate_inode_folio(mapping, folio);
 			}
-			index = folio->index + folio_nr_pages(folio) - 1;
 			folio_unlock(folio);
 		}
 		folio_batch_remove_exceptionals(&fbatch);
 		folio_batch_release(&fbatch);
-		index++;
 	}
 
 	spin_lock_irq(&info->lock);
diff --git a/mm/truncate.c b/mm/truncate.c
index 9fbe282e70ba..faeeca45d4ed 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -400,7 +400,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	index = start;
 	while (index < end) {
 		cond_resched();
-		if (!find_get_entries(mapping, index, end - 1, &fbatch,
+		if (!find_get_entries(mapping, &index, end - 1, &fbatch,
 				indices)) {
 			/* If all gone from start onwards, we're done */
 			if (index == start)
@@ -414,21 +414,18 @@ void truncate_inode_pages_range(struct address_space *mapping,
 			struct folio *folio = fbatch.folios[i];
 
 			/* We rely upon deletion not changing page->index */
-			index = indices[i];
 			if (xa_is_value(folio))
 				continue;
 
 			folio_lock(folio);
-			VM_BUG_ON_FOLIO(!folio_contains(folio, index), folio);
+			VM_BUG_ON_FOLIO(!folio_contains(folio, indices[i]), folio);
 			folio_wait_writeback(folio);
 			truncate_inode_folio(mapping, folio);
 			folio_unlock(folio);
-			index = folio_index(folio) + folio_nr_pages(folio) - 1;
 		}
 		truncate_folio_batch_exceptionals(mapping, &fbatch, indices);
 		folio_batch_release(&fbatch);
-		index++;
 	}
 }
 EXPORT_SYMBOL(truncate_inode_pages_range);
@@ -636,16 +633,15 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 
 	folio_batch_init(&fbatch);
 	index = start;
-	while (find_get_entries(mapping, index, end, &fbatch, indices)) {
+	while (find_get_entries(mapping, &index, end, &fbatch, indices)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			struct folio *folio = fbatch.folios[i];
 
 			/* We rely upon deletion not changing folio->index */
-			index = indices[i];
-
 			if (xa_is_value(folio)) {
 				if (!invalidate_exceptional_entry2(mapping,
-						index, folio))
+						indices[i], folio))
 					ret = -EBUSY;
 				continue;
 			}
@@ -655,13 +651,13 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 				 * If folio is mapped, before taking its lock,
 				 * zap the rest of the file in one hit.
 				 */
-				unmap_mapping_pages(mapping, index,
-						(1 + end - index), false);
+				unmap_mapping_pages(mapping, indices[i],
+						(1 + end - indices[i]), false);
 				did_range_unmap = 1;
 			}
 
 			folio_lock(folio);
-			VM_BUG_ON_FOLIO(!folio_contains(folio, index), folio);
+			VM_BUG_ON_FOLIO(!folio_contains(folio, indices[i]), folio);
 			if (folio->mapping != mapping) {
 				folio_unlock(folio);
 				continue;
@@ -684,7 +680,6 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 		folio_batch_remove_exceptionals(&fbatch);
 		folio_batch_release(&fbatch);
 		cond_resched();
-		index++;
 	}
 	/*
 	 * For DAX we invalidate page tables after invalidating page cache. We