From patchwork Wed Jun 21 16:45:45 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, Andrew Morton
Subject: [PATCH 01/13] afs: Convert pagevec to folio_batch in afs_extend_writeback()
Date: Wed, 21 Jun 2023 17:45:45 +0100
Message-Id: <20230621164557.3510324-2-willy@infradead.org>
In-Reply-To: <20230621164557.3510324-1-willy@infradead.org>

Removes a folio->page->folio conversion for each folio that's involved.
More importantly, removes one of the last few uses of a pagevec.
Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/afs/write.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/afs/write.c b/fs/afs/write.c
index 18ccb613dff8..6e68c70d0b22 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -465,7 +465,7 @@ static void afs_extend_writeback(struct address_space *mapping,
 				 bool caching,
 				 unsigned int *_len)
 {
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	struct folio *folio;
 	unsigned long priv;
 	unsigned int psize, filler = 0;
@@ -476,7 +476,7 @@ static void afs_extend_writeback(struct address_space *mapping,
 	unsigned int i;
 	XA_STATE(xas, &mapping->i_pages, index);

-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);

 	do {
 		/* Firstly, we gather up a batch of contiguous dirty pages
@@ -535,7 +535,7 @@ static void afs_extend_writeback(struct address_space *mapping,
 			stop = false;

 		index += folio_nr_pages(folio);
-		if (!pagevec_add(&pvec, &folio->page))
+		if (!folio_batch_add(&fbatch, folio))
 			break;
 		if (stop)
 			break;
@@ -545,14 +545,14 @@ static void afs_extend_writeback(struct address_space *mapping,
 		xas_pause(&xas);
 		rcu_read_unlock();

-		/* Now, if we obtained any pages, we can shift them to being
+		/* Now, if we obtained any folios, we can shift them to being
 		 * writable and mark them for caching.
 		 */
-		if (!pagevec_count(&pvec))
+		if (!folio_batch_count(&fbatch))
 			break;

-		for (i = 0; i < pagevec_count(&pvec); i++) {
-			folio = page_folio(pvec.pages[i]);
+		for (i = 0; i < folio_batch_count(&fbatch); i++) {
+			folio = fbatch.folios[i];
 			trace_afs_folio_dirty(vnode, tracepoint_string("store+"), folio);

 			if (!folio_clear_dirty_for_io(folio))
@@ -565,7 +565,7 @@ static void afs_extend_writeback(struct address_space *mapping,
 			folio_unlock(folio);
 		}

-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	} while (!stop);
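
For readers new to the interface, the batching pattern this series
converges on looks roughly like the sketch below; get_next_folio() and
process_batch() are hypothetical stand-ins for the caller's own logic:

	struct folio_batch fbatch;
	struct folio *folio;

	folio_batch_init(&fbatch);
	while ((folio = get_next_folio()) != NULL) {
		/* folio_batch_add() returns the slots left; 0 means full */
		if (!folio_batch_add(&fbatch, folio)) {
			process_batch(&fbatch);
			/* drops the references and empties the batch */
			folio_batch_release(&fbatch);
		}
	}
	if (folio_batch_count(&fbatch)) {
		process_batch(&fbatch);
		folio_batch_release(&fbatch);
	}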
From patchwork Wed Jun 21 16:45:46 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, Andrew Morton
Subject: [PATCH 02/13] mm: Add __folio_batch_release()
Date: Wed, 21 Jun 2023 17:45:46 +0100
Message-Id: <20230621164557.3510324-3-willy@infradead.org>
In-Reply-To: <20230621164557.3510324-1-willy@infradead.org>

This performs the same role as __pagevec_release(), i.e. skipping the
check for a batch length of 0.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagevec.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index f582f7213ea5..42aad53e382e 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -127,9 +127,15 @@ static inline unsigned folio_batch_add(struct folio_batch *fbatch,
 	return fbatch_space(fbatch);
 }

+static inline void __folio_batch_release(struct folio_batch *fbatch)
+{
+	__pagevec_release((struct pagevec *)fbatch);
+}
+
 static inline void folio_batch_release(struct folio_batch *fbatch)
 {
-	pagevec_release((struct pagevec *)fbatch);
+	if (folio_batch_count(fbatch))
+		__folio_batch_release(fbatch);
 }

 void folio_batch_remove_exceptionals(struct folio_batch *fbatch);
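
The distinction matters in loops that release exactly when the batch
fills: folio_batch_add() returning 0 already proves the batch is
non-empty, so the count check in folio_batch_release() is redundant at
that call site.  A sketch of the two uses:

	if (!folio_batch_add(&fbatch, folio))
		__folio_batch_release(&fbatch);	/* known non-empty */
	...
	folio_batch_release(&fbatch);		/* final flush: may be empty */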
From patchwork Wed Jun 21 16:45:47 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, Andrew Morton
Subject: [PATCH 03/13] scatterlist: Add sg_set_folio()
Date: Wed, 21 Jun 2023 17:45:47 +0100
Message-Id: <20230621164557.3510324-4-willy@infradead.org>
In-Reply-To: <20230621164557.3510324-1-willy@infradead.org>

This wrapper for sg_set_page() lets drivers add folios to a scatterlist
more easily.  We could, perhaps, do better by using a different page in
the folio if offset is larger than UINT_MAX, but let's hope we get a
better data structure than this before we need to care about such
large folios.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/scatterlist.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index ec46d8e8e49d..77df3d7b18a6 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -141,6 +141,30 @@ static inline void sg_set_page(struct scatterlist *sg, struct page *page,
 	sg->length = len;
 }

+/**
+ * sg_set_folio - Set sg entry to point at given folio
+ * @sg:		 SG entry
+ * @folio:	 The folio
+ * @len:	 Length of data
+ * @offset:	 Offset into folio
+ *
+ * Description:
+ *   Use this function to set an sg entry pointing at a folio, never assign
+ *   the folio directly. We encode sg table information in the lower bits
+ *   of the folio pointer. See sg_page() for looking up the page belonging
+ *   to an sg entry.
+ *
+ **/
+static inline void sg_set_folio(struct scatterlist *sg, struct folio *folio,
+				size_t len, size_t offset)
+{
+	WARN_ON_ONCE(len > UINT_MAX);
+	WARN_ON_ONCE(offset > UINT_MAX);
+	sg_assign_page(sg, &folio->page);
+	sg->offset = offset;
+	sg->length = len;
+}
+
 static inline struct page *sg_page(struct scatterlist *sg)
 {
 #ifdef CONFIG_DEBUG_SG
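
A sketch of how a driver might fill a scatterlist from an array of
folios with the new helper ('folios' and 'nr' are hypothetical, and
error handling is trimmed):

	struct sg_table st;
	struct scatterlist *sg;
	unsigned int i;
	int ret;

	ret = sg_alloc_table(&st, nr, GFP_KERNEL);
	if (ret)
		return ret;
	for_each_sg(st.sgl, sg, nr, i)
		sg_set_folio(sg, folios[i], folio_size(folios[i]), 0);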
From patchwork Wed Jun 21 16:45:48 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, Andrew Morton
Subject: [PATCH 04/13] i915: Convert shmem_sg_free_table() to use a folio_batch
Date: Wed, 21 Jun 2023 17:45:48 +0100
Message-Id: <20230621164557.3510324-5-willy@infradead.org>
In-Reply-To: <20230621164557.3510324-1-willy@infradead.org>

Remove a few hidden compound_head() calls by converting the returned
page to a folio once and using the folio APIs.  We also only increment
the refcount on the folio once instead of once for each page.  Ideally,
we would have a for_each_sgt_folio macro, but until then this will do.

Signed-off-by: Matthew Wilcox (Oracle)
---
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 55 +++++++++++++----------
 1 file changed, 31 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 33d5d5178103..8f1633c3fb93 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -19,13 +19,13 @@
 #include "i915_trace.h"

 /*
- * Move pages to appropriate lru and release the pagevec, decrementing the
- * ref count of those pages.
+ * Move folios to appropriate lru and release the batch, decrementing the
+ * ref count of those folios.
 */
-static void check_release_pagevec(struct pagevec *pvec)
+static void check_release_folio_batch(struct folio_batch *fbatch)
 {
-	check_move_unevictable_pages(pvec);
-	__pagevec_release(pvec);
+	check_move_unevictable_folios(fbatch);
+	__folio_batch_release(fbatch);
 	cond_resched();
 }

@@ -33,24 +33,29 @@ void shmem_sg_free_table(struct sg_table *st, struct address_space *mapping,
 			 bool dirty, bool backup)
 {
 	struct sgt_iter sgt_iter;
-	struct pagevec pvec;
+	struct folio_batch fbatch;
+	struct folio *last = NULL;
 	struct page *page;

 	mapping_clear_unevictable(mapping);

-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 	for_each_sgt_page(page, sgt_iter, st) {
-		if (dirty)
-			set_page_dirty(page);
+		struct folio *folio = page_folio(page);

+		if (folio == last)
+			continue;
+		last = folio;
+		if (dirty)
+			folio_mark_dirty(folio);
 		if (backup)
-			mark_page_accessed(page);
+			folio_mark_accessed(folio);

-		if (!pagevec_add(&pvec, page))
-			check_release_pagevec(&pvec);
+		if (!folio_batch_add(&fbatch, folio))
+			check_release_folio_batch(&fbatch);
 	}
-	if (pagevec_count(&pvec))
-		check_release_pagevec(&pvec);
+	if (fbatch.nr)
+		check_release_folio_batch(&fbatch);

 	sg_free_table(st);
 }
@@ -63,8 +68,7 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
 	unsigned int page_count; /* restricted by sg_alloc_table */
 	unsigned long i;
 	struct scatterlist *sg;
-	struct page *page;
-	unsigned long last_pfn = 0;	/* suppress gcc warning */
+	unsigned long next_pfn = 0;	/* suppress gcc warning */
 	gfp_t noreclaim;
 	int ret;

@@ -95,6 +99,7 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
 	sg = st->sgl;
 	st->nents = 0;
 	for (i = 0; i < page_count; i++) {
+		struct folio *folio;
 		const unsigned int shrink[] = {
 			I915_SHRINK_BOUND | I915_SHRINK_UNBOUND,
 			0,
@@ -103,12 +108,12 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
 		do {
 			cond_resched();
-			page = shmem_read_mapping_page_gfp(mapping, i, gfp);
-			if (!IS_ERR(page))
+			folio = shmem_read_folio_gfp(mapping, i, gfp);
+			if (!IS_ERR(folio))
 				break;

 			if (!*s) {
-				ret = PTR_ERR(page);
+				ret = PTR_ERR(folio);
 				goto err_sg;
 			}

@@ -147,19 +152,21 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
 		if (!i ||
 		    sg->length >= max_segment ||
-		    page_to_pfn(page) != last_pfn + 1) {
+		    folio_pfn(folio) != next_pfn) {
 			if (i)
 				sg = sg_next(sg);

 			st->nents++;
-			sg_set_page(sg, page, PAGE_SIZE, 0);
+			sg_set_folio(sg, folio, folio_size(folio), 0);
 		} else {
-			sg->length += PAGE_SIZE;
+			/* XXX: could overflow? */
+			sg->length += folio_size(folio);
 		}
-		last_pfn = page_to_pfn(page);
+		next_pfn = folio_pfn(folio) + folio_nr_pages(folio);
+		i += folio_nr_pages(folio) - 1;

 		/* Check that the i965g/gm workaround works.
 		 */
-		GEM_BUG_ON(gfp & __GFP_DMA32 && last_pfn >= 0x00100000UL);
+		GEM_BUG_ON(gfp & __GFP_DMA32 && next_pfn >= 0x00100000UL);
 	}
 	if (sg) /* loop terminated early; short sg table */
 		sg_mark_end(sg);

From patchwork Wed Jun 21 16:45:49 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, Andrew Morton
Subject: [PATCH 05/13] drm: Convert drm_gem_put_pages() to use a folio_batch
Date: Wed, 21 Jun 2023 17:45:49 +0100
Message-Id: <20230621164557.3510324-6-willy@infradead.org>
In-Reply-To: <20230621164557.3510324-1-willy@infradead.org>

Remove a few hidden compound_head() calls by converting the returned
page to a folio once and using the folio APIs.
Signed-off-by: Matthew Wilcox (Oracle)
---
 drivers/gpu/drm/drm_gem.c | 68 ++++++++++++++++++++++-----------------
 1 file changed, 39 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 1a5a2cd0d4ec..78dcae201cc6 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -496,13 +496,13 @@ int drm_gem_create_mmap_offset(struct drm_gem_object *obj)
 EXPORT_SYMBOL(drm_gem_create_mmap_offset);

 /*
- * Move pages to appropriate lru and release the pagevec, decrementing the
- * ref count of those pages.
+ * Move folios to appropriate lru and release the folios, decrementing the
+ * ref count of those folios.
 */
-static void drm_gem_check_release_pagevec(struct pagevec *pvec)
+static void drm_gem_check_release_batch(struct folio_batch *fbatch)
 {
-	check_move_unevictable_pages(pvec);
-	__pagevec_release(pvec);
+	check_move_unevictable_folios(fbatch);
+	__folio_batch_release(fbatch);
 	cond_resched();
 }

@@ -534,10 +534,10 @@ static void drm_gem_check_release_pagevec(struct pagevec *pvec)
 struct page **drm_gem_get_pages(struct drm_gem_object *obj)
 {
 	struct address_space *mapping;
-	struct page *p, **pages;
-	struct pagevec pvec;
-	int i, npages;
+	struct page **pages;
+	struct folio *folio;
+	struct folio_batch fbatch;
+	int i, j, npages;

 	if (WARN_ON(!obj->filp))
 		return ERR_PTR(-EINVAL);
@@ -559,11 +559,14 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)

 	mapping_set_unevictable(mapping);

-	for (i = 0; i < npages; i++) {
-		p = shmem_read_mapping_page(mapping, i);
-		if (IS_ERR(p))
+	i = 0;
+	while (i < npages) {
+		folio = shmem_read_folio_gfp(mapping, i,
+				mapping_gfp_mask(mapping));
+		if (IS_ERR(folio))
 			goto fail;
-		pages[i] = p;
+		for (j = 0; j < folio_nr_pages(folio); j++, i++)
+			pages[i] = folio_file_page(folio, i);

 		/* Make sure shmem keeps __GFP_DMA32 allocated pages in the
 		 * correct region during swapin. Note that this requires
 		 * __GFP_DMA32 to be set in mapping_gfp_mask(inode->i_mapping)
 		 * so shmem can relocate pages during swapin if required.
 		 */
 		BUG_ON(mapping_gfp_constraint(mapping, __GFP_DMA32) &&
-		       (page_to_pfn(p) >= 0x00100000UL));
+		       (folio_pfn(folio) >= 0x00100000UL));
 	}

 	return pages;

 fail:
 	mapping_clear_unevictable(mapping);
-	pagevec_init(&pvec);
-	while (i--) {
-		if (!pagevec_add(&pvec, pages[i]))
-			drm_gem_check_release_pagevec(&pvec);
+	folio_batch_init(&fbatch);
+	j = 0;
+	while (j < i) {
+		struct folio *f = page_folio(pages[j]);
+
+		if (!folio_batch_add(&fbatch, f))
+			drm_gem_check_release_batch(&fbatch);
+		j += folio_nr_pages(f);
 	}
-	if (pagevec_count(&pvec))
-		drm_gem_check_release_pagevec(&pvec);
+	if (fbatch.nr)
+		drm_gem_check_release_batch(&fbatch);
 	kvfree(pages);

-	return ERR_CAST(p);
+	return ERR_CAST(folio);
 }
 EXPORT_SYMBOL(drm_gem_get_pages);

@@ -603,7 +609,7 @@ void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
 {
 	int i, npages;
 	struct address_space *mapping;
-	struct pagevec pvec;
+	struct folio_batch fbatch;

 	mapping = file_inode(obj->filp)->i_mapping;
 	mapping_clear_unevictable(mapping);
@@ -616,23 +622,27 @@ void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,

 	npages = obj->size >> PAGE_SHIFT;

-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 	for (i = 0; i < npages; i++) {
+		struct folio *folio;
+
 		if (!pages[i])
 			continue;

+		folio = page_folio(pages[i]);
+
 		if (dirty)
-			set_page_dirty(pages[i]);
+			folio_mark_dirty(folio);

 		if (accessed)
-			mark_page_accessed(pages[i]);
+			folio_mark_accessed(folio);

 		/* Undo the reference we took when populating the table */
-		if (!pagevec_add(&pvec, pages[i]))
-			drm_gem_check_release_pagevec(&pvec);
+		if (!folio_batch_add(&fbatch, folio))
+			drm_gem_check_release_batch(&fbatch);
+		i += folio_nr_pages(folio) - 1;
 	}
-	if (pagevec_count(&pvec))
-		drm_gem_check_release_pagevec(&pvec);
+	if (folio_batch_count(&fbatch))
+		drm_gem_check_release_batch(&fbatch);
 	kvfree(pages);
 }
From patchwork Wed Jun 21 16:45:50 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, Andrew Morton
Subject: [PATCH 06/13] mm: Remove check_move_unevictable_pages()
Date: Wed, 21 Jun 2023 17:45:50 +0100
Message-Id: <20230621164557.3510324-7-willy@infradead.org>
In-Reply-To: <20230621164557.3510324-1-willy@infradead.org>

All callers have now been converted to call
check_move_unevictable_folios().

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/swap.h |  1 -
 mm/vmscan.c          | 17 -----------------
 2 files changed, 18 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index ce7e82cf787f..456546443f1f 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -439,7 +439,6 @@ static inline bool node_reclaim_enabled(void)
 }

 void check_move_unevictable_folios(struct folio_batch *fbatch);
-void check_move_unevictable_pages(struct pagevec *pvec);

 extern void __meminit kswapd_run(int nid);
 extern void __meminit kswapd_stop(int nid);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 45d17c7cc555..f8dd1db15897 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -8074,23 +8074,6 @@ int node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned int order)
 }
 #endif

-void check_move_unevictable_pages(struct pagevec *pvec)
-{
-	struct folio_batch fbatch;
-	unsigned i;
-
-	folio_batch_init(&fbatch);
-	for (i = 0; i < pvec->nr; i++) {
-		struct page *page = pvec->pages[i];
-
-		if (PageTransTail(page))
-			continue;
-		folio_batch_add(&fbatch, page_folio(page));
-	}
-	check_move_unevictable_folios(&fbatch);
-}
-EXPORT_SYMBOL_GPL(check_move_unevictable_pages);
-
 /**
  * check_move_unevictable_folios - Move evictable folios to appropriate zone
  * lru list
From patchwork Wed Jun 21 16:45:51 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, Andrew Morton
Subject: [PATCH 07/13] pagevec: Rename fbatch_space()
Date: Wed, 21 Jun 2023 17:45:51 +0100
Message-Id: <20230621164557.3510324-8-willy@infradead.org>
In-Reply-To: <20230621164557.3510324-1-willy@infradead.org>

This should always have been called folio_batch_space().

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagevec.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 42aad53e382e..3a9d29dd28a3 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -105,7 +105,7 @@ static inline unsigned int folio_batch_count(struct folio_batch *fbatch)
 	return fbatch->nr;
 }

-static inline unsigned int fbatch_space(struct folio_batch *fbatch)
+static inline unsigned int folio_batch_space(struct folio_batch *fbatch)
 {
 	return PAGEVEC_SIZE - fbatch->nr;
 }
@@ -124,7 +124,7 @@ static inline unsigned folio_batch_add(struct folio_batch *fbatch,
 		struct folio *folio)
 {
 	fbatch->folios[fbatch->nr++] = folio;
-	return fbatch_space(fbatch);
+	return folio_batch_space(fbatch);
 }

 static inline void __folio_batch_release(struct folio_batch *fbatch)
From patchwork Wed Jun 21 16:45:52 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, Andrew Morton
Subject: [PATCH 08/13] i915: Convert i915_gpu_error to use a folio_batch
Date: Wed, 21 Jun 2023 17:45:52 +0100
Message-Id: <20230621164557.3510324-9-willy@infradead.org>
In-Reply-To: <20230621164557.3510324-1-willy@infradead.org>

Remove one of the last remaining users of pagevec.

Signed-off-by: Matthew Wilcox (Oracle)
---
 drivers/gpu/drm/i915/i915_gpu_error.c | 50 +++++++++++++--------------
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index ec368e700235..0c38bfb60c9a 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -187,64 +187,64 @@ i915_error_printer(struct drm_i915_error_state_buf *e)
 }

 /* single threaded page allocator with a reserved stash for emergencies */
-static void pool_fini(struct pagevec *pv)
+static void pool_fini(struct folio_batch *fbatch)
 {
-	pagevec_release(pv);
+	folio_batch_release(fbatch);
 }

-static int pool_refill(struct pagevec *pv, gfp_t gfp)
+static int pool_refill(struct folio_batch *fbatch, gfp_t gfp)
 {
-	while (pagevec_space(pv)) {
-		struct page *p;
+	while (folio_batch_space(fbatch)) {
+		struct folio *folio;

-		p = alloc_page(gfp);
-		if (!p)
+		folio = folio_alloc(gfp, 0);
+		if (!folio)
 			return -ENOMEM;

-		pagevec_add(pv, p);
+		folio_batch_add(fbatch, folio);
 	}

 	return 0;
 }

-static int pool_init(struct pagevec *pv, gfp_t gfp)
+static int pool_init(struct folio_batch *fbatch, gfp_t gfp)
 {
 	int err;

-	pagevec_init(pv);
+	folio_batch_init(fbatch);

-	err = pool_refill(pv, gfp);
+	err = pool_refill(fbatch, gfp);
 	if (err)
-		pool_fini(pv);
+		pool_fini(fbatch);

 	return err;
 }

-static void *pool_alloc(struct pagevec *pv, gfp_t gfp)
+static void *pool_alloc(struct folio_batch *fbatch, gfp_t gfp)
 {
-	struct page *p;
+	struct folio *folio;

-	p = alloc_page(gfp);
-	if (!p && pagevec_count(pv))
-		p = pv->pages[--pv->nr];
+	folio = folio_alloc(gfp, 0);
+	if (!folio && folio_batch_count(fbatch))
+		folio = fbatch->folios[--fbatch->nr];

-	return p ? page_address(p) : NULL;
+	return folio ? folio_address(folio) : NULL;
 }

-static void pool_free(struct pagevec *pv, void *addr)
+static void pool_free(struct folio_batch *fbatch, void *addr)
 {
-	struct page *p = virt_to_page(addr);
+	struct folio *folio = virt_to_folio(addr);

-	if (pagevec_space(pv))
-		pagevec_add(pv, p);
+	if (folio_batch_space(fbatch))
+		folio_batch_add(fbatch, folio);
 	else
-		__free_page(p);
+		folio_put(folio);
 }

 #ifdef CONFIG_DRM_I915_COMPRESS_ERROR

 struct i915_vma_compress {
-	struct pagevec pool;
+	struct folio_batch pool;
 	struct z_stream_s zstream;
 	void *tmp;
 };
@@ -381,7 +381,7 @@ static void err_compression_marker(struct drm_i915_error_state_buf *m)
 #else

 struct i915_vma_compress {
-	struct pagevec pool;
+	struct folio_batch pool;
 };

 static bool compress_init(struct i915_vma_compress *c)
From patchwork Wed Jun 21 16:45:53 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, Andrew Morton
Subject: [PATCH 09/13] net: Convert sunrpc from pagevec to folio_batch
Date: Wed, 21 Jun 2023 17:45:53 +0100
Message-Id: <20230621164557.3510324-10-willy@infradead.org>
In-Reply-To: <20230621164557.3510324-1-willy@infradead.org>

Remove the last usage of pagevecs.  There is a slight change here; we
now free the folio_batch as soon as it fills up instead of freeing the
folio_batch when we try to add a page to a full batch.  This should
have no effect in practice.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Chuck Lever
---
 include/linux/sunrpc/svc.h |  2 +-
 net/sunrpc/svc.c           | 10 +++++-----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index c2807e301790..f8751118c122 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -222,7 +222,7 @@ struct svc_rqst {
 	struct page *		*rq_next_page; /* next reply page to use */
 	struct page *		*rq_page_end;  /* one past the last page */

-	struct pagevec		rq_pvec;
+	struct folio_batch	rq_fbatch;
 	struct kvec		rq_vec[RPCSVC_MAXPAGES]; /* generally useful.. */
 	struct bio_vec		rq_bvec[RPCSVC_MAXPAGES];

diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index e7c101290425..587811a002c9 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -640,7 +640,7 @@ svc_rqst_alloc(struct svc_serv *serv, struct svc_pool *pool, int node)
 	if (!rqstp)
 		return rqstp;

-	pagevec_init(&rqstp->rq_pvec);
+	folio_batch_init(&rqstp->rq_fbatch);

 	__set_bit(RQ_BUSY, &rqstp->rq_flags);
 	rqstp->rq_server = serv;
@@ -851,9 +851,9 @@ bool svc_rqst_replace_page(struct svc_rqst *rqstp, struct page *page)
 	}

 	if (*rqstp->rq_next_page) {
-		if (!pagevec_space(&rqstp->rq_pvec))
-			__pagevec_release(&rqstp->rq_pvec);
-		pagevec_add(&rqstp->rq_pvec, *rqstp->rq_next_page);
+		if (!folio_batch_add(&rqstp->rq_fbatch,
+				page_folio(*rqstp->rq_next_page)))
+			__folio_batch_release(&rqstp->rq_fbatch);
 	}

 	get_page(page);
@@ -887,7 +887,7 @@ void svc_rqst_release_pages(struct svc_rqst *rqstp)

 void svc_rqst_free(struct svc_rqst *rqstp)
 {
-	pagevec_release(&rqstp->rq_pvec);
+	folio_batch_release(&rqstp->rq_fbatch);
 	svc_release_buffer(rqstp);
 	if (rqstp->rq_scratch_page)
 		put_page(rqstp->rq_scratch_page);
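
To make the ordering change concrete: with PAGEVEC_SIZE == 15, the old
code dropped the held references only on the sixteenth add attempt,
while the new code drops them as soon as the fifteenth add fills the
batch.  Side by side:

	/* old: check space first, so the release happens one add late */
	if (!pagevec_space(&rqstp->rq_pvec))
		__pagevec_release(&rqstp->rq_pvec);
	pagevec_add(&rqstp->rq_pvec, *rqstp->rq_next_page);

	/* new: add first, release the moment the batch fills */
	if (!folio_batch_add(&rqstp->rq_fbatch,
				page_folio(*rqstp->rq_next_page)))
		__folio_batch_release(&rqstp->rq_fbatch);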
From patchwork Wed Jun 21 16:45:54 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, Andrew Morton
Subject: [PATCH 10/13] mm: Remove struct pagevec
Date: Wed, 21 Jun 2023 17:45:54 +0100
Message-Id: <20230621164557.3510324-11-willy@infradead.org>
In-Reply-To: <20230621164557.3510324-1-willy@infradead.org>

All users are now converted to use the folio_batch, so we can get rid
of this data structure.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagevec.h | 63 +++--------------------------------------
 mm/swap.c               | 18 ++++++------
 2 files changed, 13 insertions(+), 68 deletions(-)

diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 3a9d29dd28a3..87cc678adc85 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -3,65 +3,18 @@
  * include/linux/pagevec.h
  *
  * In many places it is efficient to batch an operation up against multiple
- * pages.  A pagevec is a multipage container which is used for that.
+ * folios.  A folio_batch is a container which is used for that.
 */

 #ifndef _LINUX_PAGEVEC_H
 #define _LINUX_PAGEVEC_H

-#include <linux/xarray.h>
+#include <linux/types.h>

-/* 15 pointers + header align the pagevec structure to a power of two */
+/* 15 pointers + header align the folio_batch structure to a power of two */
 #define PAGEVEC_SIZE	15

-struct page;
 struct folio;
-struct address_space;
-
-/* Layout must match folio_batch */
-struct pagevec {
-	unsigned char nr;
-	bool percpu_pvec_drained;
-	struct page *pages[PAGEVEC_SIZE];
-};
-
-void __pagevec_release(struct pagevec *pvec);
-
-static inline void pagevec_init(struct pagevec *pvec)
-{
-	pvec->nr = 0;
-	pvec->percpu_pvec_drained = false;
-}
-
-static inline void pagevec_reinit(struct pagevec *pvec)
-{
-	pvec->nr = 0;
-}
-
-static inline unsigned pagevec_count(struct pagevec *pvec)
-{
-	return pvec->nr;
-}
-
-static inline unsigned pagevec_space(struct pagevec *pvec)
-{
-	return PAGEVEC_SIZE - pvec->nr;
-}
-
-/*
- * Add a page to a pagevec.  Returns the number of slots still available.
- */
-static inline unsigned pagevec_add(struct pagevec *pvec, struct page *page)
-{
-	pvec->pages[pvec->nr++] = page;
-	return pagevec_space(pvec);
-}
-
-static inline void pagevec_release(struct pagevec *pvec)
-{
-	if (pagevec_count(pvec))
-		__pagevec_release(pvec);
-}

 /**
  * struct folio_batch - A collection of folios.
@@ -78,11 +31,6 @@ struct folio_batch {
 	struct folio *folios[PAGEVEC_SIZE];
 };

-/* Layout must match pagevec */
-static_assert(sizeof(struct pagevec) == sizeof(struct folio_batch));
-static_assert(offsetof(struct pagevec, pages) ==
-		offsetof(struct folio_batch, folios));
-
 /**
  * folio_batch_init() - Initialise a batch of folios
  * @fbatch: The folio batch.
@@ -127,10 +75,7 @@ static inline unsigned folio_batch_add(struct folio_batch *fbatch,
 	return folio_batch_space(fbatch);
 }

-static inline void __folio_batch_release(struct folio_batch *fbatch)
-{
-	__pagevec_release((struct pagevec *)fbatch);
-}
+void __folio_batch_release(struct folio_batch *pvec);

 static inline void folio_batch_release(struct folio_batch *fbatch)
 {
diff --git a/mm/swap.c b/mm/swap.c
index 423199ee8478..10348c1cf9c5 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -1044,25 +1044,25 @@ void release_pages(release_pages_arg arg, int nr)
 EXPORT_SYMBOL(release_pages);

 /*
- * The pages which we're about to release may be in the deferred lru-addition
+ * The folios which we're about to release may be in the deferred lru-addition
 * queues.  That would prevent them from really being freed right now.  That's
- * OK from a correctness point of view but is inefficient - those pages may be
+ * OK from a correctness point of view but is inefficient - those folios may be
 * cache-warm and we want to give them back to the page allocator ASAP.
 *
- * So __pagevec_release() will drain those queues here.
+ * So __folio_batch_release() will drain those queues here.
 * folio_batch_move_lru() calls folios_put() directly to avoid
 * mutual recursion.
 */
-void __pagevec_release(struct pagevec *pvec)
+void __folio_batch_release(struct folio_batch *fbatch)
 {
-	if (!pvec->percpu_pvec_drained) {
+	if (!fbatch->percpu_pvec_drained) {
 		lru_add_drain();
-		pvec->percpu_pvec_drained = true;
+		fbatch->percpu_pvec_drained = true;
 	}
-	release_pages(pvec->pages, pagevec_count(pvec));
-	pagevec_reinit(pvec);
+	release_pages(fbatch->folios, folio_batch_count(fbatch));
+	folio_batch_reinit(fbatch);
 }
-EXPORT_SYMBOL(__pagevec_release);
+EXPORT_SYMBOL(__folio_batch_release);

 /**
  * folio_batch_remove_exceptionals() - Prune non-folios from a batch.
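
For any out-of-tree code still on the removed API, the mechanical shape
of a conversion is roughly the following (an illustrative sketch, not
taken from this series):

-	struct pagevec pvec;
+	struct folio_batch fbatch;

-	pagevec_init(&pvec);
-	if (!pagevec_add(&pvec, page))
-		pagevec_release(&pvec);
+	folio_batch_init(&fbatch);
+	if (!folio_batch_add(&fbatch, page_folio(page)))
+		folio_batch_release(&fbatch);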
From patchwork Wed Jun 21 16:45:55 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, Andrew Morton
Subject: [PATCH 11/13] mm: Rename invalidate_mapping_pagevec to mapping_try_invalidate
Date: Wed, 21 Jun 2023 17:45:55 +0100
Message-Id: <20230621164557.3510324-12-willy@infradead.org>
In-Reply-To: <20230621164557.3510324-1-willy@infradead.org>

We don't use pagevecs for the LRU cache any more, and we don't know
that the failed invalidations were due to the folio being in an LRU
cache.  So rename it to be more accurate.
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/fadvise.c  | 16 +++++++---------
 mm/internal.h |  4 ++--
 mm/truncate.c | 25 ++++++++++++-------------
 3 files changed, 21 insertions(+), 24 deletions(-)

diff --git a/mm/fadvise.c b/mm/fadvise.c
index fb7c5f43fd2a..f684ffd7f9c9 100644
--- a/mm/fadvise.c
+++ b/mm/fadvise.c
@@ -143,7 +143,7 @@ int generic_fadvise(struct file *file, loff_t offset, loff_t len, int advice)
 		}

 		if (end_index >= start_index) {
-			unsigned long nr_pagevec = 0;
+			unsigned long nr_failed = 0;

 			/*
 			 * It's common to FADV_DONTNEED right after
@@ -156,17 +156,15 @@ int generic_fadvise(struct file *file, loff_t offset, loff_t len, int advice)
 			 */
 			lru_add_drain();

-			invalidate_mapping_pagevec(mapping,
-						start_index, end_index,
-						&nr_pagevec);
+			mapping_try_invalidate(mapping, start_index, end_index,
+					&nr_failed);

 			/*
-			 * If fewer pages were invalidated than expected then
-			 * it is possible that some of the pages were on
-			 * a per-cpu pagevec for a remote CPU. Drain all
-			 * pagevecs and try again.
+			 * The failures may be due to the folio being
+			 * in the LRU cache of a remote CPU. Drain all
+			 * caches and try again.
 			 */
-			if (nr_pagevec) {
+			if (nr_failed) {
 				lru_add_drain_all();
 				invalidate_mapping_pages(mapping, start_index,
 						end_index);
diff --git a/mm/internal.h b/mm/internal.h
index 119a8241f9d9..2ff7587b4045 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -133,8 +133,8 @@ int truncate_inode_folio(struct address_space *mapping, struct folio *folio);
 bool truncate_inode_partial_folio(struct folio *folio, loff_t start,
 		loff_t end);
 long invalidate_inode_page(struct page *page);
-unsigned long invalidate_mapping_pagevec(struct address_space *mapping,
-		pgoff_t start, pgoff_t end, unsigned long *nr_pagevec);
+unsigned long mapping_try_invalidate(struct address_space *mapping,
+		pgoff_t start, pgoff_t end, unsigned long *nr_failed);

 /**
  * folio_evictable - Test whether a folio is evictable.
diff --git a/mm/truncate.c b/mm/truncate.c
index 86de31ed4d32..4a917570887f 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -486,18 +486,17 @@ void truncate_inode_pages_final(struct address_space *mapping)
 EXPORT_SYMBOL(truncate_inode_pages_final);

 /**
- * invalidate_mapping_pagevec - Invalidate all the unlocked pages of one inode
- * @mapping: the address_space which holds the pages to invalidate
+ * mapping_try_invalidate - Invalidate all the evictable folios of one inode
+ * @mapping: the address_space which holds the folios to invalidate
 * @start: the offset 'from' which to invalidate
 * @end: the offset 'to' which to invalidate (inclusive)
- * @nr_pagevec: invalidate failed page number for caller
+ * @nr_failed: How many folio invalidations failed
 *
- * This helper is similar to invalidate_mapping_pages(), except that it accounts
- * for pages that are likely on a pagevec and counts them in @nr_pagevec, which
- * will be used by the caller.
+ * This function is similar to invalidate_mapping_pages(), except that it
+ * returns the number of folios which could not be evicted in @nr_failed.
 */
-unsigned long invalidate_mapping_pagevec(struct address_space *mapping,
-		pgoff_t start, pgoff_t end, unsigned long *nr_pagevec)
+unsigned long mapping_try_invalidate(struct address_space *mapping,
+		pgoff_t start, pgoff_t end, unsigned long *nr_failed)
 {
 	pgoff_t indices[PAGEVEC_SIZE];
 	struct folio_batch fbatch;
@@ -527,9 +526,9 @@ unsigned long invalidate_mapping_pagevec(struct address_space *mapping,
 		 */
 		if (!ret) {
 			deactivate_file_folio(folio);
-			/* It is likely on the pagevec of a remote CPU */
-			if (nr_pagevec)
-				(*nr_pagevec)++;
+			/* Likely in the lru cache of a remote CPU */
+			if (nr_failed)
+				(*nr_failed)++;
 		}
 		count += ret;
 	}
@@ -552,12 +551,12 @@ unsigned long invalidate_mapping_pagevec(struct address_space *mapping,
 * If you want to remove all the pages of one inode, regardless of
 * their use and writeback state, use truncate_inode_pages().
 *
- * Return: the number of the cache entries that were invalidated
+ * Return: The number of indices that had their contents invalidated
 */
 unsigned long invalidate_mapping_pages(struct address_space *mapping,
 		pgoff_t start, pgoff_t end)
 {
-	return invalidate_mapping_pagevec(mapping, start, end, NULL);
+	return mapping_try_invalidate(mapping, start, end, NULL);
 }
 EXPORT_SYMBOL(invalidate_mapping_pages);
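
The retry pattern this enables, distilled from the generic_fadvise()
hunk above (mapping, start, and end as in that caller): a nonzero
nr_failed suggests folios pinned by a remote CPU's LRU cache, so drain
all CPUs and invalidate once more:

	unsigned long nr_failed = 0;

	lru_add_drain();
	mapping_try_invalidate(mapping, start, end, &nr_failed);
	if (nr_failed) {
		lru_add_drain_all();
		invalidate_mapping_pages(mapping, start, end);
	}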
From patchwork Wed Jun 21 16:45:56 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13287686
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, Andrew Morton
Cc: linux-nfs@vger.kernel.org, netdev@vger.kernel.org,
    intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    dri-devel@lists.freedesktop.org, "Matthew Wilcox (Oracle)",
    linux-fsdevel@vger.kernel.org, linux-afs@lists.infradead.org
Subject: [PATCH 12/13] mm: Remove references to pagevec
Date: Wed, 21 Jun 2023 17:45:56 +0100
Message-Id: <20230621164557.3510324-13-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230621164557.3510324-1-willy@infradead.org>
References: <20230621164557.3510324-1-willy@infradead.org>

Most of these should just refer to the LRU cache rather than the
data structure used to implement the LRU cache.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/huge_memory.c    | 2 +-
 mm/khugepaged.c     | 6 +++---
 mm/ksm.c            | 6 +++---
 mm/memory.c         | 6 +++---
 mm/migrate_device.c | 2 +-
 mm/swap.c           | 2 +-
 mm/truncate.c       | 2 +-
 7 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e94fe292f30a..eb3678360b97 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1344,7 +1344,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 	/*
 	 * See do_wp_page(): we can only reuse the folio exclusively if
 	 * there are no additional references. Note that we always drain
-	 * the LRU pagevecs immediately after adding a THP.
+	 * the LRU cache immediately after adding a THP.
 	 */
 	if (folio_ref_count(folio) >
 			1 + folio_test_swapcache(folio) * folio_nr_pages(folio))
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 5ef1e08b2a06..3beb4ad2ee5e 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1051,7 +1051,7 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
 	if (pte)
 		pte_unmap(pte);
 
-	/* Drain LRU add pagevec to remove extra pin on the swapped in pages */
+	/* Drain LRU cache to remove extra pin on the swapped in pages */
 	if (swapped_in)
 		lru_add_drain();
 
@@ -1972,7 +1972,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 				result = SCAN_FAIL;
 				goto xa_unlocked;
 			}
-			/* drain pagevecs to help isolate_lru_page() */
+			/* drain lru cache to help isolate_lru_page() */
 			lru_add_drain();
 			page = folio_file_page(folio, index);
 		} else if (trylock_page(page)) {
@@ -1988,7 +1988,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 			page_cache_sync_readahead(mapping, &file->f_ra,
 						  file, index,
 						  end - index);
-			/* drain pagevecs to help isolate_lru_page() */
+			/* drain lru cache to help isolate_lru_page() */
 			lru_add_drain();
 			page = find_lock_page(mapping, index);
 			if (unlikely(page == NULL)) {
diff --git a/mm/ksm.c b/mm/ksm.c
index d995779dc1fe..ba266359da55 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -932,7 +932,7 @@ static int remove_stable_node(struct ksm_stable_node *stable_node)
 	 * The stable node did not yet appear stale to get_ksm_page(),
 	 * since that allows for an unmapped ksm page to be recognized
 	 * right up until it is freed; but the node is safe to remove.
-	 * This page might be in a pagevec waiting to be freed,
+	 * This page might be in an LRU cache waiting to be freed,
 	 * or it might be PageSwapCache (perhaps under writeback),
 	 * or it might have been removed from swapcache a moment ago.
 	 */
@@ -2303,8 +2303,8 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 	trace_ksm_start_scan(ksm_scan.seqnr, ksm_rmap_items);
 
 	/*
-	 * A number of pages can hang around indefinitely on per-cpu
-	 * pagevecs, raised page count preventing write_protect_page
+	 * A number of pages can hang around indefinitely in per-cpu
+	 * LRU cache, raised page count preventing write_protect_page
 	 * from merging them.  Though it doesn't really matter much,
 	 * it is puzzling to see some stuck in pages_volatile until
 	 * other activity jostles them out, and they also prevented
diff --git a/mm/memory.c b/mm/memory.c
index 9f2723749f55..d034c52071f4 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3404,8 +3404,8 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 			goto copy;
 		if (!folio_test_lru(folio))
 			/*
-			 * Note: We cannot easily detect+handle references from
-			 * remote LRU pagevecs or references to LRU folios.
+			 * We cannot easily detect+handle references from
+			 * remote LRU caches or references to LRU folios.
 			 */
 			lru_add_drain();
 		if (folio_ref_count(folio) > 1 + folio_test_swapcache(folio))
@@ -3883,7 +3883,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	 * If we want to map a page that's in the swapcache writable, we
 	 * have to detect via the refcount if we're really the exclusive
 	 * owner. Try removing the extra reference from the local LRU
-	 * pagevecs if required.
+	 * caches if required.
 	 */
 	if ((vmf->flags & FAULT_FLAG_WRITE) && folio == swapcache &&
 	    !folio_test_ksm(folio) && !folio_test_lru(folio))
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 02d272b909b5..8365158460ed 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -376,7 +376,7 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
 		/* ZONE_DEVICE pages are not on LRU */
 		if (!is_zone_device_page(page)) {
 			if (!PageLRU(page) && allow_drain) {
-				/* Drain CPU's pagevec */
+				/* Drain CPU's lru cache */
 				lru_add_drain_all();
 				allow_drain = false;
 			}
diff --git a/mm/swap.c b/mm/swap.c
index 10348c1cf9c5..cd8f0150ba3a 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -76,7 +76,7 @@ static DEFINE_PER_CPU(struct cpu_fbatches, cpu_fbatches) = {
 
 /*
  * This path almost never happens for VM activity - pages are normally freed
- * via pagevecs. But it gets used by networking - and for compound pages.
+ * in batches. But it gets used by networking - and for compound pages.
 */
 static void __page_cache_release(struct folio *folio)
 {
diff --git a/mm/truncate.c b/mm/truncate.c
index 4a917570887f..95d1291d269b 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -565,7 +565,7 @@ EXPORT_SYMBOL(invalidate_mapping_pages);
 * refcount. We do this because invalidate_inode_pages2() needs stronger
 * invalidation guarantees, and cannot afford to leave pages behind because
 * shrink_page_list() has a temp ref on them, or because they're transiently
- * sitting in the folio_add_lru() pagevecs.
+ * sitting in the folio_add_lru() caches.
 */
 static int invalidate_complete_folio2(struct address_space *mapping,
 					struct folio *folio)
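The pattern these comments describe is worth spelling out: a folio recently
added to the LRU may still sit in a per-CPU batch that holds a reference to
it, so a "no other references" check can spuriously fail until that batch
is flushed. A minimal sketch of the check, modelled loosely on the
do_wp_page() hunk above; can_reuse_exclusively() is a hypothetical helper,
and the refcount arithmetic assumes a small (order-0) folio:

	static bool can_reuse_exclusively(struct folio *folio)
	{
		/*
		 * If the folio has not reached the LRU list yet, it may be
		 * pinned by a local per-CPU LRU cache; drain so that the
		 * refcount below reflects only real users.
		 */
		if (!folio_test_lru(folio))
			lru_add_drain();

		/* One ref for us, plus one if it is in the swapcache. */
		return folio_ref_count(folio) ==
			1 + folio_test_swapcache(folio);
	}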
From patchwork Wed Jun 21 16:45:57 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13287675
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, Andrew Morton
Cc: linux-nfs@vger.kernel.org, netdev@vger.kernel.org,
    intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    dri-devel@lists.freedesktop.org, "Matthew Wilcox (Oracle)",
    linux-fsdevel@vger.kernel.org, linux-afs@lists.infradead.org
Subject: [PATCH 13/13] mm: Remove unnecessary pagevec includes
Date: Wed, 21 Jun 2023 17:45:57 +0100
Message-Id: <20230621164557.3510324-14-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230621164557.3510324-1-willy@infradead.org>
References: <20230621164557.3510324-1-willy@infradead.org>

These files no longer need pagevec.h, mostly due to function
declarations being moved out of it.
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/fadvise.c        | 1 -
 mm/memory_hotplug.c | 1 -
 mm/migrate.c        | 1 -
 mm/readahead.c      | 1 -
 mm/swap_state.c     | 1 -
 5 files changed, 5 deletions(-)

diff --git a/mm/fadvise.c b/mm/fadvise.c
index f684ffd7f9c9..6c39d42f16dc 100644
--- a/mm/fadvise.c
+++ b/mm/fadvise.c
@@ -14,7 +14,6 @@
 #include <...>
 #include <...>
 #include <...>
-#include <linux/pagevec.h>
 #include <...>
 #include <...>
 #include <...>
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 35db4108bb15..3f231cf1b410 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -13,7 +13,6 @@
 #include <...>
 #include <...>
 #include <...>
-#include <linux/pagevec.h>
 #include <...>
 #include <...>
 #include <...>
diff --git a/mm/migrate.c b/mm/migrate.c
index ce35afdbc1e3..ee26f4a962ef 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -21,7 +21,6 @@
 #include <...>
 #include <...>
 #include <...>
-#include <linux/pagevec.h>
 #include <...>
 #include <...>
 #include <...>
diff --git a/mm/readahead.c b/mm/readahead.c
index 47afbca1d122..a9c999aa19af 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -120,7 +120,6 @@
 #include <...>
 #include <...>
 #include <...>
-#include <linux/pagevec.h>
 #include <...>
 #include <...>
 #include <...>
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 4a5c7b748051..f8ea7015bad4 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -16,7 +16,6 @@
 #include <...>
 #include <...>
 #include <...>
-#include <linux/pagevec.h>
 #include <...>
 #include <...>
 #include <...>
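Since this series completes the pagevec-to-folio_batch transition, a brief
sketch of the folio_batch idiom may help readers unfamiliar with it. This
is an illustrative fragment, not code from any of these patches;
gather_next_folio() and process_one() are hypothetical stand-ins for
whatever produces and consumes the folios:

	static void process_folios(void)
	{
		struct folio_batch fbatch;
		struct folio *folio;
		unsigned int i;

		folio_batch_init(&fbatch);	/* start with an empty batch */
		while ((folio = gather_next_folio()) != NULL) {
			/* Returns the space remaining; 0 means the batch is full. */
			if (!folio_batch_add(&fbatch, folio))
				break;
		}
		for (i = 0; i < folio_batch_count(&fbatch); i++)
			process_one(fbatch.folios[i]);
		/* Drop the references the batch holds on its folios. */
		folio_batch_release(&fbatch);
	}

Note that, unlike the old pagevec, the batch stores struct folio pointers
directly, which is what removes the folio->page->folio round trips this
series complains about.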