From patchwork Sun Jan 2 21:57:13 2022
X-Patchwork-Id: 12702373
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Andrew Morton
Subject: [PATCH 01/17] mm: Add folio_put_refs()
Date: Sun, 2 Jan 2022 21:57:13 +0000
Message-Id: <20220102215729.2943705-2-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>
References: <20220102215729.2943705-1-willy@infradead.org>

This is like folio_put(), but puts N references at once instead of
just one.  It's like put_page_refs(), but does one atomic operation
instead of two, and is available to more than just gup.c.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 include/linux/mm.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d8b7d7ed14dd..98a10412d581 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1237,6 +1237,26 @@ static inline void folio_put(struct folio *folio)
 		__put_page(&folio->page);
 }
 
+/**
+ * folio_put_refs - Reduce the reference count on a folio.
+ * @folio: The folio.
+ * @refs: The number of references to reduce.
+ *
+ * If the folio's reference count reaches zero, the memory will be
+ * released back to the page allocator and may be used by another
+ * allocation immediately.  Do not access the memory or the struct folio
+ * after calling folio_put_refs() unless you can be sure that these weren't
+ * the last references.
+ *
+ * Context: May be called in process or interrupt context, but not in NMI
+ * context.  May be called while holding a spinlock.
+ */
+static inline void folio_put_refs(struct folio *folio, int refs)
+{
+	if (folio_ref_sub_and_test(folio, refs))
+		__put_page(&folio->page);
+}
+
 static inline void put_page(struct page *page)
 {
 	struct folio *folio = page_folio(page);

From patchwork Sun Jan 2 21:57:14 2022
X-Patchwork-Id: 12702374
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Andrew Morton
Subject: [PATCH 02/17] mm: Add folio_pincount_available()
Date: Sun, 2 Jan 2022 21:57:14 +0000
Message-Id: <20220102215729.2943705-3-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>
References: <20220102215729.2943705-1-willy@infradead.org>

Convert hpage_pincount_available() into folio_pincount_available() and
turn hpage_pincount_available() into a wrapper.  We don't need to check
folio_test_large() before checking folio_order() as folio_order()
includes a check of folio_test_large().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 include/linux/mm.h | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 98a10412d581..269b5484d66e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -927,15 +927,19 @@ static inline void destroy_compound_page(struct page *page)
 	compound_page_dtors[page[1].compound_dtor](page);
 }
 
-static inline bool hpage_pincount_available(struct page *page)
+static inline bool folio_pincount_available(struct folio *folio)
 {
 	/*
-	 * Can the page->hpage_pinned_refcount field be used? That field is in
+	 * Can the folio->hpage_pinned_refcount field be used? That field is in
	 * the 3rd page of the compound page, so the smallest (2-page) compound
	 * pages cannot support it.
	 */
-	page = compound_head(page);
-	return PageCompound(page) && compound_order(page) > 1;
+	return folio_order(folio) > 1;
+}
+
+static inline bool hpage_pincount_available(struct page *page)
+{
+	return folio_pincount_available(page_folio(page));
 }
 
 static inline int head_compound_pincount(struct page *head)

From patchwork Sun Jan 2 21:57:15 2022
X-Patchwork-Id: 12702369
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Andrew Morton
Subject: [PATCH 03/17] mm: Add folio_pincount_ptr()
Date: Sun, 2 Jan 2022 21:57:15 +0000
Message-Id: <20220102215729.2943705-4-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>
References: <20220102215729.2943705-1-willy@infradead.org>

This is the folio equivalent of compound_pincount_ptr().
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 include/linux/mm_types.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index c3a6e6209600..09d9e2c4a2c5 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -309,6 +309,12 @@ static inline atomic_t *compound_mapcount_ptr(struct page *page)
 	return &page[1].compound_mapcount;
 }
 
+static inline atomic_t *folio_pincount_ptr(struct folio *folio)
+{
+	struct page *tail = &folio->page + 2;
+	return &tail->hpage_pinned_refcount;
+}
+
 static inline atomic_t *compound_pincount_ptr(struct page *page)
 {
 	return &page[2].hpage_pinned_refcount;

From patchwork Sun Jan 2 21:57:16 2022
X-Patchwork-Id: 12702372
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Andrew Morton
Subject: [PATCH 04/17] mm: Convert page_maybe_dma_pinned() to use a folio
Date: Sun, 2 Jan 2022 21:57:16 +0000
Message-Id: <20220102215729.2943705-5-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>
References: <20220102215729.2943705-1-willy@infradead.org>

Replaces three calls to compound_head() with one.  This removes the
last user of compound_pincount(), so remove that helper too.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 include/linux/mm.h | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 269b5484d66e..00dcea53bb96 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -947,13 +947,6 @@ static inline int head_compound_pincount(struct page *head)
 	return atomic_read(compound_pincount_ptr(head));
 }
 
-static inline int compound_pincount(struct page *page)
-{
-	VM_BUG_ON_PAGE(!hpage_pincount_available(page), page);
-	page = compound_head(page);
-	return head_compound_pincount(page);
-}
-
 static inline void set_compound_order(struct page *page, unsigned int order)
 {
 	page[1].compound_order = order;
@@ -1347,18 +1340,20 @@ void unpin_user_pages(struct page **pages, unsigned long npages);
  */
 static inline bool page_maybe_dma_pinned(struct page *page)
 {
-	if (hpage_pincount_available(page))
-		return compound_pincount(page) > 0;
+	struct folio *folio = page_folio(page);
+
+	if (folio_pincount_available(folio))
+		return atomic_read(folio_pincount_ptr(folio)) > 0;
 
 	/*
	 * page_ref_count() is signed. If that refcount overflows, then
	 * page_ref_count() returns a negative value, and callers will avoid
	 * further incrementing the refcount.
	 *
-	 * Here, for that overflow case, use the signed bit to count a little
+	 * Here, for that overflow case, use the sign bit to count a little
	 * bit higher via unsigned math, and thus still get an accurate result.
	 */
-	return ((unsigned int)page_ref_count(compound_head(page))) >=
+	return ((unsigned int)folio_ref_count(folio)) >=
		GUP_PIN_COUNTING_BIAS;
 }

From patchwork Sun Jan 2 21:57:17 2022
X-Patchwork-Id: 12702366
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Andrew Morton
Subject: [PATCH 05/17] gup: Add try_get_folio()
Date: Sun, 2 Jan 2022 21:57:17 +0000
Message-Id: <20220102215729.2943705-6-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>
References: <20220102215729.2943705-1-willy@infradead.org>

This replaces try_get_compound_head().  It includes a small
optimisation for the race where a folio is split between being looked
up from its tail page and the reference count being obtained.  Before,
it returned NULL, which presumably triggered a retry under the
mmap_lock, whereas now it will retry without the lock.  Finding a
frozen page will still return NULL.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 mm/gup.c | 69 +++++++++++++++++++++++++++++---------------------------
 1 file changed, 36 insertions(+), 33 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 2c51e9748a6a..58e5cfaaa676 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -29,12 +29,11 @@ struct follow_page_context {
 	unsigned int page_mask;
 };
 
-static void hpage_pincount_add(struct page *page, int refs)
+static void folio_pincount_add(struct folio *folio, int refs)
 {
-	VM_BUG_ON_PAGE(!hpage_pincount_available(page), page);
-	VM_BUG_ON_PAGE(page != compound_head(page), page);
+	VM_BUG_ON_FOLIO(!folio_pincount_available(folio), folio);
 
-	atomic_add(refs, compound_pincount_ptr(page));
+	atomic_add(refs, folio_pincount_ptr(folio));
 }
 
 static void hpage_pincount_sub(struct page *page, int refs)
@@ -63,33 +62,35 @@ static void put_page_refs(struct page *page, int refs)
 }
 
 /*
- * Return the compound head page with ref appropriately incremented,
+ * Return the folio with ref appropriately incremented,
 * or NULL if that failed.
 */
-static inline struct page *try_get_compound_head(struct page *page, int refs)
+static inline struct folio *try_get_folio(struct page *page, int refs)
 {
-	struct page *head = compound_head(page);
+	struct folio *folio;
 
-	if (WARN_ON_ONCE(page_ref_count(head) < 0))
+retry:
+	folio = page_folio(page);
+	if (WARN_ON_ONCE(folio_ref_count(folio) < 0))
 		return NULL;
-	if (unlikely(!page_cache_add_speculative(head, refs)))
+	if (unlikely(!folio_ref_try_add_rcu(folio, refs)))
 		return NULL;
 
 	/*
-	 * At this point we have a stable reference to the head page; but it
-	 * could be that between the compound_head() lookup and the refcount
-	 * increment, the compound page was split, in which case we'd end up
-	 * holding a reference on a page that has nothing to do with the page
+	 * At this point we have a stable reference to the folio; but it
+	 * could be that between calling page_folio() and the refcount
+	 * increment, the folio was split, in which case we'd end up
+	 * holding a reference on a folio that has nothing to do with the page
	 * we were given anymore.
-	 * So now that the head page is stable, recheck that the pages still
-	 * belong together.
+	 * So now that the folio is stable, recheck that the page still
+	 * belongs to this folio.
	 */
-	if (unlikely(compound_head(page) != head)) {
-		put_page_refs(head, refs);
-		return NULL;
+	if (unlikely(page_folio(page) != folio)) {
+		folio_put_refs(folio, refs);
+		goto retry;
 	}
 
-	return head;
+	return folio;
 }
 
 /**
@@ -128,8 +129,10 @@ struct page *try_grab_compound_head(struct page *page,
 			 int refs, unsigned int flags)
 {
 	if (flags & FOLL_GET)
-		return try_get_compound_head(page, refs);
+		return &try_get_folio(page, refs)->page;
 	else if (flags & FOLL_PIN) {
+		struct folio *folio;
+
 		/*
		 * Can't do FOLL_LONGTERM + FOLL_PIN gup fast path if not in a
		 * right zone, so fail and let the caller fall back to the slow
@@ -143,29 +146,29 @@ struct page *try_grab_compound_head(struct page *page,
		 * CAUTION: Don't use compound_head() on the page before this
		 * point, the result won't be stable.
		 */
-		page = try_get_compound_head(page, refs);
-		if (!page)
+		folio = try_get_folio(page, refs);
+		if (!folio)
 			return NULL;
 
 		/*
-		 * When pinning a compound page of order > 1 (which is what
+		 * When pinning a folio of order > 1 (which is what
		 * hpage_pincount_available() checks for), use an exact count to
-		 * track it, via hpage_pincount_add/_sub().
+		 * track it, via folio_pincount_add/_sub().
		 *
-		 * However, be sure to *also* increment the normal page refcount
-		 * field at least once, so that the page really is pinned.
+		 * However, be sure to *also* increment the normal folio refcount
+		 * field at least once, so that the folio really is pinned.
		 * That's why the refcount from the earlier
-		 * try_get_compound_head() is left intact.
+		 * try_get_folio() is left intact.
		 */
-		if (hpage_pincount_available(page))
-			hpage_pincount_add(page, refs);
+		if (folio_pincount_available(folio))
+			folio_pincount_add(folio, refs);
 		else
-			page_ref_add(page, refs * (GUP_PIN_COUNTING_BIAS - 1));
+			folio_ref_add(folio,
+					refs * (GUP_PIN_COUNTING_BIAS - 1));
 
-		mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_ACQUIRED,
-				    refs);
+		node_stat_mod_folio(folio, NR_FOLL_PIN_ACQUIRED, refs);
 
-		return page;
+		return &folio->page;
 	}
 	WARN_ON_ONCE(1);

From patchwork Sun Jan 2 21:57:18 2022
X-Patchwork-Id: 12702379
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Andrew Morton
Subject: [PATCH 06/17] mm: Remove page_cache_add_speculative() and
 page_cache_get_speculative()
Date: Sun, 2 Jan 2022 21:57:18 +0000
Message-Id: <20220102215729.2943705-7-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>
References: <20220102215729.2943705-1-willy@infradead.org>

These wrappers have no more callers, so delete them.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 include/linux/mm.h      |  7 +++----
 include/linux/pagemap.h | 11 -----------
 2 files changed, 3 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 00dcea53bb96..602de23482ef 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1298,10 +1298,9 @@ static inline void put_page(struct page *page)
  * applications that don't have huge page reference counts, this won't be an
  * issue.
  *
- * Locking: the lockless algorithm described in page_cache_get_speculative()
- * and page_cache_gup_pin_speculative() provides safe operation for
- * get_user_pages and page_mkclean and other calls that race to set up page
- * table entries.
+ * Locking: the lockless algorithm described in folio_try_get_rcu()
+ * provides safe operation for get_user_pages(), page_mkclean() and
+ * other calls that race to set up page table entries.
 */
 #define GUP_PIN_COUNTING_BIAS (1U << 10)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 704cb1b4b15d..4a63176b6417 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -283,17 +283,6 @@ static inline struct inode *folio_inode(struct folio *folio)
 	return folio->mapping->host;
 }
 
-static inline bool page_cache_add_speculative(struct page *page, int count)
-{
-	VM_BUG_ON_PAGE(PageTail(page), page);
-	return folio_ref_try_add_rcu((struct folio *)page, count);
-}
-
-static inline bool page_cache_get_speculative(struct page *page)
-{
-	return page_cache_add_speculative(page, 1);
-}
-
 /**
  * folio_attach_private - Attach private data to a folio.
 * @folio: Folio to attach data to.

From patchwork Sun Jan 2 21:57:19 2022
X-Patchwork-Id: 12702371
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Andrew Morton
Subject: [PATCH 07/17] gup: Add gup_put_folio()
Date: Sun, 2 Jan 2022 21:57:19 +0000
Message-Id: <20220102215729.2943705-8-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>
References: <20220102215729.2943705-1-willy@infradead.org>

put_compound_head() is turned into a call to gup_put_folio().  This
removes the last call to put_page_refs(), so delete it.
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: John Hubbard --- mm/gup.c | 44 +++++++++++++++----------------------------- 1 file changed, 15 insertions(+), 29 deletions(-) diff --git a/mm/gup.c b/mm/gup.c index 58e5cfaaa676..6d827f7d66d8 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -36,29 +36,11 @@ static void folio_pincount_add(struct folio *folio, int refs) atomic_add(refs, folio_pincount_ptr(folio)); } -static void hpage_pincount_sub(struct page *page, int refs) +static void folio_pincount_sub(struct folio *folio, int refs) { - VM_BUG_ON_PAGE(!hpage_pincount_available(page), page); - VM_BUG_ON_PAGE(page != compound_head(page), page); - - atomic_sub(refs, compound_pincount_ptr(page)); -} - -/* Equivalent to calling put_page() @refs times. */ -static void put_page_refs(struct page *page, int refs) -{ -#ifdef CONFIG_DEBUG_VM - if (VM_WARN_ON_ONCE_PAGE(page_ref_count(page) < refs, page)) - return; -#endif + VM_BUG_ON_FOLIO(!folio_pincount_available(folio), folio); - /* - * Calling put_page() for each ref is unnecessarily slow. Only the last - * ref needs a put_page(). 
- */ - if (refs > 1) - page_ref_sub(page, refs - 1); - put_page(page); + atomic_sub(refs, folio_pincount_ptr(folio)); } /* @@ -175,19 +157,23 @@ struct page *try_grab_compound_head(struct page *page, return NULL; } -static void put_compound_head(struct page *page, int refs, unsigned int flags) +static void gup_put_folio(struct folio *folio, int refs, unsigned int flags) { if (flags & FOLL_PIN) { - mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_RELEASED, - refs); - - if (hpage_pincount_available(page)) - hpage_pincount_sub(page, refs); + node_stat_mod_folio(folio, NR_FOLL_PIN_RELEASED, refs); + if (folio_pincount_available(folio)) + folio_pincount_sub(folio, refs); else refs *= GUP_PIN_COUNTING_BIAS; } - put_page_refs(page, refs); + folio_put_refs(folio, refs); +} + +static void put_compound_head(struct page *page, int refs, unsigned int flags) +{ + VM_BUG_ON_PAGE(PageTail(page), page); + gup_put_folio((struct folio *)page, refs, flags); } /** @@ -228,7 +214,7 @@ bool __must_check try_grab_page(struct page *page, unsigned int flags) */ void unpin_user_page(struct page *page) { - put_compound_head(compound_head(page), 1, FOLL_PIN); + gup_put_folio(page_folio(page), 1, FOLL_PIN); } EXPORT_SYMBOL(unpin_user_page);
From patchwork Sun Jan 2 21:57:20 2022
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12702364
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Andrew Morton
Subject: [PATCH 08/17] gup: Add try_grab_folio()
Date: Sun, 2 Jan 2022 21:57:20 +0000
Message-Id: <20220102215729.2943705-9-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>

try_grab_compound_head() is turned into a call to try_grab_folio(). Convert the two callers who only care about a boolean success/fail.

Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: John Hubbard --- include/linux/mm.h | 4 +--- mm/gup.c | 25 +++++++++++++------------ mm/hugetlb.c | 7 +++---- 3 files changed, 17 insertions(+), 19 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 602de23482ef..4e763a590c9c 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1202,9 +1202,7 @@ static inline void get_page(struct page *page) } bool __must_check try_grab_page(struct page *page, unsigned int flags); -struct page *try_grab_compound_head(struct page *page, int refs, - unsigned int flags); - +struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags); static inline __must_check bool try_get_page(struct page *page) { diff --git a/mm/gup.c b/mm/gup.c index 6d827f7d66d8..2307b2917055 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -76,12 +76,8 @@ static inline struct folio *try_get_folio(struct page *page, int refs) } /** - * try_grab_compound_head() - attempt to elevate a page's refcount, by a + * try_grab_folio() - attempt to elevate a page's refcount, by a * flags-dependent amount. - * - * Even though the name includes "compound_head", this function is still - * appropriate for callers that have a non-compound @page to get.
- * * @page: pointer to page to be grabbed * @refs: the value to (effectively) add to the page's refcount * @flags: gup flags: these are the FOLL_* flag values. @@ -102,16 +98,15 @@ static inline struct folio *try_get_folio(struct page *page, int refs) * FOLL_PIN on normal pages, or compound pages that are two pages long: * page's refcount will be incremented by @refs * GUP_PIN_COUNTING_BIAS. * - * Return: head page (with refcount appropriately incremented) for success, or + * Return: folio (with refcount appropriately incremented) for success, or * NULL upon failure. If neither FOLL_GET nor FOLL_PIN was set, that's * considered failure, and furthermore, a likely bug in the caller, so a warning * is also emitted. */ -struct page *try_grab_compound_head(struct page *page, - int refs, unsigned int flags) +struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags) { if (flags & FOLL_GET) - return &try_get_folio(page, refs)->page; + return try_get_folio(page, refs); else if (flags & FOLL_PIN) { struct folio *folio; @@ -150,13 +145,19 @@ struct page *try_grab_compound_head(struct page *page, node_stat_mod_folio(folio, NR_FOLL_PIN_ACQUIRED, refs); - return &folio->page; + return folio; } WARN_ON_ONCE(1); return NULL; } +static inline struct page *try_grab_compound_head(struct page *page, + int refs, unsigned int flags) +{ + return &try_grab_folio(page, refs, flags)->page; +} + static void gup_put_folio(struct folio *folio, int refs, unsigned int flags) { if (flags & FOLL_PIN) { @@ -188,7 +189,7 @@ static void put_compound_head(struct page *page, int refs, unsigned int flags) * @flags: gup flags: these are the FOLL_* flag values. * * Either FOLL_PIN or FOLL_GET (or neither) may be set, but not both at the same - * time. Cases: please see the try_grab_compound_head() documentation, with + * time. Cases: please see the try_grab_folio() documentation, with * "refs=1". 
* * Return: true for success, or if no action was required (if neither FOLL_PIN @@ -200,7 +201,7 @@ bool __must_check try_grab_page(struct page *page, unsigned int flags) if (!(flags & (FOLL_GET | FOLL_PIN))) return true; - return try_grab_compound_head(page, 1, flags); + return try_grab_folio(page, 1, flags); } /** diff --git a/mm/hugetlb.c b/mm/hugetlb.c index abcd1785c629..ab67b13c4a71 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -6072,7 +6072,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma, if (pages) { /* - * try_grab_compound_head() should always succeed here, + * try_grab_folio() should always succeed here, * because: a) we hold the ptl lock, and b) we've just * checked that the huge page is present in the page * tables. If the huge page is present, then the tail @@ -6081,9 +6081,8 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma, * any way. So this page must be available at this * point, unless the page refcount overflowed: */ - if (WARN_ON_ONCE(!try_grab_compound_head(pages[i], - refs, - flags))) { + if (WARN_ON_ONCE(!try_grab_folio(pages[i], refs, + flags))) { spin_unlock(ptl); remainder = 0; err = -ENOMEM;
From patchwork Sun Jan 2 21:57:21 2022
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12702377
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Andrew Morton
Subject: [PATCH 09/17] gup: Convert gup_pte_range() to use a folio
Date: Sun, 2 Jan 2022 21:57:21 +0000
Message-Id: <20220102215729.2943705-10-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>

We still call try_grab_folio() once per PTE; a future patch could optimise to just adjust the reference count for each page within the folio.

Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: John Hubbard --- mm/gup.c | 16 +++++++--------- 1 file changed, 7 insertions(+), 9 deletions(-) diff --git a/mm/gup.c b/mm/gup.c index 2307b2917055..d8535f9d5622 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -2260,7 +2260,8 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end, ptem = ptep = pte_offset_map(&pmd, addr); do { pte_t pte = ptep_get_lockless(ptep); - struct page *head, *page; + struct page *page; + struct folio *folio; /* * Similar to the PMD case below, NUMA hinting must take slow @@ -2287,22 +2288,20 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end, VM_BUG_ON(!pfn_valid(pte_pfn(pte))); page = pte_page(pte); - head = try_grab_compound_head(page, 1, flags); - if (!head) + folio = try_grab_folio(page, 1, flags); + if (!folio) goto pte_unmap; if (unlikely(page_is_secretmem(page))) { - put_compound_head(head, 1, flags); + gup_put_folio(folio, 1, flags); goto pte_unmap; } if (unlikely(pte_val(pte) != pte_val(*ptep))) { - put_compound_head(head, 1, flags); + gup_put_folio(folio, 1, flags); goto pte_unmap; } - VM_BUG_ON_PAGE(compound_head(page) != head, page); - /* * We need to make the page accessible if and only if we are * going to access its content (the FOLL_PIN case).
Please @@ -2316,10 +2315,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end, goto pte_unmap; } } - SetPageReferenced(page); + folio_set_referenced(folio); pages[*nr] = page; (*nr)++; - } while (ptep++, addr += PAGE_SIZE, addr != end); ret = 1;
From patchwork Sun Jan 2 21:57:22 2022
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12702368
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Andrew Morton
Subject: [PATCH 10/17] gup: Convert gup_hugepte() to use a folio
Date: Sun, 2 Jan 2022 21:57:22 +0000
Message-Id: <20220102215729.2943705-11-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>

There should be little to no effect from this patch; just removing uses of some old APIs. While I'm looking at this, take the opportunity to use nth_page() instead of doing the arithmetic ourselves in case hugetlbfs pages are ever allocated across memmap boundaries.
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: John Hubbard --- mm/gup.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/mm/gup.c b/mm/gup.c index d8535f9d5622..1c7fb668b46d 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -2435,7 +2435,7 @@ static int record_subpages(struct page *page, unsigned long addr, int nr; for (nr = 0; addr != end; addr += PAGE_SIZE) - pages[nr++] = page++; + pages[nr++] = nth_page(page, nr); return nr; } @@ -2453,7 +2453,8 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr, struct page **pages, int *nr) { unsigned long pte_end; - struct page *head, *page; + struct page *page; + struct folio *folio; pte_t pte; int refs; @@ -2469,21 +2470,20 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr, /* hugepages are never "special" */ VM_BUG_ON(!pfn_valid(pte_pfn(pte))); - head = pte_page(pte); - page = head + ((addr & (sz-1)) >> PAGE_SHIFT); + page = nth_page(pte_page(pte), (addr & (sz - 1)) >> PAGE_SHIFT); refs = record_subpages(page, addr, end, pages + *nr); - head = try_grab_compound_head(head, refs, flags); - if (!head) + folio = try_grab_folio(page, refs, flags); + if (!folio) return 0; if (unlikely(pte_val(pte) != pte_val(*ptep))) { - put_compound_head(head, refs, flags); + gup_put_folio(folio, refs, flags); return 0; } *nr += refs; - SetPageReferenced(head); + folio_set_referenced(folio); return 1; }
From patchwork Sun Jan 2 21:57:23 2022
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12702370
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Andrew Morton
Subject: [PATCH 11/17] gup: Convert gup_huge_pmd() to use a folio
Date: Sun, 2 Jan 2022 21:57:23 +0000
Message-Id: <20220102215729.2943705-12-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>

Use the new folio-based APIs. Also fix an assumption that memmap is contiguous.

Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: John Hubbard --- mm/gup.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/mm/gup.c b/mm/gup.c index 1c7fb668b46d..be965c965484 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -2517,7 +2517,8 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr, unsigned long end, unsigned int flags, struct page **pages, int *nr) { - struct page *head, *page; + struct page *page; + struct folio *folio; int refs; if (!pmd_access_permitted(orig, flags & FOLL_WRITE)) @@ -2530,20 +2531,20 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr, pages, nr); } - page = pmd_page(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT); + page = nth_page(pmd_page(orig), (addr & ~PMD_MASK) >> PAGE_SHIFT); refs = record_subpages(page, addr, end, pages + *nr); - head = try_grab_compound_head(pmd_page(orig), refs, flags); - if (!head) + folio = try_grab_folio(page, refs, flags); + if (!folio) return 0; if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) { - put_compound_head(head, refs, flags); + gup_put_folio(folio, refs, flags); return 0; } *nr +=
refs; - SetPageReferenced(head); + folio_set_referenced(folio); return 1; }
From patchwork Sun Jan 2 21:57:24 2022
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12702375
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Andrew Morton
Subject: [PATCH 12/17] gup: Convert gup_huge_pud() to use a folio
Date: Sun, 2 Jan 2022 21:57:24 +0000
Message-Id: <20220102215729.2943705-13-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>

Use the new folio-based APIs. Also fix an assumption that memmap is contiguous.
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: John Hubbard --- mm/gup.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/mm/gup.c b/mm/gup.c index be965c965484..e7bcee8776e1 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -2552,7 +2552,8 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr, unsigned long end, unsigned int flags, struct page **pages, int *nr) { - struct page *head, *page; + struct page *page; + struct folio *folio; int refs; if (!pud_access_permitted(orig, flags & FOLL_WRITE)) @@ -2565,20 +2566,20 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr, pages, nr); } - page = pud_page(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT); + page = nth_page(pud_page(orig), (addr & ~PUD_MASK) >> PAGE_SHIFT); refs = record_subpages(page, addr, end, pages + *nr); - head = try_grab_compound_head(pud_page(orig), refs, flags); - if (!head) + folio = try_grab_folio(page, refs, flags); + if (!folio) return 0; if (unlikely(pud_val(orig) != pud_val(*pudp))) { - put_compound_head(head, refs, flags); + gup_put_folio(folio, refs, flags); return 0; } *nr += refs; - SetPageReferenced(head); + folio_set_referenced(folio); return 1; }
From patchwork Sun Jan 2 21:57:25 2022
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12702380
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Andrew Morton
Subject: [PATCH 13/17] gup: Convert gup_huge_pgd() to use a folio
Date: Sun, 2 Jan 2022 21:57:25 +0000
Message-Id: <20220102215729.2943705-14-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>

Use the new folio-based APIs. Also fix an assumption that memmap is contiguous. This was the last user of try_grab_compound_head(), so remove it.

Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: John Hubbard --- mm/gup.c | 19 +++++++------------ 1 file changed, 7 insertions(+), 12 deletions(-) diff --git a/mm/gup.c b/mm/gup.c index e7bcee8776e1..7bd1e4a2648a 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -152,12 +152,6 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags) return NULL; } -static inline struct page *try_grab_compound_head(struct page *page, - int refs, unsigned int flags) -{ - return &try_grab_folio(page, refs, flags)->page; -} - static void gup_put_folio(struct folio *folio, int refs, unsigned int flags) { if (flags & FOLL_PIN) { @@ -2588,27 +2582,28 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr, struct page **pages, int *nr) { int refs; - struct page *head, *page; + struct page *page; + struct folio *folio; if (!pgd_access_permitted(orig, flags & FOLL_WRITE)) return 0; BUILD_BUG_ON(pgd_devmap(orig)); - page = pgd_page(orig) + ((addr & ~PGDIR_MASK) >> PAGE_SHIFT); + page = nth_page(pgd_page(orig), (addr & ~PGDIR_MASK) >> PAGE_SHIFT); refs = record_subpages(page, addr, end, pages + *nr); - head = try_grab_compound_head(pgd_page(orig), refs, flags); - if (!head) + folio = try_grab_folio(page, refs, flags); + if (!folio) return 0; if (unlikely(pgd_val(orig) != pgd_val(*pgdp))) { - put_compound_head(head, refs, flags); +
gup_put_folio(folio, refs, flags); return 0; } *nr += refs; - SetPageReferenced(head); + folio_set_referenced(folio); return 1; } From patchwork Sun Jan 2 21:57:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 12702365 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id AC256C4332F for ; Sun, 2 Jan 2022 21:57:42 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 6BB836B0075; Sun, 2 Jan 2022 16:57:37 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 643066B0078; Sun, 2 Jan 2022 16:57:37 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4B2EE6B007B; Sun, 2 Jan 2022 16:57:37 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0094.hostedemail.com [216.40.44.94]) by kanga.kvack.org (Postfix) with ESMTP id 16F0C6B0078 for ; Sun, 2 Jan 2022 16:57:37 -0500 (EST) Received: from smtpin27.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id A1DF5181AC9CC for ; Sun, 2 Jan 2022 21:57:36 +0000 (UTC) X-FDA: 78986709312.27.78FEA89 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf25.hostedemail.com (Postfix) with ESMTP id DAC23A0008 for ; Sun, 2 Jan 2022 21:57:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=gAjRxseV+t/LIhWsenv/zeG83hgwWvidXx2i63TeJpM=; b=wE1l6Qv4BHorPkUj+TuiySwEc+ 
4r2XQJkP9EKMgNlqAOjStbDxg23VStHWxpZl9uzwJpF4trkt1cXl/oFIHKTKqvD8uPKD7lAJLH43+ Wpvcf8TV0rgQUcpP2vFIDzM1nrKZXZPS4mTXMETNtVtChBsRAnAPGJsrSxFjmXjA8VmigaK52n/c0 wriweOGoFFEOds4MRkeqWJ2OMWE+fcg3ewk3nqAtzZb70EJXL/hXjYHKgckZPqkVcOpXZxBPjWeT6 MIQ9c1DcmEqWx8qXVROHG4NuHwNnf+8+XX882o2Ga1oVAf1kto9Vh7gxgU3hH3Ai74xnLN6cZYBFf XkZARJ1g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1n48r6-00CLoQ-Vw; Sun, 02 Jan 2022 21:57:33 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , John Hubbard , Andrew Morton Subject: [PATCH 14/17] gup: Convert for_each_compound_head() to gup_for_each_folio() Date: Sun, 2 Jan 2022 21:57:26 +0000 Message-Id: <20220102215729.2943705-15-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220102215729.2943705-1-willy@infradead.org> References: <20220102215729.2943705-1-willy@infradead.org> MIME-Version: 1.0 X-Stat-Signature: fqya8ytoydt6robaruaaqyma5c9xk3fk X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: DAC23A0008 Authentication-Results: imf25.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=wE1l6Qv4; spf=none (imf25.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1641160639-13802 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This macro can be considerably simplified by returning the folio from gup_folio_next() instead of void from compound_next(). Convert both callers to work on folios instead of pages. 
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: John Hubbard
---
 mm/gup.c | 47 ++++++++++++++++++++++++-----------------------
 1 file changed, 24 insertions(+), 23 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 7bd1e4a2648a..eaffa6807609 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -239,31 +239,29 @@ static inline void compound_range_next(unsigned long i, unsigned long npages,
 		__i < __npages; __i += __ntails, \
 		compound_range_next(__i, __npages, __list, &(__head), &(__ntails)))
 
-static inline void compound_next(unsigned long i, unsigned long npages,
-		struct page **list, struct page **head,
-		unsigned int *ntails)
+static inline struct folio *gup_folio_next(unsigned long i,
+		unsigned long npages, struct page **list, unsigned int *ntails)
 {
-	struct page *page;
+	struct folio *folio;
 	unsigned int nr;
 
 	if (i >= npages)
-		return;
+		return NULL;
 
-	page = compound_head(list[i]);
+	folio = page_folio(list[i]);
 	for (nr = i + 1; nr < npages; nr++) {
-		if (compound_head(list[nr]) != page)
+		if (page_folio(list[nr]) != folio)
 			break;
 	}
 
-	*head = page;
 	*ntails = nr - i;
+	return folio;
 }
 
-#define for_each_compound_head(__i, __list, __npages, __head, __ntails) \
-	for (__i = 0, \
-	     compound_next(__i, __npages, __list, &(__head), &(__ntails)); \
-	     __i < __npages; __i += __ntails, \
-	     compound_next(__i, __npages, __list, &(__head), &(__ntails)))
+#define gup_for_each_folio(__i, __list, __npages, __folio, __ntails) \
+	for (__i = 0; \
+	     (__folio = gup_folio_next(__i, __npages, __list, &(__ntails))) != NULL; \
+	     __i += __ntails)
 
 /**
  * unpin_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
@@ -291,15 +289,15 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
 				 bool make_dirty)
 {
 	unsigned long index;
-	struct page *head;
-	unsigned int ntails;
+	struct folio *folio;
+	unsigned int nr;
 
 	if (!make_dirty) {
 		unpin_user_pages(pages, npages);
 		return;
 	}
 
-	for_each_compound_head(index, pages, npages, head, ntails) {
+	gup_for_each_folio(index, pages, npages, folio, nr) {
 		/*
 		 * Checking PageDirty at this point may race with
 		 * clear_page_dirty_for_io(), but that's OK. Two key
@@ -320,9 +318,12 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
 		 * written back, so it gets written back again in the
 		 * next writeback cycle. This is harmless.
 		 */
-		if (!PageDirty(head))
-			set_page_dirty_lock(head);
-		put_compound_head(head, ntails, FOLL_PIN);
+		if (!folio_test_dirty(folio)) {
+			folio_lock(folio);
+			folio_mark_dirty(folio);
+			folio_unlock(folio);
+		}
+		gup_put_folio(folio, nr, FOLL_PIN);
 	}
 }
 EXPORT_SYMBOL(unpin_user_pages_dirty_lock);
@@ -375,8 +376,8 @@ EXPORT_SYMBOL(unpin_user_page_range_dirty_lock);
 void unpin_user_pages(struct page **pages, unsigned long npages)
 {
 	unsigned long index;
-	struct page *head;
-	unsigned int ntails;
+	struct folio *folio;
+	unsigned int nr;
 
 	/*
 	 * If this WARN_ON() fires, then the system *might* be leaking pages (by
@@ -386,8 +387,8 @@ void unpin_user_pages(struct page **pages, unsigned long npages)
 	if (WARN_ON(IS_ERR_VALUE(npages)))
 		return;
 
-	for_each_compound_head(index, pages, npages, head, ntails)
-		put_compound_head(head, ntails, FOLL_PIN);
+	gup_for_each_folio(index, pages, npages, folio, nr)
+		gup_put_folio(folio, nr, FOLL_PIN);
 }
 EXPORT_SYMBOL(unpin_user_pages);

From patchwork Sun Jan 2 21:57:27 2022
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Andrew Morton
Subject: [PATCH 15/17] gup: Convert for_each_compound_range() to gup_for_each_folio_range()
Date: Sun, 2 Jan 2022 21:57:27 +0000
Message-Id: <20220102215729.2943705-16-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>
References: <20220102215729.2943705-1-willy@infradead.org>

This macro can be considerably simplified by returning the folio from
gup_folio_range_next() instead of void from compound_range_next().
Convert the only caller to work on folios instead of pages.  This
removes the last caller of put_compound_head(), so delete it.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: John Hubbard
---
 mm/gup.c | 50 +++++++++++++++++++++++---------------------------
 1 file changed, 23 insertions(+), 27 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index eaffa6807609..76717e05413d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -165,12 +165,6 @@ static void gup_put_folio(struct folio *folio, int refs, unsigned int flags)
 	folio_put_refs(folio, refs);
 }
 
-static void put_compound_head(struct page *page, int refs, unsigned int flags)
-{
-	VM_BUG_ON_PAGE(PageTail(page), page);
-	gup_put_folio((struct folio *)page, refs, flags);
-}
-
 /**
  * try_grab_page() - elevate a page's refcount by a flag-dependent amount
  *
@@ -213,31 +207,30 @@ void unpin_user_page(struct page *page)
 }
 EXPORT_SYMBOL(unpin_user_page);
 
-static inline void compound_range_next(unsigned long i, unsigned long npages,
-		struct page **list, struct page **head,
-		unsigned int *ntails)
+static inline struct folio *gup_folio_range_next(unsigned long i,
+		unsigned long npages, struct page **list, unsigned int *ntails)
 {
-	struct page *next, *page;
+	struct page *next;
+	struct folio *folio;
 	unsigned int nr = 1;
 
 	if (i >= npages)
-		return;
+		return NULL;
 
 	next = *list + i;
-	page = compound_head(next);
-	if (PageCompound(page) && compound_order(page) >= 1)
-		nr = min_t(unsigned int,
-			   page + compound_nr(page) - next, npages - i);
+	folio = page_folio(next);
+	if (folio_test_large(folio))
+		nr = min_t(unsigned int, npages - i,
+			   &folio->page + folio_nr_pages(folio) - next);
 
-	*head = page;
 	*ntails = nr;
+	return folio;
 }
 
-#define for_each_compound_range(__i, __list, __npages, __head, __ntails) \
-	for (__i = 0, \
-	     compound_range_next(__i, __npages, __list, &(__head), &(__ntails)); \
-	     __i < __npages; __i += __ntails, \
-	     compound_range_next(__i, __npages, __list, &(__head), &(__ntails)))
+#define gup_for_each_folio_range(__i, __list, __npages, __folio, __ntails) \
+	for (__i = 0; \
+	     (__folio = gup_folio_range_next(__i, __npages, __list, &(__ntails))) != NULL; \
+	     __i += __ntails)
 
 static inline struct folio *gup_folio_next(unsigned long i,
 		unsigned long npages, struct page **list, unsigned int *ntails)
@@ -353,13 +346,16 @@ void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages,
 				      bool make_dirty)
 {
 	unsigned long index;
-	struct page *head;
-	unsigned int ntails;
+	struct folio *folio;
+	unsigned int nr;
 
-	for_each_compound_range(index, &page, npages, head, ntails) {
-		if (make_dirty && !PageDirty(head))
-			set_page_dirty_lock(head);
-		put_compound_head(head, ntails, FOLL_PIN);
+	gup_for_each_folio_range(index, &page, npages, folio, nr) {
+		if (make_dirty && !folio_test_dirty(folio)) {
+			folio_lock(folio);
+			folio_mark_dirty(folio);
+			folio_unlock(folio);
+		}
+		gup_put_folio(folio, nr, FOLL_PIN);
 	}
 }
 EXPORT_SYMBOL(unpin_user_page_range_dirty_lock);

From patchwork Sun Jan 2 21:57:28 2022
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Andrew Morton
Subject: [PATCH 16/17] mm: Add isolate_lru_folio()
Date: Sun, 2 Jan 2022 21:57:28 +0000
Message-Id: <20220102215729.2943705-17-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>
References: <20220102215729.2943705-1-willy@infradead.org>

Turn isolate_lru_page() into a wrapper around isolate_lru_folio().
TestClearPageLRU() would have always failed on a tail page, so
returning -EBUSY is the same behaviour.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 arch/powerpc/include/asm/mmu_context.h |  1 -
 mm/folio-compat.c                      |  8 +++++
 mm/internal.h                          |  3 +-
 mm/vmscan.c                            | 43 ++++++++++++--------------
 4 files changed, 29 insertions(+), 26 deletions(-)

diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 9ba6b585337f..b9cab0a11421 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -21,7 +21,6 @@ extern void destroy_context(struct mm_struct *mm);
 #ifdef CONFIG_SPAPR_TCE_IOMMU
 struct mm_iommu_table_group_mem_t;
 
-extern int isolate_lru_page(struct page *page);	/* from internal.h */
 extern bool mm_iommu_preregistered(struct mm_struct *mm);
 extern long mm_iommu_new(struct mm_struct *mm,
 		unsigned long ua, unsigned long entries,
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 749555a232a8..782e766cd1ee 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -7,6 +7,7 @@
 #include <linux/migrate.h>
 #include <linux/pagemap.h>
 #include <linux/swap.h>
+#include "internal.h"
 
 struct address_space *page_mapping(struct page *page)
 {
@@ -151,3 +152,10 @@ int try_to_release_page(struct page *page, gfp_t gfp)
 	return filemap_release_folio(page_folio(page), gfp);
 }
 EXPORT_SYMBOL(try_to_release_page);
+
+int isolate_lru_page(struct page *page)
+{
+	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
+		return -EBUSY;
+	return isolate_lru_folio((struct folio *)page);
+}
diff --git a/mm/internal.h b/mm/internal.h
index e989d8ceec91..977d5116d327 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -178,7 +178,8 @@ extern unsigned long highest_memmap_pfn;
 /*
  * in mm/vmscan.c:
  */
-extern int isolate_lru_page(struct page *page);
+int isolate_lru_page(struct page *page);
+int isolate_lru_folio(struct folio *folio);
 extern void putback_lru_page(struct page *page);
 extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason);
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index fb9584641ac7..ac2f5b76cdb2 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2168,45 +2168,40 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 }
 
 /**
- * isolate_lru_page - tries to isolate a page from its LRU list
- * @page: page to isolate from its LRU list
+ * isolate_lru_folio - Try to isolate a folio from its LRU list.
+ * @folio: Folio to isolate from its LRU list.
  *
- * Isolates a @page from an LRU list, clears PageLRU and adjusts the
- * vmstat statistic corresponding to whatever LRU list the page was on.
+ * Isolate a @folio from an LRU list and adjust the vmstat statistic
+ * corresponding to whatever LRU list the folio was on.
  *
- * Returns 0 if the page was removed from an LRU list.
- * Returns -EBUSY if the page was not on an LRU list.
- *
- * The returned page will have PageLRU() cleared.  If it was found on
- * the active list, it will have PageActive set.  If it was found on
- * the unevictable list, it will have the PageUnevictable bit set. That flag
+ * The folio will have its LRU flag cleared.  If it was found on the
+ * active list, it will have the Active flag set.  If it was found on the
+ * unevictable list, it will have the Unevictable flag set.  These flags
  * may need to be cleared by the caller before letting the page go.
  *
- * The vmstat statistic corresponding to the list on which the page was
- * found will be decremented.
- *
- * Restrictions:
+ * Context:
  *
  * (1) Must be called with an elevated refcount on the page. This is a
- * fundamental difference from isolate_lru_pages (which is called
+ * fundamental difference from isolate_lru_pages() (which is called
  * without a stable reference).
- * (2) the lru_lock must not be held.
- * (3) interrupts must be enabled.
+ * (2) The lru_lock must not be held.
+ * (3) Interrupts must be enabled.
+ *
+ * Return: 0 if the folio was removed from an LRU list.
+ * -EBUSY if the folio was not on an LRU list.
  */
-int isolate_lru_page(struct page *page)
+int isolate_lru_folio(struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
 	int ret = -EBUSY;
 
-	VM_BUG_ON_PAGE(!page_count(page), page);
-	WARN_RATELIMIT(PageTail(page), "trying to isolate tail page");
+	VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio);
 
-	if (TestClearPageLRU(page)) {
+	if (folio_test_clear_lru(folio)) {
 		struct lruvec *lruvec;
 
-		get_page(page);
+		folio_get(folio);
 		lruvec = folio_lruvec_lock_irq(folio);
-		del_page_from_lru_list(page, lruvec);
+		lruvec_del_folio(lruvec, folio);
 		unlock_page_lruvec_irq(lruvec);
 		ret = 0;
 	}

From patchwork Sun Jan 2 21:57:29 2022
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Andrew Morton
Subject: [PATCH 17/17] gup: Convert check_and_migrate_movable_pages() to use a folio
Date: Sun, 2 Jan 2022 21:57:29 +0000
Message-Id: <20220102215729.2943705-18-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>
References: <20220102215729.2943705-1-willy@infradead.org>

Switch from head pages to folios.  This removes an assumption that
THPs are the only way to have a high-order page.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 mm/gup.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 76717e05413d..eb7c66e2b785 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1822,41 +1822,41 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages,
 	bool drain_allow = true;
 	LIST_HEAD(movable_page_list);
 	long ret = 0;
-	struct page *prev_head = NULL;
-	struct page *head;
+	struct folio *folio, *prev_folio = NULL;
 	struct migration_target_control mtc = {
 		.nid = NUMA_NO_NODE,
 		.gfp_mask = GFP_USER | __GFP_NOWARN,
 	};
 
 	for (i = 0; i < nr_pages; i++) {
-		head = compound_head(pages[i]);
-		if (head == prev_head)
+		folio = page_folio(pages[i]);
+		if (folio == prev_folio)
 			continue;
-		prev_head = head;
+		prev_folio = folio;
 		/*
 		 * If we get a movable page, since we are going to be pinning
 		 * these entries, try to move them out if possible.
 		 */
-		if (!is_pinnable_page(head)) {
-			if (PageHuge(head)) {
-				if (!isolate_huge_page(head, &movable_page_list))
+		if (!is_pinnable_page(&folio->page)) {
+			if (folio_test_hugetlb(folio)) {
+				if (!isolate_huge_page(&folio->page,
+							&movable_page_list))
 					isolation_error_count++;
 			} else {
-				if (!PageLRU(head) && drain_allow) {
+				if (!folio_test_lru(folio) && drain_allow) {
 					lru_add_drain_all();
 					drain_allow = false;
 				}
 
-				if (isolate_lru_page(head)) {
+				if (isolate_lru_folio(folio)) {
 					isolation_error_count++;
 					continue;
 				}
 
-				list_add_tail(&head->lru, &movable_page_list);
-				mod_node_page_state(page_pgdat(head),
+				list_add_tail(&folio->lru, &movable_page_list);
+				node_stat_mod_folio(folio,
 						    NR_ISOLATED_ANON +
-						    page_is_file_lru(head),
-						    thp_nr_pages(head));
+						    folio_is_file_lru(folio),
+						    folio_nr_pages(folio));
 			}
 		}
 	}