From patchwork Mon Jan 10 04:23:39 2022
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12708182
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Christoph Hellwig,
    William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 01/28] gup: Remove for_each_compound_range()
Date: Mon, 10 Jan 2022 04:23:39 +0000
Message-Id: <20220110042406.499429-2-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>
References: <20220110042406.499429-1-willy@infradead.org>

This macro doesn't simplify the users; it's easier to just call
compound_range_next() inside the loop.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 mm/gup.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 2c51e9748a6a..7a07e0c00bf5 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -236,9 +236,6 @@ static inline void compound_range_next(unsigned long i, unsigned long npages,
 	struct page *next, *page;
 	unsigned int nr = 1;
 
-	if (i >= npages)
-		return;
-
 	next = *list + i;
 	page = compound_head(next);
 	if (PageCompound(page) && compound_order(page) >= 1)
@@ -249,12 +246,6 @@ static inline void compound_range_next(unsigned long i, unsigned long npages,
 	*ntails = nr;
 }
 
-#define for_each_compound_range(__i, __list, __npages, __head, __ntails) \
-	for (__i = 0, \
-	     compound_range_next(__i, __npages, __list, &(__head), &(__ntails)); \
-	     __i < __npages; __i += __ntails, \
-	     compound_range_next(__i, __npages, __list, &(__head), &(__ntails)))
-
 static inline void compound_next(unsigned long i, unsigned long npages,
 				 struct page **list, struct page **head,
 				 unsigned int *ntails)
@@ -371,7 +362,8 @@ void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages,
 	struct page *head;
 	unsigned int ntails;
 
-	for_each_compound_range(index, &page, npages, head, ntails) {
+	for (index = 0; index < npages; index += ntails) {
+		compound_range_next(index, npages, &page, &head, &ntails);
 		if (make_dirty && !PageDirty(head))
 			set_page_dirty_lock(head);
 		put_compound_head(head, ntails, FOLL_PIN);

From patchwork Mon Jan 10 04:23:40 2022
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12708187
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Christoph Hellwig,
    William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 02/28] gup: Remove for_each_compound_head()
Date: Mon, 10 Jan 2022 04:23:40 +0000
Message-Id: <20220110042406.499429-3-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>
References: <20220110042406.499429-1-willy@infradead.org>

This macro doesn't simplify the users; it's easier to just call
compound_next() inside a standard loop.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 mm/gup.c | 16 +++++-----------
 1 file changed, 5 insertions(+), 11 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 7a07e0c00bf5..86f8d843de72 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -253,9 +253,6 @@ static inline void compound_next(unsigned long i, unsigned long npages,
 	struct page *page;
 	unsigned int nr;
 
-	if (i >= npages)
-		return;
-
 	page = compound_head(list[i]);
 	for (nr = i + 1; nr < npages; nr++) {
 		if (compound_head(list[nr]) != page)
@@ -266,12 +263,6 @@ static inline void compound_next(unsigned long i, unsigned long npages,
 	*ntails = nr - i;
 }
 
-#define for_each_compound_head(__i, __list, __npages, __head, __ntails) \
-	for (__i = 0, \
-	     compound_next(__i, __npages, __list, &(__head), &(__ntails)); \
-	     __i < __npages; __i += __ntails, \
-	     compound_next(__i, __npages, __list, &(__head), &(__ntails)))
-
 /**
  * unpin_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
  * @pages:  array of pages to be maybe marked dirty, and definitely released.
@@ -306,7 +297,8 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
 		return;
 	}
 
-	for_each_compound_head(index, pages, npages, head, ntails) {
+	for (index = 0; index < npages; index += ntails) {
+		compound_next(index, npages, pages, &head, &ntails);
 		/*
 		 * Checking PageDirty at this point may race with
 		 * clear_page_dirty_for_io(), but that's OK. Two key
@@ -394,8 +386,10 @@ void unpin_user_pages(struct page **pages, unsigned long npages)
 	if (WARN_ON(IS_ERR_VALUE(npages)))
 		return;
 
-	for_each_compound_head(index, pages, npages, head, ntails)
+	for (index = 0; index < npages; index += ntails) {
+		compound_next(index, npages, pages, &head, &ntails);
 		put_compound_head(head, ntails, FOLL_PIN);
+	}
 }
 EXPORT_SYMBOL(unpin_user_pages);

From patchwork Mon Jan 10 04:23:41 2022
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12708190
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Christoph Hellwig,
    William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 03/28] gup: Change the calling convention for compound_range_next()
Date: Mon, 10 Jan 2022 04:23:41 +0000
Message-Id: <20220110042406.499429-4-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>
References: <20220110042406.499429-1-willy@infradead.org>

Return the head page instead of storing it to a passed parameter.
Pass the start page directly instead of passing a pointer to it.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: John Hubbard
---
 mm/gup.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 86f8d843de72..3c93d2fdf4da 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -229,21 +229,20 @@ void unpin_user_page(struct page *page)
 }
 EXPORT_SYMBOL(unpin_user_page);
 
-static inline void compound_range_next(unsigned long i, unsigned long npages,
-				       struct page **list, struct page **head,
-				       unsigned int *ntails)
+static inline struct page *compound_range_next(unsigned long i,
+		unsigned long npages, struct page *start, unsigned int *ntails)
 {
 	struct page *next, *page;
 	unsigned int nr = 1;
 
-	next = *list + i;
+	next = start + i;
 	page = compound_head(next);
 	if (PageCompound(page) && compound_order(page) >= 1)
 		nr = min_t(unsigned int,
 			   page + compound_nr(page) - next, npages - i);
 
-	*head = page;
 	*ntails = nr;
+	return page;
 }
 
 static inline void compound_next(unsigned long i, unsigned long npages,
@@ -355,7 +354,7 @@ void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages,
 	unsigned int ntails;
 
 	for (index = 0; index < npages; index += ntails) {
-		compound_range_next(index, npages, &page, &head, &ntails);
+		head = compound_range_next(index, npages, page, &ntails);
 		if (make_dirty && !PageDirty(head))
 			set_page_dirty_lock(head);
 		put_compound_head(head, ntails, FOLL_PIN);

From patchwork Mon Jan 10 04:23:42 2022
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12708179
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Christoph Hellwig,
    William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 04/28] gup: Optimise compound_range_next()
Date: Mon, 10 Jan 2022 04:23:42 +0000
Message-Id: <20220110042406.499429-5-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>
References: <20220110042406.499429-1-willy@infradead.org>

By definition, a compound page has an order >= 1, so the second half
of the test was redundant.  Also, this cannot be a tail page since
it's the result of calling compound_head(), so use PageHead() instead
of PageCompound().
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 mm/gup.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/gup.c b/mm/gup.c
index 3c93d2fdf4da..6eedca605b3d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -237,7 +237,7 @@ static inline struct page *compound_range_next(unsigned long i,
 
 	next = start + i;
 	page = compound_head(next);
-	if (PageCompound(page) && compound_order(page) >= 1)
+	if (PageHead(page))
 		nr = min_t(unsigned int,
 			   page + compound_nr(page) - next, npages - i);
 

From patchwork Mon Jan 10 04:23:43 2022
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12708177
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Christoph Hellwig,
    William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 05/28] gup: Change the calling convention for compound_next()
Date: Mon, 10 Jan 2022 04:23:43 +0000
Message-Id: <20220110042406.499429-6-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>
References: <20220110042406.499429-1-willy@infradead.org>

Return the head page instead of storing it to a passed parameter.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 mm/gup.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 6eedca605b3d..8a0ea220ced1 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -245,9 +245,8 @@ static inline struct page *compound_range_next(unsigned long i,
 	return page;
 }
 
-static inline void compound_next(unsigned long i, unsigned long npages,
-				 struct page **list, struct page **head,
-				 unsigned int *ntails)
+static inline struct page *compound_next(unsigned long i,
+		unsigned long npages, struct page **list, unsigned int *ntails)
 {
 	struct page *page;
 	unsigned int nr;
@@ -258,8 +257,8 @@ static inline void compound_next(unsigned long i, unsigned long npages,
 			break;
 	}
 
-	*head = page;
 	*ntails = nr - i;
+	return page;
 }
 
 /**
@@ -297,7 +296,7 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
 	}
 
 	for (index = 0; index < npages; index += ntails) {
-		compound_next(index, npages, pages, &head, &ntails);
+		head = compound_next(index, npages, pages, &ntails);
 		/*
 		 * Checking PageDirty at this point may race with
 		 * clear_page_dirty_for_io(), but that's OK. Two key
@@ -386,7 +385,7 @@ void unpin_user_pages(struct page **pages, unsigned long npages)
 		return;
 
 	for (index = 0; index < npages; index += ntails) {
-		compound_next(index, npages, pages, &head, &ntails);
+		head = compound_next(index, npages, pages, &ntails);
 		put_compound_head(head, ntails, FOLL_PIN);
 	}
 }

From patchwork Mon Jan 10 04:23:44 2022
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12708186
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Christoph Hellwig,
    William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 06/28] gup: Fix some contiguous memmap assumptions
Date: Mon, 10 Jan 2022 04:23:44 +0000
Message-Id: <20220110042406.499429-7-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>
References: <20220110042406.499429-1-willy@infradead.org>

Several functions in gup.c assume that a compound page has virtually
contiguous page structs.  This isn't true for SPARSEMEM configs unless
SPARSEMEM_VMEMMAP is also set.  Fix them by using nth_page() instead of
plain pointer arithmetic.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 mm/gup.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 8a0ea220ced1..9c0a702a4e03 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -235,7 +235,7 @@ static inline struct page *compound_range_next(unsigned long i,
 	struct page *next, *page;
 	unsigned int nr = 1;
 
-	next = start + i;
+	next = nth_page(start, i);
 	page = compound_head(next);
 	if (PageHead(page))
 		nr = min_t(unsigned int,
@@ -2430,8 +2430,8 @@ static int record_subpages(struct page *page, unsigned long addr,
 {
 	int nr;
 
-	for (nr = 0; addr != end; addr += PAGE_SIZE)
-		pages[nr++] = page++;
+	for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
+		pages[nr] = nth_page(page, nr);
 
 	return nr;
 }
@@ -2466,7 +2466,7 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 	head = pte_page(pte);
-	page = head + ((addr & (sz-1)) >> PAGE_SHIFT);
+	page = nth_page(head, (addr & (sz-1)) >> PAGE_SHIFT);
 	refs = record_subpages(page, addr, end, pages + *nr);
 
 	head = try_grab_compound_head(head, refs, flags);
@@ -2526,7 +2526,7 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 					     pages, nr);
 	}
 
-	page = pmd_page(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
+	page = nth_page(pmd_page(orig), (addr & ~PMD_MASK) >> PAGE_SHIFT);
 	refs = record_subpages(page, addr, end, pages + *nr);
 
 	head = try_grab_compound_head(pmd_page(orig), refs, flags);
@@ -2560,7 +2560,7 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 					     pages, nr);
 	}
 
-	page = pud_page(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
+	page = nth_page(pud_page(orig), (addr & ~PUD_MASK) >> PAGE_SHIFT);
 	refs = record_subpages(page, addr, end, pages + *nr);
 
 	head = try_grab_compound_head(pud_page(orig), refs, flags);
@@ -2589,7 +2589,7 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 
 	BUILD_BUG_ON(pgd_devmap(orig));
 
-	page = pgd_page(orig) + ((addr & ~PGDIR_MASK) >> PAGE_SHIFT);
+	page = nth_page(pgd_page(orig), (addr & ~PGDIR_MASK) >> PAGE_SHIFT);
 	refs = record_subpages(page, addr, end, pages + *nr);
 
 	head = try_grab_compound_head(pgd_page(orig), refs, flags);

From patchwork Mon Jan 10 04:23:45 2022
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12708185
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Christoph Hellwig,
    William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 07/28] gup: Remove an assumption of a contiguous memmap
Date: Mon, 10 Jan 2022 04:23:45 +0000
Message-Id: <20220110042406.499429-8-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>
References: <20220110042406.499429-1-willy@infradead.org>

This assumption needs the inverse of nth_page(), which I've temporarily
named page_nth() until someone comes up with a better name.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 include/linux/mm.h | 2 ++
 mm/gup.c           | 4 ++--
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d8b7d7ed14dd..f2f3400665a4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -216,8 +216,10 @@ int overcommit_policy_handler(struct ctl_table *, int, void *, size_t *,
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
+#define page_nth(head, tail) (page_to_pfn(tail) - page_to_pfn(head))
 #else
 #define nth_page(page,n) ((page) + (n))
+#define page_nth(head, tail) ((tail) - (head))
 #endif
 
 /* to align the pointer to the (next) page boundary */
diff --git a/mm/gup.c b/mm/gup.c
index 9c0a702a4e03..afb638a30e44 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -238,8 +238,8 @@ static inline struct page *compound_range_next(unsigned long i,
 	next = nth_page(start, i);
 	page = compound_head(next);
 	if (PageHead(page))
-		nr = min_t(unsigned int,
-			   page + compound_nr(page) - next, npages - i);
+		nr = min_t(unsigned int, npages - i,
+			   compound_nr(page) - page_nth(page, next));
 
 	*ntails = nr;
 	return page;

From patchwork Mon Jan 10 04:23:46 2022
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12708188
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Christoph Hellwig, William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 08/28] gup: Handle page split race more efficiently
Date: Mon, 10 Jan 2022 04:23:46 +0000
Message-Id: <20220110042406.499429-9-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>

If we hit the page split race, the current code returns NULL, which will
presumably trigger a retry under the mmap_lock.  This isn't necessary;
we can just retry the compound_head() lookup.  This is a very minor
optimisation of an unlikely path, but conceptually it matches (eg) the
page cache RCU-protected lookup.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 mm/gup.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index afb638a30e44..dbb1b54d0def 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -68,7 +68,10 @@ static void put_page_refs(struct page *page, int refs)
  */
 static inline struct page *try_get_compound_head(struct page *page, int refs)
 {
-	struct page *head = compound_head(page);
+	struct page *head;
+
+retry:
+	head = compound_head(page);
 
 	if (WARN_ON_ONCE(page_ref_count(head) < 0))
 		return NULL;
@@ -86,7 +89,7 @@ static inline struct page *try_get_compound_head(struct page *page, int refs)
 	 */
 	if (unlikely(compound_head(page) != head)) {
 		put_page_refs(head, refs);
-		return NULL;
+		goto retry;
 	}
 
 	return head;
From patchwork Mon Jan 10 04:23:47 2022
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12708184
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Christoph Hellwig, William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 09/28] gup: Turn hpage_pincount_add() into page_pincount_add()
Date: Mon, 10 Jan 2022 04:23:47 +0000
Message-Id: <20220110042406.499429-10-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>

Simplify try_grab_compound_head() and remove an unnecessary VM_BUG_ON
by handling pages both with and without a pincount field in
page_pincount_add().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 mm/gup.c | 33 +++++++++++++++------------------
 1 file changed, 15 insertions(+), 18 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index dbb1b54d0def..3ed9907f3c8d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -29,12 +29,23 @@ struct follow_page_context {
 	unsigned int page_mask;
 };
 
-static void hpage_pincount_add(struct page *page, int refs)
+/*
+ * When pinning a compound page of order > 1 (which is what
+ * hpage_pincount_available() checks for), use an exact count to track
+ * it, via page_pincount_add/_sub().
+ *
+ * However, be sure to *also* increment the normal page refcount field
+ * at least once, so that the page really is pinned.  That's why the
+ * refcount from the earlier try_get_compound_head() is left intact.
+ */
+static void page_pincount_add(struct page *page, int refs)
 {
-	VM_BUG_ON_PAGE(!hpage_pincount_available(page), page);
 	VM_BUG_ON_PAGE(page != compound_head(page), page);
 
-	atomic_add(refs, compound_pincount_ptr(page));
+	if (hpage_pincount_available(page))
+		atomic_add(refs, compound_pincount_ptr(page));
+	else
+		page_ref_add(page, refs * (GUP_PIN_COUNTING_BIAS - 1));
 }
 
 static void hpage_pincount_sub(struct page *page, int refs)
@@ -150,21 +161,7 @@ struct page *try_grab_compound_head(struct page *page,
 		if (!page)
 			return NULL;
 
-		/*
-		 * When pinning a compound page of order > 1 (which is what
-		 * hpage_pincount_available() checks for), use an exact count to
-		 * track it, via hpage_pincount_add/_sub().
-		 *
-		 * However, be sure to *also* increment the normal page refcount
-		 * field at least once, so that the page really is pinned.
-		 * That's why the refcount from the earlier
-		 * try_get_compound_head() is left intact.
-		 */
-		if (hpage_pincount_available(page))
-			hpage_pincount_add(page, refs);
-		else
-			page_ref_add(page, refs * (GUP_PIN_COUNTING_BIAS - 1));
-
+		page_pincount_add(page, refs);
 		mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_ACQUIRED,
 				    refs);
From patchwork Mon Jan 10 04:23:48 2022
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12708189
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Christoph Hellwig, William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 10/28] gup: Turn hpage_pincount_sub() into page_pincount_sub()
Date: Mon, 10 Jan 2022 04:23:48 +0000
Message-Id: <20220110042406.499429-11-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>

Remove an unnecessary VM_BUG_ON by handling pages both with and without
a pincount field in page_pincount_sub().
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 mm/gup.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 3ed9907f3c8d..aed48de3912e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -48,12 +48,15 @@ static void page_pincount_add(struct page *page, int refs)
 		page_ref_add(page, refs * (GUP_PIN_COUNTING_BIAS - 1));
 }
 
-static void hpage_pincount_sub(struct page *page, int refs)
+static int page_pincount_sub(struct page *page, int refs)
 {
-	VM_BUG_ON_PAGE(!hpage_pincount_available(page), page);
 	VM_BUG_ON_PAGE(page != compound_head(page), page);
 
-	atomic_sub(refs, compound_pincount_ptr(page));
+	if (hpage_pincount_available(page))
+		atomic_sub(refs, compound_pincount_ptr(page));
+	else
+		refs *= GUP_PIN_COUNTING_BIAS;
+	return refs;
 }
 
 /* Equivalent to calling put_page() @refs times. */
@@ -177,11 +180,7 @@ static void put_compound_head(struct page *page, int refs, unsigned int flags)
 	if (flags & FOLL_PIN) {
 		mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_RELEASED,
 				    refs);
-
-		if (hpage_pincount_available(page))
-			hpage_pincount_sub(page, refs);
-		else
-			refs *= GUP_PIN_COUNTING_BIAS;
+		refs = page_pincount_sub(page, refs);
 	}
 
 	put_page_refs(page, refs);

From patchwork Mon Jan 10 04:23:49 2022
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12708164
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Christoph Hellwig, William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 11/28] mm: Make compound_pincount always available
Date: Mon, 10 Jan 2022 04:23:49 +0000
Message-Id: <20220110042406.499429-12-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>
Move compound_pincount from the third page to the second page, which
means it's available for all compound pages.  That lets us delete
hpage_pincount_available().

On 32-bit systems, there isn't enough space for both compound_pincount
and compound_nr in the second page (it would collide with page->private,
which is in use for pages in the swap cache), so revert the optimisation
of storing both compound_order and compound_nr on 32-bit systems.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: John Hubbard
Reviewed-by: Christoph Hellwig
---
 Documentation/core-api/pin_user_pages.rst | 18 +++++++++---------
 include/linux/mm.h                        | 21 ++++++++-------------
 include/linux/mm_types.h                  |  7 +++++--
 mm/debug.c                                | 14 ++++----------
 mm/gup.c                                  | 18 ++++++++----------
 mm/page_alloc.c                           |  3 +--
 mm/rmap.c                                 |  6 ++----
 7 files changed, 37 insertions(+), 50 deletions(-)

diff --git a/Documentation/core-api/pin_user_pages.rst b/Documentation/core-api/pin_user_pages.rst
index fcf605be43d0..b18416f4500f 100644
--- a/Documentation/core-api/pin_user_pages.rst
+++ b/Documentation/core-api/pin_user_pages.rst
@@ -55,18 +55,18 @@ flags the caller provides. The caller is required to pass in a non-null struct
 pages* array, and the function then pins pages by incrementing each by a special
 value: GUP_PIN_COUNTING_BIAS.
 
-For huge pages (and in fact, any compound page of more than 2 pages), the
-GUP_PIN_COUNTING_BIAS scheme is not used. Instead, an exact form of pin counting
-is achieved, by using the 3rd struct page in the compound page. A new struct
-page field, hpage_pinned_refcount, has been added in order to support this.
+For compound pages, the GUP_PIN_COUNTING_BIAS scheme is not used. Instead,
+an exact form of pin counting is achieved, by using the 2nd struct page
+in the compound page. A new struct page field, compound_pincount, has
+been added in order to support this.
 
 This approach for compound pages avoids the counting upper limit problems that
 are discussed below. Those limitations would have been aggravated severely by
 huge pages, because each tail page adds a refcount to the head page. And in
-fact, testing revealed that, without a separate hpage_pinned_refcount field,
+fact, testing revealed that, without a separate compound_pincount field,
 page overflows were seen in some huge page stress tests.
 
-This also means that huge pages and compound pages (of order > 1) do not suffer
+This also means that huge pages and compound pages do not suffer
 from the false positives problem that is mentioned below.::
 
 Function
@@ -264,9 +264,9 @@ place.)
 Other diagnostics
 =================
 
-dump_page() has been enhanced slightly, to handle these new counting fields, and
-to better report on compound pages in general. Specifically, for compound pages
-with order > 1, the exact (hpage_pinned_refcount) pincount is reported.
+dump_page() has been enhanced slightly, to handle these new counting
+fields, and to better report on compound pages in general. Specifically,
+for compound pages, the exact (compound_pincount) pincount is reported.
 
 References
 ==========
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f2f3400665a4..598be27d4d2e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -929,17 +929,6 @@ static inline void destroy_compound_page(struct page *page)
 	compound_page_dtors[page[1].compound_dtor](page);
 }
 
-static inline bool hpage_pincount_available(struct page *page)
-{
-	/*
-	 * Can the page->hpage_pinned_refcount field be used? That field is in
-	 * the 3rd page of the compound page, so the smallest (2-page) compound
-	 * pages cannot support it.
-	 */
-	page = compound_head(page);
-	return PageCompound(page) && compound_order(page) > 1;
-}
-
 static inline int head_compound_pincount(struct page *head)
 {
 	return atomic_read(compound_pincount_ptr(head));
@@ -947,7 +936,7 @@ static inline int head_compound_pincount(struct page *head)
 
 static inline int compound_pincount(struct page *page)
 {
-	VM_BUG_ON_PAGE(!hpage_pincount_available(page), page);
+	VM_BUG_ON_PAGE(!PageCompound(page), page);
 	page = compound_head(page);
 	return head_compound_pincount(page);
 }
@@ -955,7 +944,9 @@ static inline int compound_pincount(struct page *page)
 static inline void set_compound_order(struct page *page, unsigned int order)
 {
 	page[1].compound_order = order;
+#ifdef CONFIG_64BIT
 	page[1].compound_nr = 1U << order;
+#endif
 }
 
 /* Returns the number of pages in this potentially compound page. */
@@ -963,7 +954,11 @@ static inline unsigned long compound_nr(struct page *page)
 {
 	if (!PageHead(page))
 		return 1;
+#ifdef CONFIG_64BIT
 	return page[1].compound_nr;
+#else
+	return 1UL << compound_order(page);
+#endif
 }
 
 /* Returns the number of bytes in this potentially compound page. */
@@ -1325,7 +1320,7 @@ void unpin_user_pages(struct page **pages, unsigned long npages);
  */
 static inline bool page_maybe_dma_pinned(struct page *page)
 {
-	if (hpage_pincount_available(page))
+	if (PageCompound(page))
 		return compound_pincount(page) > 0;
 
 	/*
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index c3a6e6209600..60e4595eaf63 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -150,11 +150,14 @@ struct page {
 			unsigned char compound_dtor;
 			unsigned char compound_order;
 			atomic_t compound_mapcount;
+			atomic_t compound_pincount;
+#ifdef CONFIG_64BIT
 			unsigned int compound_nr; /* 1 << compound_order */
+#endif
 		};
 		struct {	/* Second tail page of compound page */
 			unsigned long _compound_pad_1;	/* compound_head */
-			atomic_t hpage_pinned_refcount;
+			unsigned long _compound_pad_2;
 			/* For both global and memcg */
 			struct list_head deferred_list;
 		};
@@ -311,7 +314,7 @@ static inline atomic_t *compound_mapcount_ptr(struct page *page)
 
 static inline atomic_t *compound_pincount_ptr(struct page *page)
 {
-	return &page[2].hpage_pinned_refcount;
+	return &page[1].compound_pincount;
 }
 
 /*
diff --git a/mm/debug.c b/mm/debug.c
index a05a39ff8fe4..7925fac2bd8e 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -92,16 +92,10 @@ static void __dump_page(struct page *page)
 			page, page_ref_count(head), mapcount, mapping,
 			page_to_pgoff(page), page_to_pfn(page));
 	if (compound) {
-		if (hpage_pincount_available(page)) {
-			pr_warn("head:%p order:%u compound_mapcount:%d compound_pincount:%d\n",
-					head, compound_order(head),
-					head_compound_mapcount(head),
-					head_compound_pincount(head));
-		} else {
-			pr_warn("head:%p order:%u compound_mapcount:%d\n",
-					head, compound_order(head),
-					head_compound_mapcount(head));
-		}
+		pr_warn("head:%p order:%u compound_mapcount:%d compound_pincount:%d\n",
+				head, compound_order(head),
+				head_compound_mapcount(head),
+				head_compound_pincount(head));
 	}
 
 #ifdef CONFIG_MEMCG
diff --git a/mm/gup.c b/mm/gup.c
index aed48de3912e..1282d29357b7 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -30,9 +30,8 @@ struct follow_page_context {
 };
 
 /*
- * When pinning a compound page of order > 1 (which is what
- * hpage_pincount_available() checks for), use an exact count to track
- * it, via page_pincount_add/_sub().
+ * When pinning a compound page, use an exact count to track it, via
+ * page_pincount_add/_sub().
  *
  * However, be sure to *also* increment the normal page refcount field
  * at least once, so that the page really is pinned.  That's why the
@@ -42,7 +41,7 @@ static void page_pincount_add(struct page *page, int refs)
 {
 	VM_BUG_ON_PAGE(page != compound_head(page), page);
 
-	if (hpage_pincount_available(page))
+	if (PageHead(page))
 		atomic_add(refs, compound_pincount_ptr(page));
 	else
 		page_ref_add(page, refs * (GUP_PIN_COUNTING_BIAS - 1));
@@ -52,7 +51,7 @@ static int page_pincount_sub(struct page *page, int refs)
 {
 	VM_BUG_ON_PAGE(page != compound_head(page), page);
 
-	if (hpage_pincount_available(page))
+	if (PageHead(page))
 		atomic_sub(refs, compound_pincount_ptr(page));
 	else
 		refs *= GUP_PIN_COUNTING_BIAS;
@@ -129,12 +128,11 @@ static inline struct page *try_get_compound_head(struct page *page, int refs)
  *
  * FOLL_GET: page's refcount will be incremented by @refs.
  *
- * FOLL_PIN on compound pages that are > two pages long: page's refcount will
- * be incremented by @refs, and page[2].hpage_pinned_refcount will be
- * incremented by @refs * GUP_PIN_COUNTING_BIAS.
+ * FOLL_PIN on compound pages: page's refcount will be incremented by
+ * @refs, and page[1].compound_pincount will be incremented by @refs.
  *
- * FOLL_PIN on normal pages, or compound pages that are two pages long:
- * page's refcount will be incremented by @refs * GUP_PIN_COUNTING_BIAS.
+ * FOLL_PIN on normal pages: page's refcount will be incremented by
+ * @refs * GUP_PIN_COUNTING_BIAS.
  *
  * Return: head page (with refcount appropriately incremented) for success, or
  *	   NULL upon failure.  If neither FOLL_GET nor FOLL_PIN was set, that's
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c5952749ad40..6b030c0cb207 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -741,8 +741,7 @@ void prep_compound_page(struct page *page, unsigned int order)
 	set_compound_page_dtor(page, COMPOUND_PAGE_DTOR);
 	set_compound_order(page, order);
 	atomic_set(compound_mapcount_ptr(page), -1);
-	if (hpage_pincount_available(page))
-		atomic_set(compound_pincount_ptr(page), 0);
+	atomic_set(compound_pincount_ptr(page), 0);
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/mm/rmap.c b/mm/rmap.c
index 163ac4e6bcee..a44a32db4803 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1187,8 +1187,7 @@ void page_add_new_anon_rmap(struct page *page,
 		VM_BUG_ON_PAGE(!PageTransHuge(page), page);
 		/* increment count (starts at -1) */
 		atomic_set(compound_mapcount_ptr(page), 0);
-		if (hpage_pincount_available(page))
-			atomic_set(compound_pincount_ptr(page), 0);
+		atomic_set(compound_pincount_ptr(page), 0);
 		__mod_lruvec_page_state(page, NR_ANON_THPS, nr);
 	} else {
@@ -2410,8 +2409,7 @@ void hugepage_add_new_anon_rmap(struct page *page,
 {
 	BUG_ON(address < vma->vm_start || address >= vma->vm_end);
 	atomic_set(compound_mapcount_ptr(page), 0);
-	if (hpage_pincount_available(page))
-		atomic_set(compound_pincount_ptr(page), 0);
+	atomic_set(compound_pincount_ptr(page), 0);
 	__page_set_anon_rmap(page, vma, address, 1);
 }
From patchwork Mon Jan 10 04:23:50 2022
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12708181
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Christoph Hellwig, William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 12/28] mm: Add folio_put_refs()
Date: Mon, 10 Jan 2022 04:23:50 +0000
Message-Id: <20220110042406.499429-13-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>

This is like folio_put(), but puts N references at once instead of
just one.  It's like put_page_refs(), but does one atomic operation
instead of two, and is available to more than just gup.c.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 include/linux/mm.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 598be27d4d2e..bf9624ca61c3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1234,6 +1234,26 @@ static inline void folio_put(struct folio *folio)
 		__put_page(&folio->page);
 }
 
+/**
+ * folio_put_refs - Reduce the reference count on a folio.
+ * @folio: The folio.
+ * @refs: The number of references to reduce.
+ *
+ * If the folio's reference count reaches zero, the memory will be
+ * released back to the page allocator and may be used by another
+ * allocation immediately.  Do not access the memory or the struct folio
+ * after calling folio_put_refs() unless you can be sure that these weren't
+ * the last references.
+ *
+ * Context: May be called in process or interrupt context, but not in NMI
+ * context.  May be called while holding a spinlock.
+ */
+static inline void folio_put_refs(struct folio *folio, int refs)
+{
+	if (folio_ref_sub_and_test(folio, refs))
+		__put_page(&folio->page);
+}
+
 static inline void put_page(struct page *page)
 {
 	struct folio *folio = page_folio(page);

From patchwork Mon Jan 10 04:23:51 2022
From: "Matthew Wilcox (Oracle)"
Subject: [PATCH v2 13/28] mm: Add folio_pincount_ptr()
Date: Mon, 10 Jan 2022 04:23:51 +0000
Message-Id: <20220110042406.499429-14-willy@infradead.org>

This is the folio equivalent of compound_pincount_ptr().
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 include/linux/mm_types.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 60e4595eaf63..34c7114ea9e9 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -312,6 +312,12 @@ static inline atomic_t *compound_mapcount_ptr(struct page *page)
 	return &page[1].compound_mapcount;
 }
 
+static inline atomic_t *folio_pincount_ptr(struct folio *folio)
+{
+	struct page *tail = &folio->page + 1;
+	return &tail->compound_pincount;
+}
+
 static inline atomic_t *compound_pincount_ptr(struct page *page)
 {
 	return &page[1].compound_pincount;

From patchwork Mon Jan 10 04:23:52 2022
From: "Matthew Wilcox (Oracle)"
Subject: [PATCH v2 14/28] mm: Convert page_maybe_dma_pinned() to use a folio
Date: Mon, 10 Jan 2022 04:23:52 +0000
Message-Id: <20220110042406.499429-15-willy@infradead.org>

Replace three calls to compound_head() with one.  This removes the
last user of compound_pincount(), so remove that helper too.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 include/linux/mm.h | 21 ++++++++-------------
 1 file changed, 8 insertions(+), 13 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index bf9624ca61c3..d3769897c8ac 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -934,13 +934,6 @@ static inline int head_compound_pincount(struct page *head)
 	return atomic_read(compound_pincount_ptr(head));
 }
 
-static inline int compound_pincount(struct page *page)
-{
-	VM_BUG_ON_PAGE(!PageCompound(page), page);
-	page = compound_head(page);
-	return head_compound_pincount(page);
-}
-
 static inline void set_compound_order(struct page *page, unsigned int order)
 {
 	page[1].compound_order = order;
@@ -1340,18 +1333,20 @@ void unpin_user_pages(struct page **pages, unsigned long npages);
  */
 static inline bool page_maybe_dma_pinned(struct page *page)
 {
-	if (PageCompound(page))
-		return compound_pincount(page) > 0;
+	struct folio *folio = page_folio(page);
+
+	if (folio_test_large(folio))
+		return atomic_read(folio_pincount_ptr(folio)) > 0;
 
 	/*
-	 * page_ref_count() is signed. If that refcount overflows, then
-	 * page_ref_count() returns a negative value, and callers will avoid
+	 * folio_ref_count() is signed. If that refcount overflows, then
+	 * folio_ref_count() returns a negative value, and callers will avoid
 	 * further incrementing the refcount.
 	 *
-	 * Here, for that overflow case, use the signed bit to count a little
+	 * Here, for that overflow case, use the sign bit to count a little
 	 * bit higher via unsigned math, and thus still get an accurate result.
 	 */
-	return ((unsigned int)page_ref_count(compound_head(page))) >=
+	return ((unsigned int)folio_ref_count(folio)) >=
 		GUP_PIN_COUNTING_BIAS;
 }

From patchwork Mon Jan 10 04:23:53 2022
From: "Matthew Wilcox (Oracle)"
Subject: [PATCH v2 15/28] gup: Add try_get_folio() and try_grab_folio()
Date: Mon, 10 Jan 2022 04:23:53 +0000
Message-Id: <20220110042406.499429-16-willy@infradead.org>

Convert try_get_compound_head() into try_get_folio() and convert
try_grab_compound_head() into try_grab_folio().  Also convert
hpage_pincount_add() to folio_pincount_add().  Add a temporary
try_grab_compound_head() wrapper around try_grab_folio() to let us
convert callers individually.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 mm/gup.c      | 104 +++++++++++++++++++++++++-------------------------
 mm/internal.h |   5 +++
 2 files changed, 56 insertions(+), 53 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 1282d29357b7..9e581201d679 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -30,21 +30,19 @@ struct follow_page_context {
 };
 
 /*
- * When pinning a compound page, use an exact count to track it, via
- * page_pincount_add/_sub().
+ * When pinning a large folio, use an exact count to track it.
  *
- * However, be sure to *also* increment the normal page refcount field
- * at least once, so that the page really is pinned. That's why the
- * refcount from the earlier try_get_compound_head() is left intact.
+ * However, be sure to *also* increment the normal folio refcount
+ * field at least once, so that the folio really is pinned.
+ * That's why the refcount from the earlier
+ * try_get_folio() is left intact.
 */
-static void page_pincount_add(struct page *page, int refs)
+static void folio_pincount_add(struct folio *folio, int refs)
 {
-	VM_BUG_ON_PAGE(page != compound_head(page), page);
-
-	if (PageHead(page))
-		atomic_add(refs, compound_pincount_ptr(page));
+	if (folio_test_large(folio))
+		atomic_add(refs, folio_pincount_ptr(folio));
 	else
-		page_ref_add(page, refs * (GUP_PIN_COUNTING_BIAS - 1));
+		folio_ref_add(folio, refs * (GUP_PIN_COUNTING_BIAS - 1));
 }
 
 static int page_pincount_sub(struct page *page, int refs)
@@ -76,75 +74,70 @@ static void put_page_refs(struct page *page, int refs)
 }
 
 /*
- * Return the compound head page with ref appropriately incremented,
+ * Return the folio with ref appropriately incremented,
  * or NULL if that failed.
 */
-static inline struct page *try_get_compound_head(struct page *page, int refs)
+static inline struct folio *try_get_folio(struct page *page, int refs)
 {
-	struct page *head;
+	struct folio *folio;
 
 retry:
-	head = compound_head(page);
-
-	if (WARN_ON_ONCE(page_ref_count(head) < 0))
+	folio = page_folio(page);
+	if (WARN_ON_ONCE(folio_ref_count(folio) < 0))
 		return NULL;
-	if (unlikely(!page_cache_add_speculative(head, refs)))
+	if (unlikely(!folio_ref_try_add_rcu(folio, refs)))
 		return NULL;
 
 	/*
-	 * At this point we have a stable reference to the head page; but it
-	 * could be that between the compound_head() lookup and the refcount
-	 * increment, the compound page was split, in which case we'd end up
-	 * holding a reference on a page that has nothing to do with the page
+	 * At this point we have a stable reference to the folio; but it
+	 * could be that between calling page_folio() and the refcount
+	 * increment, the folio was split, in which case we'd end up
+	 * holding a reference on a folio that has nothing to do with the page
 	 * we were given anymore.
-	 * So now that the head page is stable, recheck that the pages still
-	 * belong together.
+	 * So now that the folio is stable, recheck that the page still
+	 * belongs to this folio.
 	 */
-	if (unlikely(compound_head(page) != head)) {
-		put_page_refs(head, refs);
+	if (unlikely(page_folio(page) != folio)) {
+		folio_put_refs(folio, refs);
 		goto retry;
 	}
 
-	return head;
+	return folio;
 }
 
 /**
- * try_grab_compound_head() - attempt to elevate a page's refcount, by a
- * flags-dependent amount.
- *
- * Even though the name includes "compound_head", this function is still
- * appropriate for callers that have a non-compound @page to get.
- *
+ * try_grab_folio() - Attempt to get or pin a folio.
  * @page: pointer to page to be grabbed
- * @refs: the value to (effectively) add to the page's refcount
+ * @refs: the value to (effectively) add to the folio's refcount
 * @flags: gup flags: these are the FOLL_* flag values.
 *
 * "grab" names in this file mean, "look at flags to decide whether to use
- * FOLL_PIN or FOLL_GET behavior, when incrementing the page's refcount.
+ * FOLL_PIN or FOLL_GET behavior, when incrementing the folio's refcount.
 *
 * Either FOLL_PIN or FOLL_GET (or neither) must be set, but not both at the
 * same time. (That's true throughout the get_user_pages*() and
 * pin_user_pages*() APIs.) Cases:
 *
- * FOLL_GET: page's refcount will be incremented by @refs.
+ * FOLL_GET: folio's refcount will be incremented by @refs.
 *
- * FOLL_PIN on compound pages: page's refcount will be incremented by
- * @refs, and page[1].compound_pincount will be incremented by @refs.
+ * FOLL_PIN on large folios: folio's refcount will be incremented by
+ * @refs, and its compound_pincount will be incremented by @refs.
 *
- * FOLL_PIN on normal pages: page's refcount will be incremented by
+ * FOLL_PIN on single-page folios: folio's refcount will be incremented by
 * @refs * GUP_PIN_COUNTING_BIAS.
 *
- * Return: head page (with refcount appropriately incremented) for success, or
- * NULL upon failure. If neither FOLL_GET nor FOLL_PIN was set, that's
- * considered failure, and furthermore, a likely bug in the caller, so a warning
- * is also emitted.
+ * Return: The folio containing @page (with refcount appropriately
+ * incremented) for success, or NULL upon failure.  If neither FOLL_GET
+ * nor FOLL_PIN was set, that's considered failure, and furthermore,
+ * a likely bug in the caller, so a warning is also emitted.
 */
-struct page *try_grab_compound_head(struct page *page,
-		int refs, unsigned int flags)
+struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
 {
 	if (flags & FOLL_GET)
-		return try_get_compound_head(page, refs);
+		return try_get_folio(page, refs);
 	else if (flags & FOLL_PIN) {
+		struct folio *folio;
+
 		/*
 		 * Can't do FOLL_LONGTERM + FOLL_PIN gup fast path if not in a
 		 * right zone, so fail and let the caller fall back to the slow
@@ -158,21 +151,26 @@ struct page *try_grab_compound_head(struct page *page,
 		 * CAUTION: Don't use compound_head() on the page before this
 		 * point, the result won't be stable.
 		 */
-		page = try_get_compound_head(page, refs);
-		if (!page)
+		folio = try_get_folio(page, refs);
+		if (!folio)
 			return NULL;
 
-		page_pincount_add(page, refs);
-		mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_ACQUIRED,
-				    refs);
+		folio_pincount_add(folio, refs);
+		node_stat_mod_folio(folio, NR_FOLL_PIN_ACQUIRED, refs);
 
-		return page;
+		return folio;
 	}
 
 	WARN_ON_ONCE(1);
 	return NULL;
 }
 
+struct page *try_grab_compound_head(struct page *page,
+		int refs, unsigned int flags)
+{
+	return &try_grab_folio(page, refs, flags)->page;
+}
+
 static void put_compound_head(struct page *page, int refs, unsigned int flags)
 {
 	if (flags & FOLL_PIN) {
@@ -196,7 +194,7 @@ static void put_compound_head(struct page *page, int refs, unsigned int flags)
  * @flags: gup flags: these are the FOLL_* flag values.
 *
 * Either FOLL_PIN or FOLL_GET (or neither) may be set, but not both at the same
- * time. Cases: please see the try_grab_compound_head() documentation, with
+ * time. Cases: please see the try_grab_folio() documentation, with
 * "refs=1".
 *
 * Return: true for success, or if no action was required (if neither FOLL_PIN

diff --git a/mm/internal.h b/mm/internal.h
index 26af8a5a5be3..9a72d1ecdab4 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -723,4 +723,9 @@ void vunmap_range_noflush(unsigned long start, unsigned long end);
 int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
 		      unsigned long addr, int page_nid, int *flags);
 
+/*
+ * mm/gup.c
+ */
+struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags);
+
 #endif /* __MM_INTERNAL_H */

From patchwork Mon Jan 10 04:23:54 2022
From: "Matthew Wilcox (Oracle)"
Subject: [PATCH v2 16/28] mm: Remove page_cache_add_speculative() and page_cache_get_speculative()
Date: Mon, 10 Jan 2022 04:23:54 +0000
Message-Id: <20220110042406.499429-17-willy@infradead.org>

These wrappers have no more callers, so delete them.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 include/linux/mm.h      |  7 +++----
 include/linux/pagemap.h | 11 -----------
 2 files changed, 3 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d3769897c8ac..b249156f7cf1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1291,10 +1291,9 @@ static inline void put_page(struct page *page)
  * applications that don't have huge page reference counts, this won't be an
  * issue.
 *
- * Locking: the lockless algorithm described in page_cache_get_speculative()
- * and page_cache_gup_pin_speculative() provides safe operation for
- * get_user_pages and page_mkclean and other calls that race to set up page
- * table entries.
+ * Locking: the lockless algorithm described in folio_try_get_rcu()
+ * provides safe operation for get_user_pages(), page_mkclean() and
+ * other calls that race to set up page table entries.
 */
 #define GUP_PIN_COUNTING_BIAS (1U << 10)
 
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 704cb1b4b15d..4a63176b6417 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -283,17 +283,6 @@ static inline struct inode *folio_inode(struct folio *folio)
 	return folio->mapping->host;
 }
 
-static inline bool page_cache_add_speculative(struct page *page, int count)
-{
-	VM_BUG_ON_PAGE(PageTail(page), page);
-	return folio_ref_try_add_rcu((struct folio *)page, count);
-}
-
-static inline bool page_cache_get_speculative(struct page *page)
-{
-	return page_cache_add_speculative(page, 1);
-}
-
 /**
  * folio_attach_private - Attach private data to a folio.
  * @folio: Folio to attach data to.
From patchwork Mon Jan 10 04:23:55 2022
From: "Matthew Wilcox (Oracle)"
Subject: [PATCH v2 17/28] gup: Add gup_put_folio()
Date: Mon, 10 Jan 2022 04:23:55 +0000
Message-Id: <20220110042406.499429-18-willy@infradead.org>

Convert put_compound_head() to gup_put_folio() and hpage_pincount_sub()
to folio_pincount_sub().  This removes the last call to put_page_refs(),
so delete it.  Add a temporary put_compound_head() wrapper which will be
deleted by the end of this series.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 mm/gup.c | 42 ++++++++++++++----------------------------
 1 file changed, 14 insertions(+), 28 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 9e581201d679..719252fa0402 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -45,34 +45,15 @@ static void folio_pincount_add(struct folio *folio, int refs)
 		folio_ref_add(folio, refs * (GUP_PIN_COUNTING_BIAS - 1));
 }
 
-static int page_pincount_sub(struct page *page, int refs)
+static int folio_pincount_sub(struct folio *folio, int refs)
 {
-	VM_BUG_ON_PAGE(page != compound_head(page), page);
-
-	if (PageHead(page))
-		atomic_sub(refs, compound_pincount_ptr(page));
+	if (folio_test_large(folio))
+		atomic_sub(refs, folio_pincount_ptr(folio));
 	else
 		refs *= GUP_PIN_COUNTING_BIAS;
 	return refs;
 }
 
-/* Equivalent to calling put_page() @refs times. */
-static void put_page_refs(struct page *page, int refs)
-{
-#ifdef CONFIG_DEBUG_VM
-	if (VM_WARN_ON_ONCE_PAGE(page_ref_count(page) < refs, page))
-		return;
-#endif
-
-	/*
-	 * Calling put_page() for each ref is unnecessarily slow. Only the last
-	 * ref needs a put_page().
-	 */
-	if (refs > 1)
-		page_ref_sub(page, refs - 1);
-	put_page(page);
-}
-
 /*
  * Return the folio with ref appropriately incremented,
  * or NULL if that failed.
@@ -171,15 +152,20 @@ struct page *try_grab_compound_head(struct page *page,
 	return &try_grab_folio(page, refs, flags)->page;
 }
 
-static void put_compound_head(struct page *page, int refs, unsigned int flags)
+static void gup_put_folio(struct folio *folio, int refs, unsigned int flags)
 {
 	if (flags & FOLL_PIN) {
-		mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_RELEASED,
-				    refs);
-		refs = page_pincount_sub(page, refs);
+		node_stat_mod_folio(folio, NR_FOLL_PIN_RELEASED, refs);
+		refs = folio_pincount_sub(folio, refs);
 	}
 
-	put_page_refs(page, refs);
+	folio_put_refs(folio, refs);
+}
+
+static void put_compound_head(struct page *page, int refs, unsigned int flags)
+{
+	VM_BUG_ON_PAGE(PageTail(page), page);
+	gup_put_folio((struct folio *)page, refs, flags);
 }
 
 /**
@@ -220,7 +206,7 @@ bool __must_check try_grab_page(struct page *page, unsigned int flags)
  */
 void unpin_user_page(struct page *page)
 {
-	put_compound_head(compound_head(page), 1, FOLL_PIN);
+	gup_put_folio(page_folio(page), 1, FOLL_PIN);
 }
 EXPORT_SYMBOL(unpin_user_page);
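The pin/unpin accounting this patch converts can be sketched as a userspace toy model. This is an illustration only, not kernel code: `PIN_BIAS` stands in for GUP_PIN_COUNTING_BIAS, the `large` flag for folio_test_large(), and the two helpers loosely model the folio_pincount_add()/folio_pincount_sub() pairing, where large folios track pins in a separate counter while small folios encode each pin as PIN_BIAS extra references.

```c
#include <assert.h>
#include <stdbool.h>

#define PIN_BIAS 1024  /* stands in for GUP_PIN_COUNTING_BIAS */

/* Toy model of a folio's counters (assumption: not the real layout). */
struct toy_folio {
	int refcount;   /* loosely models folio_ref_count() */
	int pincount;   /* loosely models the large-folio pincount */
	bool large;     /* loosely models folio_test_large() */
};

/* Pin: large folios count pins separately; small folios take
 * PIN_BIAS references per pin on the main refcount. */
static void toy_pin(struct toy_folio *f, int refs)
{
	if (f->large) {
		f->refcount += refs;
		f->pincount += refs;
	} else {
		f->refcount += refs * PIN_BIAS;
	}
}

/* Unpin: mirrors the gup_put_folio() flow for FOLL_PIN, where
 * folio_pincount_sub() returns how many plain references to drop. */
static void toy_unpin(struct toy_folio *f, int refs)
{
	if (f->large)
		f->pincount -= refs;
	else
		refs *= PIN_BIAS;
	f->refcount -= refs;
}
```

Pinning and unpinning the same count returns both counters to their starting values, which is the invariant the conversion must preserve.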
From patchwork Mon Jan 10 04:23:56 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: John Hubbard, Christoph Hellwig, William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 18/28] hugetlb: Use try_grab_folio() instead of try_grab_compound_head()
Date: Mon, 10 Jan 2022 04:23:56 +0000
Message-Id: <20220110042406.499429-19-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>
follow_hugetlb_page() only cares about success or failure, so it doesn't
need to know the type of the returned pointer, only whether it's NULL
or not.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 include/linux/mm.h | 3 ---
 mm/gup.c           | 2 +-
 mm/hugetlb.c       | 7 +++----
 3 files changed, 4 insertions(+), 8 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b249156f7cf1..c103c6401ecd 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1195,9 +1195,6 @@ static inline void get_page(struct page *page)
 }
 
 bool __must_check try_grab_page(struct page *page, unsigned int flags);
-struct page *try_grab_compound_head(struct page *page, int refs,
-				    unsigned int flags);
-
 static inline __must_check bool try_get_page(struct page *page)
 {
diff --git a/mm/gup.c b/mm/gup.c
index 719252fa0402..20703de2f107 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -146,7 +146,7 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
 	return NULL;
 }
 
-struct page *try_grab_compound_head(struct page *page,
+static inline struct page *try_grab_compound_head(struct page *page,
 		int refs, unsigned int flags)
 {
 	return &try_grab_folio(page, refs, flags)->page;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index abcd1785c629..ab67b13c4a71 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6072,7 +6072,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		if (pages) {
 			/*
-			 * try_grab_compound_head() should always succeed here,
+			 * try_grab_folio() should always succeed here,
 			 * because: a) we hold the ptl lock, and b) we've just
 			 * checked that the huge page is present in the page
 			 * tables. If the huge page is present, then the tail
@@ -6081,9 +6081,8 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			 * any way. So this page must be available at this
 			 * point, unless the page refcount overflowed:
 			 */
-			if (WARN_ON_ONCE(!try_grab_compound_head(pages[i],
-								 refs,
-								 flags))) {
+			if (WARN_ON_ONCE(!try_grab_folio(pages[i], refs,
+							 flags))) {
 				spin_unlock(ptl);
 				remainder = 0;
 				err = -ENOMEM;
From patchwork Mon Jan 10 04:23:57 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: John Hubbard, Christoph Hellwig, William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 19/28] gup: Convert try_grab_page() to call try_grab_folio()
Date: Mon, 10 Jan 2022 04:23:57 +0000
Message-Id: <20220110042406.499429-20-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>

try_grab_page() only cares about success or failure, not about the
type returned.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 mm/gup.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/gup.c b/mm/gup.c
index 20703de2f107..c3e514172eaf 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -192,7 +192,7 @@ bool __must_check try_grab_page(struct page *page, unsigned int flags)
 	if (!(flags & (FOLL_GET | FOLL_PIN)))
 		return true;
 
-	return try_grab_compound_head(page, 1, flags);
+	return try_grab_folio(page, 1, flags);
 }
 
 /**
From patchwork Mon Jan 10 04:23:58 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: John Hubbard, Christoph Hellwig, William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 20/28] gup: Convert gup_pte_range() to use a folio
Date: Mon, 10 Jan 2022 04:23:58 +0000
Message-Id: <20220110042406.499429-21-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>

We still call try_grab_folio() once per PTE; a future patch could
optimise to just adjust the reference count for each page within
the folio.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 mm/gup.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index c3e514172eaf..27cc097ec05d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2235,7 +2235,8 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 	ptem = ptep = pte_offset_map(&pmd, addr);
 	do {
 		pte_t pte = ptep_get_lockless(ptep);
-		struct page *head, *page;
+		struct page *page;
+		struct folio *folio;
 
 		/*
 		 * Similar to the PMD case below, NUMA hinting must take slow
@@ -2262,22 +2263,20 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
 
-		head = try_grab_compound_head(page, 1, flags);
-		if (!head)
+		folio = try_grab_folio(page, 1, flags);
+		if (!folio)
 			goto pte_unmap;
 
 		if (unlikely(page_is_secretmem(page))) {
-			put_compound_head(head, 1, flags);
+			gup_put_folio(folio, 1, flags);
 			goto pte_unmap;
 		}
 
 		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
-			put_compound_head(head, 1, flags);
+			gup_put_folio(folio, 1, flags);
 			goto pte_unmap;
 		}
 
-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
-
 		/*
 		 * We need to make the page accessible if and only if we are
 		 * going to access its content (the FOLL_PIN case).  Please
@@ -2291,10 +2290,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 				goto pte_unmap;
 			}
 		}
-		SetPageReferenced(page);
+		folio_set_referenced(folio);
 		pages[*nr] = page;
 		(*nr)++;
-
 	} while (ptep++, addr += PAGE_SIZE, addr != end);
 
 	ret = 1;
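The control flow that gup_pte_range() keeps through this conversion is the lockless fast-GUP pattern: take a reference first, then re-read the PTE, and back the reference out if the PTE changed underneath. A minimal userspace sketch of that optimistic grab-then-recheck shape (names invented for illustration, not kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for a refcounted page/folio. */
struct toy_page { int refs; };

/* Grab a reference, then re-check that the PTE still holds the value
 * we originally read (pte_seen).  On a mismatch, drop the reference
 * and report failure so the caller can fall back to the slow path. */
static bool toy_gup_one(struct toy_page *page, const long *ptep,
			long pte_seen)
{
	page->refs++;			/* models try_grab_folio() */
	if (*ptep != pte_seen) {	/* PTE changed under us? */
		page->refs--;		/* models gup_put_folio() */
		return false;
	}
	return true;			/* reference is stable; keep it */
}
```

The key property is that a failed attempt leaves the refcount exactly where it started, so the slow path sees a clean state.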
From patchwork Mon Jan 10 04:23:59 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: John Hubbard, Christoph Hellwig, William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 21/28] gup: Convert gup_hugepte() to use a folio
Date: Mon, 10 Jan 2022 04:23:59 +0000
Message-Id: <20220110042406.499429-22-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>

There should be little to no effect from this patch; just removing
uses of some old APIs.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 mm/gup.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 27cc097ec05d..250326458df6 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2428,7 +2428,8 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 		       struct page **pages, int *nr)
 {
 	unsigned long pte_end;
-	struct page *head, *page;
+	struct page *page;
+	struct folio *folio;
 	pte_t pte;
 	int refs;
@@ -2444,21 +2445,20 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	/* hugepages are never "special" */
 	VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 
-	head = pte_page(pte);
-	page = nth_page(head, (addr & (sz-1)) >> PAGE_SHIFT);
+	page = nth_page(pte_page(pte), (addr & (sz - 1)) >> PAGE_SHIFT);
 	refs = record_subpages(page, addr, end, pages + *nr);
 
-	head = try_grab_compound_head(head, refs, flags);
-	if (!head)
+	folio = try_grab_folio(page, refs, flags);
+	if (!folio)
 		return 0;
 
 	if (unlikely(pte_val(pte) != pte_val(*ptep))) {
-		put_compound_head(head, refs, flags);
+		gup_put_folio(folio, refs, flags);
 		return 0;
 	}
 
 	*nr += refs;
-	SetPageReferenced(head);
+	folio_set_referenced(folio);
 	return 1;
 }
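The subpage arithmetic gup_hugepte() relies on is worth seeing in isolation. A small sketch under stated assumptions: `PAGE_SHIFT` is fixed at 12 here for illustration, `subpage_index` mirrors the `(addr & (sz - 1)) >> PAGE_SHIFT` expression that picks the first base page of the huge mapping covered by `addr`, and `subpage_count` models how many pages record_subpages() would hand back for a page-aligned `[addr, end)` range (the helper names are invented, not kernel API).

```c
#include <assert.h>

#define TOY_PAGE_SHIFT 12
#define TOY_PAGE_SIZE (1UL << TOY_PAGE_SHIFT)

/* Index of the first base page within a huge mapping of size sz
 * (sz must be a power of two), as in gup_hugepte(). */
static unsigned long subpage_index(unsigned long addr, unsigned long sz)
{
	return (addr & (sz - 1)) >> TOY_PAGE_SHIFT;
}

/* Number of base pages spanned by a page-aligned [addr, end) range,
 * i.e. the refs count record_subpages() would produce. */
static int subpage_count(unsigned long addr, unsigned long end)
{
	return (int)((end - addr) >> TOY_PAGE_SHIFT);
}
```

For a 2MB huge page, an address five base pages past the start yields index 5, and the grab takes one reference per base page in the remaining range.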
From patchwork Mon Jan 10 04:24:00 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: John Hubbard, Christoph Hellwig, William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 22/28] gup: Convert gup_huge_pmd() to use a folio
Date: Mon, 10 Jan 2022 04:24:00 +0000
Message-Id: <20220110042406.499429-23-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>
Use the new folio-based APIs.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 mm/gup.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 250326458df6..a006bce2d47b 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2492,7 +2492,8 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 		unsigned long end, unsigned int flags,
 		struct page **pages, int *nr)
 {
-	struct page *head, *page;
+	struct page *page;
+	struct folio *folio;
 	int refs;
 
 	if (!pmd_access_permitted(orig, flags & FOLL_WRITE))
@@ -2508,17 +2509,17 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 	page = nth_page(pmd_page(orig), (addr & ~PMD_MASK) >> PAGE_SHIFT);
 	refs = record_subpages(page, addr, end, pages + *nr);
 
-	head = try_grab_compound_head(pmd_page(orig), refs, flags);
-	if (!head)
+	folio = try_grab_folio(page, refs, flags);
+	if (!folio)
 		return 0;
 
 	if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
-		put_compound_head(head, refs, flags);
+		gup_put_folio(folio, refs, flags);
 		return 0;
 	}
 
 	*nr += refs;
-	SetPageReferenced(head);
+	folio_set_referenced(folio);
 	return 1;
 }
From patchwork Mon Jan 10 04:24:01 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: John Hubbard, Christoph Hellwig, William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 23/28] gup: Convert gup_huge_pud() to use a folio
Date: Mon, 10 Jan 2022 04:24:01 +0000
Message-Id: <20220110042406.499429-24-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>

Use the new folio-based APIs.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 mm/gup.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index a006bce2d47b..7b7bf8361558 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2527,7 +2527,8 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 		unsigned long end, unsigned int flags,
 		struct page **pages, int *nr)
 {
-	struct page *head, *page;
+	struct page *page;
+	struct folio *folio;
 	int refs;
 
 	if (!pud_access_permitted(orig, flags & FOLL_WRITE))
@@ -2543,17 +2544,17 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 	page = nth_page(pud_page(orig), (addr & ~PUD_MASK) >> PAGE_SHIFT);
 	refs = record_subpages(page, addr, end, pages + *nr);
 
-	head = try_grab_compound_head(pud_page(orig), refs, flags);
-	if (!head)
+	folio = try_grab_folio(page, refs, flags);
+	if (!folio)
 		return 0;
 
 	if (unlikely(pud_val(orig) != pud_val(*pudp))) {
-		put_compound_head(head, refs, flags);
+		gup_put_folio(folio, refs, flags);
 		return 0;
 	}
 
 	*nr += refs;
-	SetPageReferenced(head);
+	folio_set_referenced(folio);
 	return 1;
 }
From patchwork Mon Jan 10 04:24:02 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: John Hubbard, Christoph Hellwig, William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 24/28] gup: Convert gup_huge_pgd() to use a folio
Date: Mon, 10 Jan 2022 04:24:02 +0000
Message-Id: <20220110042406.499429-25-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>
fd3pgs6cpk4h9kezddmhuwk6sp37z4sa Authentication-Results: imf03.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=PDZZlizv; dmarc=none; spf=none (imf03.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-Rspamd-Server: rspam11 X-HE-Tag: 1641788659-742666 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Use the new folio-based APIs. This was the last user of try_grab_compound_head(), so remove it. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: John Hubbard --- mm/gup.c | 17 ++++++----------- 1 file changed, 6 insertions(+), 11 deletions(-) diff --git a/mm/gup.c b/mm/gup.c index 7b7bf8361558..b5786e83c418 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -146,12 +146,6 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags) return NULL; } -static inline struct page *try_grab_compound_head(struct page *page, - int refs, unsigned int flags) -{ - return &try_grab_folio(page, refs, flags)->page; -} - static void gup_put_folio(struct folio *folio, int refs, unsigned int flags) { if (flags & FOLL_PIN) { @@ -2563,7 +2557,8 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr, struct page **pages, int *nr) { int refs; - struct page *head, *page; + struct page *page; + struct folio *folio; if (!pgd_access_permitted(orig, flags & FOLL_WRITE)) return 0; @@ -2573,17 +2568,17 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr, page = nth_page(pgd_page(orig), (addr & ~PGDIR_MASK) >> PAGE_SHIFT); refs = record_subpages(page, addr, end, pages + *nr); - head = try_grab_compound_head(pgd_page(orig), refs, flags); - if (!head) + folio = try_grab_folio(page, refs, flags); + if (!folio) return 0; if (unlikely(pgd_val(orig) != pgd_val(*pgdp))) { - put_compound_head(head, refs, 
flags); + gup_put_folio(folio, refs, flags); return 0; } *nr += refs; - SetPageReferenced(head); + folio_set_referenced(folio); return 1; } From patchwork Mon Jan 10 04:24:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 12708175 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 89089C43217 for ; Mon, 10 Jan 2022 04:24:38 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E23596B0082; Sun, 9 Jan 2022 23:24:23 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id D5DB46B0083; Sun, 9 Jan 2022 23:24:23 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BB2126B0087; Sun, 9 Jan 2022 23:24:23 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0152.hostedemail.com [216.40.44.152]) by kanga.kvack.org (Postfix) with ESMTP id A1B106B0082 for ; Sun, 9 Jan 2022 23:24:23 -0500 (EST) Received: from smtpin05.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 6A69B96F15 for ; Mon, 10 Jan 2022 04:24:23 +0000 (UTC) X-FDA: 79013085606.05.64AF7E2 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf20.hostedemail.com (Postfix) with ESMTP id 0FBB91C0003 for ; Mon, 10 Jan 2022 04:24:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=SzJAO82OEH7GJ/mx7wGbrx//DOWsumHlBiQh3Qh2tnQ=; b=fjA0ppGWoAmzrPEL5HhRkIA+hr 
fzlhnHNr4mbg+FEjXHmiirYvrOledQFtXWc+8V+wEmsSXldqHGRyDqFahNY1GgB+Djy/o6DrumHiW a9LAONzmOJGdPQ7c1X+jl+BpOYhqLCvUzgUkFc5bKyNcnzdG1wEL80ExG+BOgn/L3HoDTLwJ00VOW hhsvdm2fk05IAB25GVH39Ip69aba3Bk3Ba2n7Zx7f0L9KpUOCn6Dn9tZ9Gaa2nSJrlxOzyUSIkITB zzm50sqecdqw8qZaSbFuY/V8Qovp1Z8QLFT4/kSUZZ8KBmhTI7AWTA9RGWJC0tgQscXCShYQFp65Z wSTTeGFg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1n6mE9-0025xE-OM; Mon, 10 Jan 2022 04:24:13 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , John Hubbard , Christoph Hellwig , William Kucharski , linux-kernel@vger.kernel.org, Jason Gunthorpe Subject: [PATCH v2 25/28] gup: Convert compound_next() to gup_folio_next() Date: Mon, 10 Jan 2022 04:24:03 +0000 Message-Id: <20220110042406.499429-26-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220110042406.499429-1-willy@infradead.org> References: <20220110042406.499429-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: 0FBB91C0003 X-Stat-Signature: us5sujiinfbgqp6yrpqx54qorxuucd7e Authentication-Results: imf20.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=fjA0ppGW; spf=none (imf20.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1641788662-210955 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Convert both callers to work on folios instead of pages. 
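The loop being renamed here batches consecutive entries of the pages array that belong to the same folio, so one reference drop can cover a whole run instead of one per page. A minimal userspace sketch of that scan, with plain integers as hypothetical stand-ins for `page_folio()` results (not the kernel types):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace sketch of the gup_folio_next() scan: each entry of "ids"
 * stands in for page_folio(list[i]).  Starting at index i, count how
 * many consecutive entries map to the same folio, reporting the run
 * length through *ntails just like the kernel loop does. */
static int folio_run(const int *ids, size_t i, size_t npages,
		     unsigned int *ntails)
{
	size_t nr;

	for (nr = i + 1; nr < npages; nr++)
		if (ids[nr] != ids[i])
			break;
	*ntails = (unsigned int)(nr - i);
	return ids[i];	/* the "folio" this batch belongs to */
}
```

A caller advances by `*ntails` each iteration, dropping that many references from one folio at a time rather than calling a put function once per page.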
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 mm/gup.c | 41 ++++++++++++++++++++++-------------------
 1 file changed, 22 insertions(+), 19 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index b5786e83c418..0cf2d5fd8d2d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -220,20 +220,20 @@ static inline struct page *compound_range_next(unsigned long i,
 	return page;
 }
 
-static inline struct page *compound_next(unsigned long i,
+static inline struct folio *gup_folio_next(unsigned long i,
 		unsigned long npages, struct page **list, unsigned int *ntails)
 {
-	struct page *page;
+	struct folio *folio;
 	unsigned int nr;
 
-	page = compound_head(list[i]);
+	folio = page_folio(list[i]);
 	for (nr = i + 1; nr < npages; nr++) {
-		if (compound_head(list[nr]) != page)
+		if (page_folio(list[nr]) != folio)
 			break;
 	}
 
 	*ntails = nr - i;
-	return page;
+	return folio;
 }
 
 /**
@@ -261,17 +261,17 @@ static inline struct page *compound_next(unsigned long i,
 void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
 				 bool make_dirty)
 {
-	unsigned long index;
-	struct page *head;
-	unsigned int ntails;
+	unsigned long i;
+	struct folio *folio;
+	unsigned int nr;
 
 	if (!make_dirty) {
 		unpin_user_pages(pages, npages);
 		return;
 	}
 
-	for (index = 0; index < npages; index += ntails) {
-		head = compound_next(index, npages, pages, &ntails);
+	for (i = 0; i < npages; i += nr) {
+		folio = gup_folio_next(i, npages, pages, &nr);
 		/*
 		 * Checking PageDirty at this point may race with
 		 * clear_page_dirty_for_io(), but that's OK. Two key
@@ -292,9 +292,12 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
 		 * written back, so it gets written back again in the
 		 * next writeback cycle. This is harmless.
 		 */
-		if (!PageDirty(head))
-			set_page_dirty_lock(head);
-		put_compound_head(head, ntails, FOLL_PIN);
+		if (!folio_test_dirty(folio)) {
+			folio_lock(folio);
+			folio_mark_dirty(folio);
+			folio_unlock(folio);
+		}
+		gup_put_folio(folio, nr, FOLL_PIN);
 	}
 }
 EXPORT_SYMBOL(unpin_user_pages_dirty_lock);
@@ -347,9 +350,9 @@ EXPORT_SYMBOL(unpin_user_page_range_dirty_lock);
  */
 void unpin_user_pages(struct page **pages, unsigned long npages)
 {
-	unsigned long index;
-	struct page *head;
-	unsigned int ntails;
+	unsigned long i;
+	struct folio *folio;
+	unsigned int nr;
 
 	/*
 	 * If this WARN_ON() fires, then the system *might* be leaking pages (by
@@ -359,9 +362,9 @@ void unpin_user_pages(struct page **pages, unsigned long npages)
 	if (WARN_ON(IS_ERR_VALUE(npages)))
 		return;
 
-	for (index = 0; index < npages; index += ntails) {
-		head = compound_next(index, npages, pages, &ntails);
-		put_compound_head(head, ntails, FOLL_PIN);
+	for (i = 0; i < npages; i += nr) {
+		folio = gup_folio_next(i, npages, pages, &nr);
+		gup_put_folio(folio, nr, FOLL_PIN);
 	}
 }
 EXPORT_SYMBOL(unpin_user_pages);

From patchwork Mon Jan 10 04:24:04 2022
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Christoph Hellwig,
 William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 26/28] gup: Convert compound_range_next() to
 gup_folio_range_next()
Date: Mon, 10 Jan 2022 04:24:04 +0000
Message-Id: <20220110042406.499429-27-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>
References: <20220110042406.499429-1-willy@infradead.org>

Convert the only caller to work on folios instead of pages.  This
removes the last caller of put_compound_head(), so delete it.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 include/linux/mm.h |  4 ++--
 mm/gup.c           | 38 ++++++++++++++++++--------------------
 2 files changed, 20 insertions(+), 22 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c103c6401ecd..1ddb0a55b5ca 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -216,10 +216,10 @@ int overcommit_policy_handler(struct ctl_table *, int, void *, size_t *,
 
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
-#define page_nth(head, tail) (page_to_pfn(tail) - page_to_pfn(head))
+#define folio_nth(folio, page) (page_to_pfn(page) - folio_pfn(folio))
 #else
 #define nth_page(page,n) ((page) + (n))
-#define page_nth(head, tail) ((tail) - (head))
+#define folio_nth(folio, tail) ((tail) - &(folio)->page)
 #endif
 
 /* to align the pointer to the (next) page boundary */
diff --git a/mm/gup.c b/mm/gup.c
index 0cf2d5fd8d2d..1cdd5f2887a8 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -156,12 +156,6 @@ static void gup_put_folio(struct folio *folio, int refs, unsigned int flags)
 	folio_put_refs(folio, refs);
 }
 
-static void put_compound_head(struct page *page, int refs, unsigned int flags)
-{
-	VM_BUG_ON_PAGE(PageTail(page), page);
-	gup_put_folio((struct folio *)page, refs, flags);
-}
-
 /**
  * try_grab_page() - elevate a page's refcount by a flag-dependent amount
  *
@@ -204,20 +198,21 @@ void unpin_user_page(struct page *page)
 }
 EXPORT_SYMBOL(unpin_user_page);
 
-static inline struct page *compound_range_next(unsigned long i,
+static inline struct folio *gup_folio_range_next(unsigned long i,
 		unsigned long npages, struct page *start, unsigned int *ntails)
 {
-	struct page *next, *page;
+	struct page *next;
+	struct folio *folio;
 	unsigned int nr = 1;
 
 	next = nth_page(start, i);
-	page = compound_head(next);
-	if (PageHead(page))
+	folio = page_folio(next);
+	if (folio_test_large(folio))
 		nr = min_t(unsigned int, npages - i,
-			   compound_nr(page) - page_nth(page, next));
+			   folio_nr_pages(folio) - folio_nth(folio, next));
 
 	*ntails = nr;
-	return page;
+	return folio;
 }
 
 static inline struct folio *gup_folio_next(unsigned long i,
@@ -326,15 +321,18 @@ EXPORT_SYMBOL(unpin_user_pages_dirty_lock);
 void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages,
 				      bool make_dirty)
 {
-	unsigned long index;
-	struct page *head;
-	unsigned int ntails;
+	unsigned long i;
+	struct folio *folio;
+	unsigned int nr;
 
-	for (index = 0; index < npages; index += ntails) {
-		head = compound_range_next(index, npages, page, &ntails);
-		if (make_dirty && !PageDirty(head))
-			set_page_dirty_lock(head);
-		put_compound_head(head, ntails, FOLL_PIN);
+	for (i = 0; i < npages; i += nr) {
+		folio = gup_folio_range_next(i, npages, page, &nr);
+		if (make_dirty && !folio_test_dirty(folio)) {
+			folio_lock(folio);
+			folio_mark_dirty(folio);
+			folio_unlock(folio);
+		}
+		gup_put_folio(folio, nr, FOLL_PIN);
 	}
 }
 EXPORT_SYMBOL(unpin_user_page_range_dirty_lock);

From patchwork Mon Jan 10 04:24:05 2022
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Christoph Hellwig,
 William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 27/28] mm: Add isolate_lru_folio()
Date: Mon, 10 Jan 2022 04:24:05 +0000
Message-Id: <20220110042406.499429-28-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>
References: <20220110042406.499429-1-willy@infradead.org>

Turn isolate_lru_page() into a wrapper around isolate_lru_folio().
TestClearPageLRU() would have always failed on a tail page, so
returning -EBUSY is the same behaviour.
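The compat-wrapper shape this patch introduces — a legacy page-based entry point that rejects tail pages up front and forwards everything else to the folio-based implementation — can be sketched in userspace C. The `struct page`/`struct folio` definitions and the error value below are simplified stand-ins for illustration, not the kernel's:

```c
#include <assert.h>

#define EBUSY_SKETCH 16		/* stand-in for the kernel's EBUSY */

struct folio { int on_lru; };
struct page  { struct folio *head; int is_tail; };

/* Folio-based implementation: claim the LRU flag, or report -EBUSY
 * if the folio is not on an LRU list. */
static int isolate_lru_folio_sketch(struct folio *folio)
{
	if (!folio->on_lru)
		return -EBUSY_SKETCH;
	folio->on_lru = 0;
	return 0;
}

/* Legacy wrapper: a tail page could never have the LRU flag set, so
 * returning -EBUSY up front preserves the old behaviour. */
static int isolate_lru_page_sketch(struct page *page)
{
	if (page->is_tail)
		return -EBUSY_SKETCH;
	return isolate_lru_folio_sketch(page->head);
}
```

The second call on the same head page fails because the first already claimed the LRU flag, mirroring the test-and-clear semantics the commit message relies on.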
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 arch/powerpc/include/asm/mmu_context.h |  1 -
 mm/folio-compat.c                      |  8 +++++
 mm/internal.h                          |  3 +-
 mm/vmscan.c                            | 43 ++++++++++++--------------
 4 files changed, 29 insertions(+), 26 deletions(-)

diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 9ba6b585337f..b9cab0a11421 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -21,7 +21,6 @@ extern void destroy_context(struct mm_struct *mm);
 #ifdef CONFIG_SPAPR_TCE_IOMMU
 struct mm_iommu_table_group_mem_t;
 
-extern int isolate_lru_page(struct page *page);	/* from internal.h */
 extern bool mm_iommu_preregistered(struct mm_struct *mm);
 extern long mm_iommu_new(struct mm_struct *mm,
 		unsigned long ua, unsigned long entries,
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 749555a232a8..782e766cd1ee 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include "internal.h"
 
 struct address_space *page_mapping(struct page *page)
 {
@@ -151,3 +152,10 @@ int try_to_release_page(struct page *page, gfp_t gfp)
 	return filemap_release_folio(page_folio(page), gfp);
 }
 EXPORT_SYMBOL(try_to_release_page);
+
+int isolate_lru_page(struct page *page)
+{
+	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
+		return -EBUSY;
+	return isolate_lru_folio((struct folio *)page);
+}
diff --git a/mm/internal.h b/mm/internal.h
index 9a72d1ecdab4..8b90db90e7f2 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -157,7 +157,8 @@ extern unsigned long highest_memmap_pfn;
 /*
  * in mm/vmscan.c:
  */
-extern int isolate_lru_page(struct page *page);
+int isolate_lru_page(struct page *page);
+int isolate_lru_folio(struct folio *folio);
 extern void putback_lru_page(struct page *page);
 extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index fb9584641ac7..ac2f5b76cdb2 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2168,45 +2168,40 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 }
 
 /**
- * isolate_lru_page - tries to isolate a page from its LRU list
- * @page: page to isolate from its LRU list
+ * isolate_lru_folio - Try to isolate a folio from its LRU list.
+ * @folio: Folio to isolate from its LRU list.
  *
- * Isolates a @page from an LRU list, clears PageLRU and adjusts the
- * vmstat statistic corresponding to whatever LRU list the page was on.
+ * Isolate a @folio from an LRU list and adjust the vmstat statistic
+ * corresponding to whatever LRU list the folio was on.
  *
- * Returns 0 if the page was removed from an LRU list.
- * Returns -EBUSY if the page was not on an LRU list.
- *
- * The returned page will have PageLRU() cleared.  If it was found on
- * the active list, it will have PageActive set.  If it was found on
- * the unevictable list, it will have the PageUnevictable bit set. That flag
+ * The folio will have its LRU flag cleared.  If it was found on the
+ * active list, it will have the Active flag set.  If it was found on the
+ * unevictable list, it will have the Unevictable flag set.  These flags
  * may need to be cleared by the caller before letting the page go.
 *
- * The vmstat statistic corresponding to the list on which the page was
- * found will be decremented.
- *
- * Restrictions:
+ * Context:
 *
 * (1) Must be called with an elevated refcount on the page. This is a
- * fundamental difference from isolate_lru_pages (which is called
+ * fundamental difference from isolate_lru_pages() (which is called
 * without a stable reference).
- * (2) the lru_lock must not be held.
- * (3) interrupts must be enabled.
+ * (2) The lru_lock must not be held.
+ * (3) Interrupts must be enabled.
+ *
+ * Return: 0 if the folio was removed from an LRU list.
+ * -EBUSY if the folio was not on an LRU list.
 */
-int isolate_lru_page(struct page *page)
+int isolate_lru_folio(struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
 	int ret = -EBUSY;
 
-	VM_BUG_ON_PAGE(!page_count(page), page);
-	WARN_RATELIMIT(PageTail(page), "trying to isolate tail page");
+	VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio);
 
-	if (TestClearPageLRU(page)) {
+	if (folio_test_clear_lru(folio)) {
 		struct lruvec *lruvec;
 
-		get_page(page);
+		folio_get(folio);
 		lruvec = folio_lruvec_lock_irq(folio);
-		del_page_from_lru_list(page, lruvec);
+		lruvec_del_folio(lruvec, folio);
 		unlock_page_lruvec_irq(lruvec);
 		ret = 0;
 	}

From patchwork Mon Jan 10 04:24:06 2022
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Christoph Hellwig,
 William Kucharski, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v2 28/28] gup: Convert check_and_migrate_movable_pages() to
 use a folio
Date: Mon, 10 Jan 2022 04:24:06 +0000
Message-Id: <20220110042406.499429-29-willy@infradead.org>
In-Reply-To: <20220110042406.499429-1-willy@infradead.org>
References: <20220110042406.499429-1-willy@infradead.org>

Switch from head pages to folios.  This removes an assumption that
THPs are the only way to have a high-order page.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 mm/gup.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 1cdd5f2887a8..b2d109626c44 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1801,41 +1801,41 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages,
 	bool drain_allow = true;
 	LIST_HEAD(movable_page_list);
 	long ret = 0;
-	struct page *prev_head = NULL;
-	struct page *head;
+	struct folio *folio, *prev_folio = NULL;
 	struct migration_target_control mtc = {
 		.nid = NUMA_NO_NODE,
 		.gfp_mask = GFP_USER | __GFP_NOWARN,
 	};
 
 	for (i = 0; i < nr_pages; i++) {
-		head = compound_head(pages[i]);
-		if (head == prev_head)
+		folio = page_folio(pages[i]);
+		if (folio == prev_folio)
 			continue;
-		prev_head = head;
+		prev_folio = folio;
 		/*
 		 * If we get a movable page, since we are going to be pinning
 		 * these entries, try to move them out if possible.
 		 */
-		if (!is_pinnable_page(head)) {
-			if (PageHuge(head)) {
-				if (!isolate_huge_page(head, &movable_page_list))
+		if (!is_pinnable_page(&folio->page)) {
+			if (folio_test_hugetlb(folio)) {
+				if (!isolate_huge_page(&folio->page,
+							&movable_page_list))
 					isolation_error_count++;
 			} else {
-				if (!PageLRU(head) && drain_allow) {
+				if (!folio_test_lru(folio) && drain_allow) {
 					lru_add_drain_all();
 					drain_allow = false;
 				}
 
-				if (isolate_lru_page(head)) {
+				if (isolate_lru_folio(folio)) {
 					isolation_error_count++;
 					continue;
 				}
 
-				list_add_tail(&head->lru, &movable_page_list);
-				mod_node_page_state(page_pgdat(head),
+				list_add_tail(&folio->lru, &movable_page_list);
+				node_stat_mod_folio(folio,
 						    NR_ISOLATED_ANON +
-						    page_is_file_lru(head),
-						    thp_nr_pages(head));
+						    folio_is_file_lru(folio),
+						    folio_nr_pages(folio));
 			}
 		}
 	}
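The `prev_folio` comparison in the loop above is a cheap de-duplication: pages pinned from the same folio usually arrive consecutively, so remembering only the previous folio skips repeat work without any lookup structure. A userspace sketch of the same pattern, with non-negative integers as hypothetical stand-ins for `page_folio()` results:

```c
#include <assert.h>
#include <stddef.h>

/* Count how many entries would actually be processed when consecutive
 * duplicates are skipped via a prev-folio check.  Assumes folio ids
 * are non-negative so -1 can mean "no previous folio". */
static size_t distinct_runs(const int *folio_ids, size_t n)
{
	size_t i, processed = 0;
	int prev = -1;

	for (i = 0; i < n; i++) {
		if (folio_ids[i] == prev)
			continue;	/* same folio as last entry: skip */
		prev = folio_ids[i];
		processed++;
	}
	return processed;
}
```

Note this only catches consecutive repeats — a sequence like {5, 2, 5} is still processed three times — which matches the kernel loop's behaviour and is why it stays O(n) with no extra state.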