From patchwork Sun Jan  2 21:57:20 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	John Hubbard, Andrew Morton
Subject: [PATCH 08/17] gup: Add try_grab_folio()
Date: Sun,  2 Jan 2022 21:57:20 +0000
Message-Id: <20220102215729.2943705-9-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>
References: <20220102215729.2943705-1-willy@infradead.org>
List-ID: <linux-mm.kvack.org>

try_grab_compound_head() is turned into a call to try_grab_folio().
Convert the two callers who only care about a boolean success/fail.
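[Not part of the patch: a userspace sketch of the conversion the commit message describes. The struct layouts, flag values, and refcount handling here are simplified stand-ins, not the kernel's; the point is only that a caller needing just success/fail can test try_grab_folio()'s returned pointer directly, which is how try_grab_page() is converted in the diff.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative flag values; the kernel's FOLL_* constants differ. */
#define FOLL_GET 0x04
#define FOLL_PIN 0x40000

struct folio { int refcount; };
struct page  { struct folio *folio; };

/* Sketch of the new primitive: grab @refs references, or return NULL. */
static struct folio *try_grab_folio(struct page *page, int refs,
				    unsigned int flags)
{
	if (!(flags & (FOLL_GET | FOLL_PIN)))
		return NULL;	/* a likely caller bug in the real code */
	page->folio->refcount += refs;
	return page->folio;
}

/* Boolean caller, mirroring the converted try_grab_page(). */
static bool try_grab_page(struct page *page, unsigned int flags)
{
	if (!(flags & (FOLL_GET | FOLL_PIN)))
		return true;	/* no action required counts as success */
	return try_grab_folio(page, 1, flags) != NULL;
}
```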
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: John Hubbard
---
 include/linux/mm.h |  4 +---
 mm/gup.c           | 25 +++++++++++++------------
 mm/hugetlb.c       |  7 +++----
 3 files changed, 17 insertions(+), 19 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 602de23482ef..4e763a590c9c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1202,9 +1202,7 @@ static inline void get_page(struct page *page)
 }
 
 bool __must_check try_grab_page(struct page *page, unsigned int flags);
-struct page *try_grab_compound_head(struct page *page, int refs,
-				    unsigned int flags);
-
+struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags);
 
 static inline __must_check bool try_get_page(struct page *page)
 {
diff --git a/mm/gup.c b/mm/gup.c
index 6d827f7d66d8..2307b2917055 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -76,12 +76,8 @@ static inline struct folio *try_get_folio(struct page *page, int refs)
 }
 
 /**
- * try_grab_compound_head() - attempt to elevate a page's refcount, by a
+ * try_grab_folio() - attempt to elevate a page's refcount, by a
  * flags-dependent amount.
- *
- * Even though the name includes "compound_head", this function is still
- * appropriate for callers that have a non-compound @page to get.
- *
  * @page: pointer to page to be grabbed
  * @refs: the value to (effectively) add to the page's refcount
  * @flags: gup flags: these are the FOLL_* flag values.
@@ -102,16 +98,15 @@ static inline struct folio *try_get_folio(struct page *page, int refs)
  * FOLL_PIN on normal pages, or compound pages that are two pages long:
  * page's refcount will be incremented by @refs * GUP_PIN_COUNTING_BIAS.
  *
- * Return: head page (with refcount appropriately incremented) for success, or
+ * Return: folio (with refcount appropriately incremented) for success, or
  * NULL upon failure. If neither FOLL_GET nor FOLL_PIN was set, that's
  * considered failure, and furthermore, a likely bug in the caller, so a warning
  * is also emitted.
 */
-struct page *try_grab_compound_head(struct page *page,
-				    int refs, unsigned int flags)
+struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
 {
 	if (flags & FOLL_GET)
-		return &try_get_folio(page, refs)->page;
+		return try_get_folio(page, refs);
 	else if (flags & FOLL_PIN) {
 		struct folio *folio;
 
@@ -150,13 +145,19 @@ struct page *try_grab_compound_head(struct page *page,
 
 		node_stat_mod_folio(folio, NR_FOLL_PIN_ACQUIRED, refs);
 
-		return &folio->page;
+		return folio;
 	}
 
 	WARN_ON_ONCE(1);
 	return NULL;
 }
 
+static inline struct page *try_grab_compound_head(struct page *page,
+		int refs, unsigned int flags)
+{
+	return &try_grab_folio(page, refs, flags)->page;
+}
+
 static void gup_put_folio(struct folio *folio, int refs, unsigned int flags)
 {
 	if (flags & FOLL_PIN) {
@@ -188,7 +189,7 @@ static void put_compound_head(struct page *page, int refs, unsigned int flags)
  * @flags: gup flags: these are the FOLL_* flag values.
  *
  * Either FOLL_PIN or FOLL_GET (or neither) may be set, but not both at the same
- * time. Cases: please see the try_grab_compound_head() documentation, with
+ * time. Cases: please see the try_grab_folio() documentation, with
  * "refs=1".
  *
  * Return: true for success, or if no action was required (if neither FOLL_PIN
@@ -200,7 +201,7 @@ bool __must_check try_grab_page(struct page *page, unsigned int flags)
 	if (!(flags & (FOLL_GET | FOLL_PIN)))
 		return true;
 
-	return try_grab_compound_head(page, 1, flags);
+	return try_grab_folio(page, 1, flags);
 }
 
 /**
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index abcd1785c629..ab67b13c4a71 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6072,7 +6072,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 
 		if (pages) {
 			/*
-			 * try_grab_compound_head() should always succeed here,
+			 * try_grab_folio() should always succeed here,
 			 * because: a) we hold the ptl lock, and b) we've just
 			 * checked that the huge page is present in the page
 			 * tables.  If the huge page is present, then the tail
@@ -6081,9 +6081,8 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			 * any way. So this page must be available at this
 			 * point, unless the page refcount overflowed:
 			 */
-			if (WARN_ON_ONCE(!try_grab_compound_head(pages[i],
-								 refs,
-								 flags))) {
+			if (WARN_ON_ONCE(!try_grab_folio(pages[i], refs,
+							 flags))) {
 				spin_unlock(ptl);
 				remainder = 0;
 				err = -ENOMEM;
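
[Not part of the patch: the refcount arithmetic spelled out in the try_grab_folio() kerneldoc, modeled outside the kernel. GUP_PIN_COUNTING_BIAS is genuinely 1U << 10 upstream; the helper functions are illustrative stand-ins, not kernel APIs.]

```c
#include <assert.h>

/* Matches the kernel's value: 1U << 10 == 1024. */
#define GUP_PIN_COUNTING_BIAS (1U << 10)

/* FOLL_GET: the folio's refcount is bumped by exactly @refs. */
static unsigned int get_delta(unsigned int refs)
{
	return refs;
}

/*
 * FOLL_PIN on normal pages (or compound pages that are two pages
 * long): the refcount is bumped by @refs * GUP_PIN_COUNTING_BIAS,
 * so pins can later be told apart from plain references.
 */
static unsigned int pin_delta(unsigned int refs)
{
	return refs * GUP_PIN_COUNTING_BIAS;
}
```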