From patchwork Fri Feb 4 19:57:51 2022
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12735588
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org,
    Christoph Hellwig, John Hubbard, Jason Gunthorpe, William Kucharski
Subject: [PATCH 14/75] mm: Turn page_maybe_dma_pinned() into folio_maybe_dma_pinned()
Date: Fri, 4 Feb 2022 19:57:51 +0000
Message-Id: <20220204195852.1751729-15-willy@infradead.org>
In-Reply-To: <20220204195852.1751729-1-willy@infradead.org>
References: <20220204195852.1751729-1-willy@infradead.org>

Replace three calls to compound_head() with one. This removes the last
user of compound_pincount(), so remove that helper too.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
Reviewed-by: Jason Gunthorpe
Reviewed-by: William Kucharski
---
 include/linux/mm.h | 49 +++++++++++++++++++++--------------------------
 1 file changed, 23 insertions(+), 26 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d5f0f2cfd552..a29dacec7294 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -901,13 +901,6 @@ static inline int head_compound_pincount(struct page *head)
 	return atomic_read(compound_pincount_ptr(head));
 }
 
-static inline int compound_pincount(struct page *page)
-{
-	VM_BUG_ON_PAGE(!PageCompound(page), page);
-	page = compound_head(page);
-	return head_compound_pincount(page);
-}
-
 static inline void set_compound_order(struct page *page, unsigned int order)
 {
 	page[1].compound_order = order;
@@ -1280,48 +1273,52 @@ void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages,
 void unpin_user_pages(struct page **pages, unsigned long npages);
 
 /**
- * page_maybe_dma_pinned - Report if a page is pinned for DMA.
- * @page: The page.
+ * folio_maybe_dma_pinned - Report if a folio may be pinned for DMA.
+ * @folio: The folio.
  *
- * This function checks if a page has been pinned via a call to
+ * This function checks if a folio has been pinned via a call to
  * a function in the pin_user_pages() family.
  *
- * For non-huge pages, the return value is partially fuzzy: false is not fuzzy,
+ * For small folios, the return value is partially fuzzy: false is not fuzzy,
  * because it means "definitely not pinned for DMA", but true means "probably
  * pinned for DMA, but possibly a false positive due to having at least
- * GUP_PIN_COUNTING_BIAS worth of normal page references".
+ * GUP_PIN_COUNTING_BIAS worth of normal folio references".
  *
- * False positives are OK, because: a) it's unlikely for a page to get that many
- * refcounts, and b) all the callers of this routine are expected to be able to
- * deal gracefully with a false positive.
+ * False positives are OK, because: a) it's unlikely for a folio to
+ * get that many refcounts, and b) all the callers of this routine are
+ * expected to be able to deal gracefully with a false positive.
  *
- * For huge pages, the result will be exactly correct. That's because we have
- * more tracking data available: the 3rd struct page in the compound page is
- * used to track the pincount (instead using of the GUP_PIN_COUNTING_BIAS
- * scheme).
+ * For large folios, the result will be exactly correct. That's because
+ * we have more tracking data available: the compound_pincount is used
+ * instead of the GUP_PIN_COUNTING_BIAS scheme.
  *
  * For more information, please see Documentation/core-api/pin_user_pages.rst.
  *
  * Return: True, if it is likely that the page has been "dma-pinned".
  * False, if the page is definitely not dma-pinned.
  */
-static inline bool page_maybe_dma_pinned(struct page *page)
+static inline bool folio_maybe_dma_pinned(struct folio *folio)
 {
-	if (PageCompound(page))
-		return compound_pincount(page) > 0;
+	if (folio_test_large(folio))
+		return atomic_read(folio_pincount_ptr(folio)) > 0;
 
 	/*
-	 * page_ref_count() is signed. If that refcount overflows, then
-	 * page_ref_count() returns a negative value, and callers will avoid
+	 * folio_ref_count() is signed. If that refcount overflows, then
+	 * folio_ref_count() returns a negative value, and callers will avoid
 	 * further incrementing the refcount.
 	 *
-	 * Here, for that overflow case, use the signed bit to count a little
+	 * Here, for that overflow case, use the sign bit to count a little
 	 * bit higher via unsigned math, and thus still get an accurate result.
 	 */
-	return ((unsigned int)page_ref_count(compound_head(page))) >=
+	return ((unsigned int)folio_ref_count(folio)) >=
 		GUP_PIN_COUNTING_BIAS;
 }
 
+static inline bool page_maybe_dma_pinned(struct page *page)
+{
+	return folio_maybe_dma_pinned(page_folio(page));
+}
+
 static inline bool is_cow_mapping(vm_flags_t flags)
 {
 	return (flags & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE;