From patchwork Sun Jan 2 21:57:16 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Andrew Morton
Subject: [PATCH 04/17] mm: Convert page_maybe_dma_pinned() to use a folio
Date: Sun, 2 Jan 2022 21:57:16 +0000
Message-Id: <20220102215729.2943705-5-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>
References: <20220102215729.2943705-1-willy@infradead.org>

Replace three calls to compound_head() with one.  This removes the last
user of compound_pincount(), so remove that helper too.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
---
 include/linux/mm.h | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 269b5484d66e..00dcea53bb96 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -947,13 +947,6 @@ static inline int head_compound_pincount(struct page *head)
 	return atomic_read(compound_pincount_ptr(head));
 }
 
-static inline int compound_pincount(struct page *page)
-{
-	VM_BUG_ON_PAGE(!hpage_pincount_available(page), page);
-	page = compound_head(page);
-	return head_compound_pincount(page);
-}
-
 static inline void set_compound_order(struct page *page, unsigned int order)
 {
 	page[1].compound_order = order;
@@ -1347,18 +1340,20 @@ void unpin_user_pages(struct page **pages, unsigned long npages);
  */
 static inline bool page_maybe_dma_pinned(struct page *page)
 {
-	if (hpage_pincount_available(page))
-		return compound_pincount(page) > 0;
+	struct folio *folio = page_folio(page);
+
+	if (folio_pincount_available(folio))
+		return atomic_read(folio_pincount_ptr(folio)) > 0;
 
 	/*
 	 * page_ref_count() is signed. If that refcount overflows, then
 	 * page_ref_count() returns a negative value, and callers will avoid
 	 * further incrementing the refcount.
 	 *
-	 * Here, for that overflow case, use the signed bit to count a little
+	 * Here, for that overflow case, use the sign bit to count a little
 	 * bit higher via unsigned math, and thus still get an accurate result.
 	 */
-	return ((unsigned int)page_ref_count(compound_head(page))) >=
+	return ((unsigned int)folio_ref_count(folio)) >=
 		GUP_PIN_COUNTING_BIAS;
 }
 