From patchwork Sun Jan 2 21:57:26 2022
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Andrew Morton
Subject: [PATCH 14/17] gup: Convert for_each_compound_head() to gup_for_each_folio()
Date: Sun, 2 Jan 2022 21:57:26 +0000
Message-Id: <20220102215729.2943705-15-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>
References: <20220102215729.2943705-1-willy@infradead.org>

The for_each_compound_head() macro can be considerably simplified by
returning the folio from gup_folio_next() instead of void from
compound_next().  Convert both callers to work on folios instead of
pages.
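[Editor's note: to see why returning a pointer simplifies the macro,
here is a minimal userspace sketch of the same pattern.  group_next()
and for_each_group() are invented names modelling gup_folio_next() and
gup_for_each_folio(); this is illustration, not kernel code.  Because
the helper signals "no more runs" through its return value, the loop
macro needs only one call site, in the condition, instead of one in the
init clause and a second in the increment clause.

#include <stdio.h>

struct group { int id; };

static struct group groups[2] = { { 1 }, { 2 } };

/* elems[i] points at the group element i belongs to, the way
 * page_folio(list[i]) maps a page to its folio. */
static struct group *elems[] = {
	&groups[0], &groups[0], &groups[0], &groups[1], &groups[1],
};

/* Return the group starting at index i (NULL past the end) and report
 * the length of the run of elements sharing it through *ntails. */
static struct group *group_next(unsigned long i, unsigned long n,
		struct group **list, unsigned int *ntails)
{
	struct group *g;
	unsigned long nr;

	if (i >= n)
		return NULL;

	g = list[i];
	for (nr = i + 1; nr < n; nr++) {
		if (list[nr] != g)
			break;
	}

	*ntails = nr - i;
	return g;
}

/* One helper call in the condition does both "advance" and "are we
 * done?" -- the simplification this patch makes. */
#define for_each_group(__i, __list, __n, __g, __ntails)			\
	for (__i = 0;							\
	     (__g = group_next(__i, __n, __list, &(__ntails))) != NULL;	\
	     __i += __ntails)

int main(void)
{
	unsigned long i;
	unsigned int ntails;
	struct group *g;

	for_each_group(i, elems, 5, g, ntails)
		printf("group %d: %u element(s) from index %lu\n",
		       g->id, ntails, i);
	return 0;
}

Running this prints one line per run (group 1: 3 elements from index 0,
group 2: 2 elements from index 3), mirroring how the kernel loop visits
each folio once however many of its pages appear consecutively.]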
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: John Hubbard
---
 mm/gup.c | 47 ++++++++++++++++++++++++-----------------------
 1 file changed, 24 insertions(+), 23 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 7bd1e4a2648a..eaffa6807609 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -239,31 +239,29 @@ static inline void compound_range_next(unsigned long i, unsigned long npages,
 	     __i < __npages; __i += __ntails,				\
 	     compound_range_next(__i, __npages, __list, &(__head), &(__ntails)))
 
-static inline void compound_next(unsigned long i, unsigned long npages,
-				 struct page **list, struct page **head,
-				 unsigned int *ntails)
+static inline struct folio *gup_folio_next(unsigned long i,
+		unsigned long npages, struct page **list, unsigned int *ntails)
 {
-	struct page *page;
+	struct folio *folio;
 	unsigned int nr;
 
 	if (i >= npages)
-		return;
+		return NULL;
 
-	page = compound_head(list[i]);
+	folio = page_folio(list[i]);
 	for (nr = i + 1; nr < npages; nr++) {
-		if (compound_head(list[nr]) != page)
+		if (page_folio(list[nr]) != folio)
 			break;
 	}
 
-	*head = page;
 	*ntails = nr - i;
+	return folio;
 }
 
-#define for_each_compound_head(__i, __list, __npages, __head, __ntails) \
-	for (__i = 0, \
-	     compound_next(__i, __npages, __list, &(__head), &(__ntails)); \
-	     __i < __npages; __i += __ntails, \
-	     compound_next(__i, __npages, __list, &(__head), &(__ntails)))
+#define gup_for_each_folio(__i, __list, __npages, __folio, __ntails)	\
+	for (__i = 0;							\
+	     (__folio = gup_folio_next(__i, __npages, __list, &(__ntails))) != NULL; \
+	     __i += __ntails)
 
 /**
  * unpin_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
@@ -291,15 +289,15 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
 				 bool make_dirty)
 {
 	unsigned long index;
-	struct page *head;
-	unsigned int ntails;
+	struct folio *folio;
+	unsigned int nr;
 
 	if (!make_dirty) {
 		unpin_user_pages(pages, npages);
 		return;
 	}
 
-	for_each_compound_head(index, pages, npages, head, ntails) {
+	gup_for_each_folio(index, pages, npages, folio, nr) {
 		/*
 		 * Checking PageDirty at this point may race with
 		 * clear_page_dirty_for_io(), but that's OK. Two key
@@ -320,9 +318,12 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
 		 * written back, so it gets written back again in the
 		 * next writeback cycle. This is harmless.
 		 */
-		if (!PageDirty(head))
-			set_page_dirty_lock(head);
-		put_compound_head(head, ntails, FOLL_PIN);
+		if (!folio_test_dirty(folio)) {
+			folio_lock(folio);
+			folio_mark_dirty(folio);
+			folio_unlock(folio);
+		}
+		gup_put_folio(folio, nr, FOLL_PIN);
 	}
 }
 EXPORT_SYMBOL(unpin_user_pages_dirty_lock);
@@ -375,8 +376,8 @@ EXPORT_SYMBOL(unpin_user_page_range_dirty_lock);
 void unpin_user_pages(struct page **pages, unsigned long npages)
 {
 	unsigned long index;
-	struct page *head;
-	unsigned int ntails;
+	struct folio *folio;
+	unsigned int nr;
 
 	/*
	 * If this WARN_ON() fires, then the system *might* be leaking pages (by
@@ -386,8 +387,8 @@ void unpin_user_pages(struct page **pages, unsigned long npages)
 	if (WARN_ON(IS_ERR_VALUE(npages)))
 		return;
 
-	for_each_compound_head(index, pages, npages, head, ntails)
-		put_compound_head(head, ntails, FOLL_PIN);
+	gup_for_each_folio(index, pages, npages, folio, nr)
+		gup_put_folio(folio, nr, FOLL_PIN);
 }
 EXPORT_SYMBOL(unpin_user_pages);
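[Editor's note: a hedged usage sketch of the path this patch converts.
demo_pin_and_dirty() is a hypothetical caller invented for
illustration, but pin_user_pages_fast() and
unpin_user_pages_dirty_lock() are the real entry points a FOLL_PIN user
drives.

#include <linux/mm.h>

/*
 * Hypothetical driver helper: pin user pages for DMA, let the device
 * write into them, then release the pins.  Error paths trimmed; pages
 * pinned this way must be released with the unpin_user_* family, never
 * put_page().
 */
static int demo_pin_and_dirty(unsigned long uaddr, int nr_pages,
			      struct page **pages)
{
	int pinned;

	pinned = pin_user_pages_fast(uaddr, nr_pages, FOLL_WRITE, pages);
	if (pinned < 0)
		return pinned;

	/* ... device DMA into the pinned pages happens here ... */

	/*
	 * With make_dirty == true, this now walks the array with
	 * gup_for_each_folio(): each not-already-dirty folio is locked,
	 * marked dirty and unlocked once, and each run of pages sharing
	 * a folio is unpinned with a single gup_put_folio() call.
	 */
	unpin_user_pages_dirty_lock(pages, pinned, true);
	return 0;
}

Note how the diff open-codes set_page_dirty_lock() as folio_lock() /
folio_mark_dirty() / folio_unlock(): the dirtying work is done once per
folio rather than once per constituent page.]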