From patchwork Sun Jan 2 21:57:27 2022
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 12702367
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", John Hubbard, Andrew Morton
Subject: [PATCH 15/17] gup: Convert for_each_compound_range() to gup_for_each_folio_range()
Date: Sun, 2 Jan 2022 21:57:27 +0000
Message-Id: <20220102215729.2943705-16-willy@infradead.org>
In-Reply-To: <20220102215729.2943705-1-willy@infradead.org>
References: <20220102215729.2943705-1-willy@infradead.org>

This macro can be considerably simplified by returning the folio from
gup_folio_range_next() instead of void from compound_range_next().
Convert the only caller, unpin_user_page_range_dirty_lock(), to work on
folios instead of pages.
This removes the last caller of put_compound_head(), so delete it.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: John Hubbard
---
 mm/gup.c | 50 +++++++++++++++++++++++---------------------------
 1 file changed, 23 insertions(+), 27 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index eaffa6807609..76717e05413d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -165,12 +165,6 @@ static void gup_put_folio(struct folio *folio, int refs, unsigned int flags)
 	folio_put_refs(folio, refs);
 }
 
-static void put_compound_head(struct page *page, int refs, unsigned int flags)
-{
-	VM_BUG_ON_PAGE(PageTail(page), page);
-	gup_put_folio((struct folio *)page, refs, flags);
-}
-
 /**
  * try_grab_page() - elevate a page's refcount by a flag-dependent amount
  *
@@ -213,31 +207,30 @@ void unpin_user_page(struct page *page)
 }
 EXPORT_SYMBOL(unpin_user_page);
 
-static inline void compound_range_next(unsigned long i, unsigned long npages,
-				       struct page **list, struct page **head,
-				       unsigned int *ntails)
+static inline struct folio *gup_folio_range_next(unsigned long i,
+		unsigned long npages, struct page **list, unsigned int *ntails)
 {
-	struct page *next, *page;
+	struct page *next;
+	struct folio *folio;
 	unsigned int nr = 1;
 
 	if (i >= npages)
-		return;
+		return NULL;
 
 	next = *list + i;
-	page = compound_head(next);
-	if (PageCompound(page) && compound_order(page) >= 1)
-		nr = min_t(unsigned int,
-			   page + compound_nr(page) - next, npages - i);
+	folio = page_folio(next);
+	if (folio_test_large(folio))
+		nr = min_t(unsigned int, npages - i,
+			   &folio->page + folio_nr_pages(folio) - next);
 
-	*head = page;
 	*ntails = nr;
+	return folio;
 }
 
-#define for_each_compound_range(__i, __list, __npages, __head, __ntails) \
-	for (__i = 0, \
-	     compound_range_next(__i, __npages, __list, &(__head), &(__ntails)); \
-	     __i < __npages; __i += __ntails, \
-	     compound_range_next(__i, __npages, __list, &(__head), &(__ntails)))
+#define gup_for_each_folio_range(__i, __list, __npages, __folio, __ntails) \
+	for (__i = 0; \
+	     (__folio = gup_folio_range_next(__i, __npages, __list, &(__ntails))) != NULL; \
+	     __i += __ntails)
 
 static inline struct folio *gup_folio_next(unsigned long i,
 		unsigned long npages, struct page **list, unsigned int *ntails)
@@ -353,13 +346,16 @@ void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages,
 				      bool make_dirty)
 {
 	unsigned long index;
-	struct page *head;
-	unsigned int ntails;
+	struct folio *folio;
+	unsigned int nr;
 
-	for_each_compound_range(index, &page, npages, head, ntails) {
-		if (make_dirty && !PageDirty(head))
-			set_page_dirty_lock(head);
-		put_compound_head(head, ntails, FOLL_PIN);
+	gup_for_each_folio_range(index, &page, npages, folio, nr) {
+		if (make_dirty && !folio_test_dirty(folio)) {
+			folio_lock(folio);
+			folio_mark_dirty(folio);
+			folio_unlock(folio);
+		}
+		gup_put_folio(folio, nr, FOLL_PIN);
 	}
 }
 EXPORT_SYMBOL(unpin_user_page_range_dirty_lock);
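
For readers following the conversion, here is a minimal illustrative sketch, not part of the
patch, of how a caller inside mm/gup.c would use the new macro. The helper name
example_unpin_range() is invented for illustration; gup_for_each_folio_range(),
gup_put_folio() and FOLL_PIN are the symbols defined or used in the diff above:

/*
 * Sketch only, not part of the patch: walk a physically contiguous range of
 * pinned pages one folio at a time.  On each iteration @nr is the number of
 * pages of the current folio that fall inside the range, and the loop index
 * advances by @nr until all @npages have been consumed.
 */
static void example_unpin_range(struct page *page, unsigned long npages)
{
	unsigned long i;
	struct folio *folio;
	unsigned int nr;

	gup_for_each_folio_range(i, &page, npages, folio, nr)
		gup_put_folio(folio, nr, FOLL_PIN);	/* drop @nr pin refs in one call */
}

Compared with for_each_compound_range(), the termination check now lives in the macro's loop
condition: gup_folio_range_next() returns NULL once the index reaches npages, so the caller no
longer threads a separate head pointer through the loop.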