From patchwork Mon Jul 12 03:06:05 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 12369929
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v13 081/137] mm/lru: Convert __pagevec_lru_add_fn to take a folio
Date: Mon, 12 Jul 2021 04:06:05 +0100
Message-Id: <20210712030701.4000097-82-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210712030701.4000097-1-willy@infradead.org>
References: <20210712030701.4000097-1-willy@infradead.org>
This saves five calls to compound_head(), totalling 60 bytes of text.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig
---
 mm/swap.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index fe177a16de84..42851b9d6316 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -1001,17 +1001,18 @@ void __pagevec_release(struct pagevec *pvec)
 }
 EXPORT_SYMBOL(__pagevec_release);
 
-static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
+static void __pagevec_lru_add_fn(struct folio *folio, struct lruvec *lruvec)
 {
-	int was_unevictable = TestClearPageUnevictable(page);
-	int nr_pages = thp_nr_pages(page);
+	int was_unevictable = folio_test_clear_unevictable_flag(folio);
+	int nr_pages = folio_nr_pages(folio);
 
-	VM_BUG_ON_PAGE(PageLRU(page), page);
+	VM_BUG_ON_FOLIO(folio_lru(folio), folio);
 
 	/*
-	 * Page becomes evictable in two ways:
+	 * Folio becomes evictable in two ways:
 	 * 1) Within LRU lock [munlock_vma_page() and __munlock_pagevec()].
-	 * 2) Before acquiring LRU lock to put the page to correct LRU and then
+	 * 2) Before acquiring LRU lock to put the folio on the correct LRU
+	 *    and then
 	 *    a) do PageLRU check with lock [check_move_unevictable_pages]
 	 *    b) do PageLRU check before lock [clear_page_mlock]
 	 *
@@ -1020,10 +1021,10 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
 	 *
 	 * #0: __pagevec_lru_add_fn		#1: clear_page_mlock
 	 *
-	 * SetPageLRU()				TestClearPageMlocked()
+	 * folio_set_lru_flag()			folio_test_clear_mlocked_flag()
 	 * smp_mb() // explicit ordering	// above provides strict
 	 *					// ordering
-	 * PageMlocked()			PageLRU()
+	 * folio_mlocked()			folio_lru()
 	 *
 	 *
 	 * if '#1' does not observe setting of PG_lru by '#0' and fails
@@ -1034,21 +1035,21 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
 	 * looking at the same page) and the evictable page will be stranded
 	 * in an unevictable LRU.
 	 */
-	SetPageLRU(page);
+	folio_set_lru_flag(folio);
 	smp_mb__after_atomic();
 
-	if (page_evictable(page)) {
+	if (folio_evictable(folio)) {
 		if (was_unevictable)
 			__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
 	} else {
-		ClearPageActive(page);
-		SetPageUnevictable(page);
+		folio_clear_active_flag(folio);
+		folio_set_unevictable_flag(folio);
 		if (!was_unevictable)
 			__count_vm_events(UNEVICTABLE_PGCULLED, nr_pages);
 	}
 
-	add_page_to_lru_list(page, lruvec);
-	trace_mm_lru_insertion(page);
+	folio_add_to_lru_list(folio, lruvec);
+	trace_mm_lru_insertion(&folio->page);
 }
 
 /*
@@ -1062,11 +1063,10 @@ void __pagevec_lru_add(struct pagevec *pvec)
 	unsigned long flags = 0;
 
 	for (i = 0; i < pagevec_count(pvec); i++) {
-		struct page *page = pvec->pages[i];
-		struct folio *folio = page_folio(page);
+		struct folio *folio = page_folio(pvec->pages[i]);
 
 		lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
-		__pagevec_lru_add_fn(page, lruvec);
+		__pagevec_lru_add_fn(folio, lruvec);
 	}
 	if (lruvec)
 		unlock_page_lruvec_irqrestore(lruvec, flags);
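
A note on where the savings come from: the page-based flag helpers
(TestClearPageUnevictable(), SetPageLRU(), and friends) may be handed a
tail page, so each of them resolves compound_head(page) before touching
the flags, whereas a struct folio is guaranteed never to be a tail page
and its flag helpers can operate on folio->flags directly.  The sketch
below is illustrative only and not part of the patch; the real helpers
are generated by the PAGEFLAG family of macros, so the simplified bodies
here are an assumption about their net effect, not their actual
definitions.

static inline int TestClearPageUnevictable(struct page *page)
{
	/* A page may be a tail page, so find its head before the bit op. */
	return test_and_clear_bit(PG_unevictable, &compound_head(page)->flags);
}

static inline bool folio_test_clear_unevictable_flag(struct folio *folio)
{
	/* A folio is never a tail page; no compound_head() lookup is needed. */
	return test_and_clear_bit(PG_unevictable, &folio->flags);
}

Converting __pagevec_lru_add_fn() to take a folio lets its flag
operations use the second form, which is where the five compound_head()
calls (and the quoted 60 bytes of text) go away.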