From patchwork Wed Jun 30 04:00:33 2021
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12351209
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", Johannes Weiner, Michal Hocko, Vladimir Davydov
Subject: [PATCH v3 17/18] mm/memcg: Add folio_lruvec_relock_irq() and folio_lruvec_relock_irqsave()
Date: Wed, 30 Jun 2021 05:00:33 +0100
Message-Id: <20210630040034.1155892-18-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
References: <20210630040034.1155892-1-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
These are the folio equivalents of relock_page_lruvec_irq() and
relock_page_lruvec_irqsave(), which are retained as compatibility wrappers.
Also convert page_matches_lruvec() to folio_matches_lruvec().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/memcontrol.h | 31 ++++++++++++++++++++++---------
 mm/vmscan.c                |  2 +-
 2 files changed, 23 insertions(+), 10 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index b21a77669277..e6b5e8fbf770 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1545,40 +1545,53 @@ static inline void unlock_page_lruvec_irqrestore(struct lruvec *lruvec,
 }
 
 /* Test requires a stable page->memcg binding, see page_memcg() */
-static inline bool page_matches_lruvec(struct page *page, struct lruvec *lruvec)
+static inline bool folio_matches_lruvec(struct folio *folio,
+		struct lruvec *lruvec)
 {
-	return lruvec_pgdat(lruvec) == page_pgdat(page) &&
-	       lruvec_memcg(lruvec) == page_memcg(page);
+	return lruvec_pgdat(lruvec) == folio_pgdat(folio) &&
+	       lruvec_memcg(lruvec) == folio_memcg(folio);
 }
 
 /* Don't lock again iff page's lruvec locked */
-static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
+static inline struct lruvec *folio_lruvec_relock_irq(struct folio *folio,
 		struct lruvec *locked_lruvec)
 {
 	if (locked_lruvec) {
-		if (page_matches_lruvec(page, locked_lruvec))
+		if (folio_matches_lruvec(folio, locked_lruvec))
 			return locked_lruvec;
 
 		unlock_page_lruvec_irq(locked_lruvec);
 	}
 
-	return lock_page_lruvec_irq(page);
+	return folio_lruvec_lock_irq(folio);
 }
 
 /* Don't lock again iff page's lruvec locked */
-static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
+static inline struct lruvec *folio_lruvec_relock_irqsave(struct folio *folio,
 		struct lruvec *locked_lruvec, unsigned long *flags)
 {
 	if (locked_lruvec) {
-		if (page_matches_lruvec(page, locked_lruvec))
+		if (folio_matches_lruvec(folio, locked_lruvec))
 			return locked_lruvec;
 
 		unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
 	}
 
-	return lock_page_lruvec_irqsave(page, flags);
+	return folio_lruvec_lock_irqsave(folio, flags);
 }
 
+static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
+		struct lruvec *locked_lruvec)
+{
+	return folio_lruvec_relock_irq(page_folio(page), locked_lruvec);
+}
+
+static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
+		struct lruvec *locked_lruvec, unsigned long *flags)
+{
+	return folio_lruvec_relock_irqsave(page_folio(page), locked_lruvec,
+			flags);
+}
 
 #ifdef CONFIG_CGROUP_WRITEBACK
 
 struct wb_domain *mem_cgroup_wb_domain(struct bdi_writeback *wb);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index d7c3cb8688dd..a8d8f4673451 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2063,7 +2063,7 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
 		 * All pages were isolated from the same lruvec (and isolation
 		 * inhibits memcg migration).
 		 */
-		VM_BUG_ON_PAGE(!page_matches_lruvec(page, lruvec), page);
+		VM_BUG_ON_PAGE(!folio_matches_lruvec(page_folio(page), lruvec), page);
 		add_page_to_lru_list(page, lruvec);
 		nr_pages = thp_nr_pages(page);
 		nr_moved += nr_pages;
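
For illustration only, not part of the patch: the relock helpers exist so that
a caller walking a batch of folios only drops and retakes the LRU lock when it
crosses into a different lruvec (the folio_matches_lruvec() test above). A
minimal sketch of that pattern, assuming a hypothetical caller that holds an
array of folio pointers:

#include <linux/memcontrol.h>
#include <linux/mm.h>

/* Hypothetical walker, shown only to illustrate folio_lruvec_relock_irqsave(). */
static void example_walk_folios(struct folio **folios, unsigned int nr)
{
	struct lruvec *lruvec = NULL;
	unsigned long flags;
	unsigned int i;

	for (i = 0; i < nr; i++) {
		struct folio *folio = folios[i];

		/*
		 * Unlocks and relocks only when this folio belongs to a
		 * different lruvec than the one already held.
		 */
		lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);

		/* ... operate on the folio under lruvec->lru_lock ... */
	}

	if (lruvec)
		unlock_page_lruvec_irqrestore(lruvec, flags);
}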