From patchwork Mon May 30 07:49:10 2022
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 12864352
From: Muchun Song <songmuchun@bytedance.com>
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev,
	shakeelb@google.com, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	duanxiongchun@bytedance.com, longman@redhat.com,
	Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v5 02/11] mm: rename unlock_page_lruvec{_irq, _irqrestore} to lruvec_unlock{_irq, _irqrestore}
Date: Mon, 30 May 2022 15:49:10 +0800
Message-Id: <20220530074919.46352-3-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.32.1 (Apple Git-133)
In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com>
References: <20220530074919.46352-1-songmuchun@bytedance.com>
MIME-Version: 1.0

It is confusing to use the folio_lruvec_lock() variants together with the
unlock_page_lruvec() variants, i.e. locking a folio but unlocking a page.
So rename unlock_page_lruvec{_irq, _irqrestore} to
lruvec_unlock{_irq, _irqrestore}.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 include/linux/memcontrol.h | 10 +++++-----
 mm/compaction.c            | 12 ++++++------
 mm/huge_memory.c           |  2 +-
 mm/mlock.c                 |  2 +-
 mm/swap.c                  | 16 ++++++++--------
 mm/vmscan.c                |  4 ++--
 6 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 0833be256134..6d7f97cc3fd4 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1538,17 +1538,17 @@ static inline struct lruvec *parent_lruvec(struct lruvec *lruvec)
 	return mem_cgroup_lruvec(memcg, lruvec_pgdat(lruvec));
 }
 
-static inline void unlock_page_lruvec(struct lruvec *lruvec)
+static inline void lruvec_unlock(struct lruvec *lruvec)
 {
 	spin_unlock(&lruvec->lru_lock);
 }
 
-static inline void unlock_page_lruvec_irq(struct lruvec *lruvec)
+static inline void lruvec_unlock_irq(struct lruvec *lruvec)
 {
 	spin_unlock_irq(&lruvec->lru_lock);
 }
 
-static inline void unlock_page_lruvec_irqrestore(struct lruvec *lruvec,
+static inline void lruvec_unlock_irqrestore(struct lruvec *lruvec,
 		unsigned long flags)
 {
 	spin_unlock_irqrestore(&lruvec->lru_lock, flags);
@@ -1570,7 +1570,7 @@ static inline struct lruvec *folio_lruvec_relock_irq(struct folio *folio,
 		if (folio_matches_lruvec(folio, locked_lruvec))
 			return locked_lruvec;
 
-		unlock_page_lruvec_irq(locked_lruvec);
+		lruvec_unlock_irq(locked_lruvec);
 	}
 
 	return folio_lruvec_lock_irq(folio);
@@ -1584,7 +1584,7 @@ static inline struct lruvec *folio_lruvec_relock_irqsave(struct folio *folio,
 		if (folio_matches_lruvec(folio, locked_lruvec))
 			return locked_lruvec;
 
-		unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
+		lruvec_unlock_irqrestore(locked_lruvec, *flags);
 	}
 
 	return folio_lruvec_lock_irqsave(folio, flags);
diff --git a/mm/compaction.c b/mm/compaction.c
index fe915db6149b..4f155df6b39c 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -874,7 +874,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 */
 		if (!(low_pfn % SWAP_CLUSTER_MAX)) {
 			if (locked) {
-				unlock_page_lruvec_irqrestore(locked, flags);
+				lruvec_unlock_irqrestore(locked, flags);
 				locked = NULL;
 			}
 
@@ -987,7 +987,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			if (unlikely(__PageMovable(page)) &&
 					!PageIsolated(page)) {
 				if (locked) {
-					unlock_page_lruvec_irqrestore(locked, flags);
+					lruvec_unlock_irqrestore(locked, flags);
 					locked = NULL;
 				}
 
@@ -1070,7 +1070,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		/* If we already hold the lock, we can skip some rechecking */
 		if (lruvec != locked) {
 			if (locked)
-				unlock_page_lruvec_irqrestore(locked, flags);
+				lruvec_unlock_irqrestore(locked, flags);
 
 			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
 			locked = lruvec;
@@ -1129,7 +1129,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 isolate_fail_put:
 		/* Avoid potential deadlock in freeing page under lru_lock */
 		if (locked) {
-			unlock_page_lruvec_irqrestore(locked, flags);
+			lruvec_unlock_irqrestore(locked, flags);
 			locked = NULL;
 		}
 		put_page(page);
@@ -1145,7 +1145,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 */
 		if (nr_isolated) {
 			if (locked) {
-				unlock_page_lruvec_irqrestore(locked, flags);
+				lruvec_unlock_irqrestore(locked, flags);
 				locked = NULL;
 			}
 			putback_movable_pages(&cc->migratepages);
@@ -1177,7 +1177,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 isolate_abort:
 	if (locked)
-		unlock_page_lruvec_irqrestore(locked, flags);
+		lruvec_unlock_irqrestore(locked, flags);
 	if (page) {
 		SetPageLRU(page);
 		put_page(page);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 910a138e9859..b17b9d25d045 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2404,7 +2404,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	}
 
 	ClearPageCompound(head);
-	unlock_page_lruvec(lruvec);
+	lruvec_unlock(lruvec);
 	/* Caller disabled irqs, so they are still disabled here */
 
 	split_page_owner(head, nr);
diff --git a/mm/mlock.c b/mm/mlock.c
index 716caf851043..6649f3dda56e 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -205,7 +205,7 @@ static void mlock_pagevec(struct pagevec *pvec)
 	}
 
 	if (lruvec)
-		unlock_page_lruvec_irq(lruvec);
+		lruvec_unlock_irq(lruvec);
 	release_pages(pvec->pages, pvec->nr);
 	pagevec_reinit(pvec);
 }
diff --git a/mm/swap.c b/mm/swap.c
index 7e320ec08c6a..0a8ee33116c5 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -87,7 +87,7 @@ static void __page_cache_release(struct page *page)
 		lruvec = folio_lruvec_lock_irqsave(folio, &flags);
 		del_page_from_lru_list(page, lruvec);
 		__clear_page_lru_flags(page);
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+		lruvec_unlock_irqrestore(lruvec, flags);
 	}
 	/* See comment on PageMlocked in release_pages() */
 	if (unlikely(PageMlocked(page))) {
@@ -209,7 +209,7 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 		SetPageLRU(page);
 	}
 	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+		lruvec_unlock_irqrestore(lruvec, flags);
 	release_pages(pvec->pages, pvec->nr);
 	pagevec_reinit(pvec);
 }
@@ -369,7 +369,7 @@ static void folio_activate(struct folio *folio)
 	if (folio_test_clear_lru(folio)) {
 		lruvec = folio_lruvec_lock_irq(folio);
 		__folio_activate(folio, lruvec);
-		unlock_page_lruvec_irq(lruvec);
+		lruvec_unlock_irq(lruvec);
 		folio_set_lru(folio);
 	}
 }
@@ -915,7 +915,7 @@ void release_pages(struct page **pages, int nr)
 		 * same lruvec. The lock is held only if lruvec != NULL.
 		 */
 		if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) {
-			unlock_page_lruvec_irqrestore(lruvec, flags);
+			lruvec_unlock_irqrestore(lruvec, flags);
 			lruvec = NULL;
 		}
 
@@ -925,7 +925,7 @@ void release_pages(struct page **pages, int nr)
 
 		if (is_zone_device_page(page)) {
 			if (lruvec) {
-				unlock_page_lruvec_irqrestore(lruvec, flags);
+				lruvec_unlock_irqrestore(lruvec, flags);
 				lruvec = NULL;
 			}
 			if (put_devmap_managed_page(page))
@@ -940,7 +940,7 @@ void release_pages(struct page **pages, int nr)
 
 		if (PageCompound(page)) {
 			if (lruvec) {
-				unlock_page_lruvec_irqrestore(lruvec, flags);
+				lruvec_unlock_irqrestore(lruvec, flags);
 				lruvec = NULL;
 			}
 			__put_compound_page(page);
@@ -974,7 +974,7 @@ void release_pages(struct page **pages, int nr)
 		list_add(&page->lru, &pages_to_free);
 	}
 	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+		lruvec_unlock_irqrestore(lruvec, flags);
 
 	mem_cgroup_uncharge_list(&pages_to_free);
 	free_unref_page_list(&pages_to_free);
@@ -1060,7 +1060,7 @@ void __pagevec_lru_add(struct pagevec *pvec)
 		__pagevec_lru_add_fn(folio, lruvec);
 	}
 	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+		lruvec_unlock_irqrestore(lruvec, flags);
 	release_pages(pvec->pages, pvec->nr);
 	pagevec_reinit(pvec);
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8c6054e06087..a611ccf03c9b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2171,7 +2171,7 @@ int folio_isolate_lru(struct folio *folio)
 		folio_get(folio);
 		lruvec = folio_lruvec_lock_irq(folio);
 		lruvec_del_folio(lruvec, folio);
-		unlock_page_lruvec_irq(lruvec);
+		lruvec_unlock_irq(lruvec);
 		ret = 0;
 	}
 
@@ -4806,7 +4806,7 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 	if (lruvec) {
 		__count_vm_events(UNEVICTABLE_PGRESCUED, pgrescued);
 		__count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
-		unlock_page_lruvec_irq(lruvec);
+		lruvec_unlock_irq(lruvec);
 	} else if (pgscanned) {
 		count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
 	}
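
For reviewers, a minimal usage sketch (a hypothetical caller, not part of
this series) of how the renamed helpers now pair naturally with the
folio_lruvec_lock() variants; it mirrors the folio_isolate_lru() hunk above:

	#include <linux/memcontrol.h>
	#include <linux/mm_inline.h>

	/* Hypothetical example; assumes the folio is already on an LRU list. */
	static void example_isolate_folio(struct folio *folio)
	{
		struct lruvec *lruvec;

		/* Lock the lruvec holding this folio, disabling irqs. */
		lruvec = folio_lruvec_lock_irq(folio);
		/* Remove the folio from its LRU list under the lock. */
		lruvec_del_folio(lruvec, folio);
		/*
		 * The new name pairs with folio_lruvec_lock_irq() above;
		 * before this patch the unlock here was the mismatched
		 * unlock_page_lruvec_irq().
		 */
		lruvec_unlock_irq(lruvec);
	}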