From patchwork Tue Apr 15 02:45:08 2025
From: Muchun Song <songmuchun@bytedance.com>
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev,
    shakeel.butt@linux.dev, muchun.song@linux.dev,
    akpm@linux-foundation.org, david@fromorbit.com,
    zhengqi.arch@bytedance.com, yosry.ahmed@linux.dev,
    nphamcs@gmail.com, chengming.zhou@linux.dev
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
    linux-mm@kvack.org, hamzamahfooz@linux.microsoft.com,
    apais@linux.microsoft.com, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH RFC 04/28] mm: rename unlock_page_lruvec_irq and its variants
Date: Tue, 15 Apr 2025 10:45:08 +0800
Message-Id: <20250415024532.26632-5-songmuchun@bytedance.com>
In-Reply-To: <20250415024532.26632-1-songmuchun@bytedance.com>
References: <20250415024532.26632-1-songmuchun@bytedance.com>
It is inappropriate to use the folio_lruvec_lock() variants in
conjunction with the unlock_page_lruvec() variants: the lock side is
named after the folio being locked while the unlock side still speaks
of pages, so callers appear to lock a folio and unlock a page. To make
the pairing consistent, rename unlock_page_lruvec{,_irq,_irqrestore}
to lruvec_unlock{,_irq,_irqrestore}.
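For illustration, a typical caller after this change names both sides
of the critical section after the lruvec (a minimal sketch modelled on
the folio_isolate_lru() hunk below; demo_isolate() itself is a
hypothetical function, not part of this patch):

	static void demo_isolate(struct folio *folio)
	{
		struct lruvec *lruvec;

		/* Takes lruvec->lru_lock for this folio's lruvec. */
		lruvec = folio_lruvec_lock_irq(folio);
		lruvec_del_folio(lruvec, folio);
		/* Renamed from unlock_page_lruvec_irq(). */
		lruvec_unlock_irq(lruvec);
	}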
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
---
 include/linux/memcontrol.h | 10 +++++-----
 mm/compaction.c            | 14 +++++++-------
 mm/huge_memory.c           |  2 +-
 mm/mlock.c                 |  2 +-
 mm/swap.c                  | 12 ++++++------
 mm/vmscan.c                |  4 ++--
 6 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 53364526d877..a045819bcf40 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1510,17 +1510,17 @@ static inline struct lruvec *parent_lruvec(struct lruvec *lruvec)
 	return mem_cgroup_lruvec(memcg, lruvec_pgdat(lruvec));
 }
 
-static inline void unlock_page_lruvec(struct lruvec *lruvec)
+static inline void lruvec_unlock(struct lruvec *lruvec)
 {
 	spin_unlock(&lruvec->lru_lock);
 }
 
-static inline void unlock_page_lruvec_irq(struct lruvec *lruvec)
+static inline void lruvec_unlock_irq(struct lruvec *lruvec)
 {
 	spin_unlock_irq(&lruvec->lru_lock);
 }
 
-static inline void unlock_page_lruvec_irqrestore(struct lruvec *lruvec,
+static inline void lruvec_unlock_irqrestore(struct lruvec *lruvec,
 		unsigned long flags)
 {
 	spin_unlock_irqrestore(&lruvec->lru_lock, flags);
@@ -1542,7 +1542,7 @@ static inline struct lruvec *folio_lruvec_relock_irq(struct folio *folio,
 		if (folio_matches_lruvec(folio, locked_lruvec))
 			return locked_lruvec;
 
-		unlock_page_lruvec_irq(locked_lruvec);
+		lruvec_unlock_irq(locked_lruvec);
 	}
 
 	return folio_lruvec_lock_irq(folio);
@@ -1556,7 +1556,7 @@ static inline void folio_lruvec_relock_irqsave(struct folio *folio,
 		if (folio_matches_lruvec(folio, *lruvecp))
 			return;
 
-		unlock_page_lruvec_irqrestore(*lruvecp, *flags);
+		lruvec_unlock_irqrestore(*lruvecp, *flags);
 	}
 
 	*lruvecp = folio_lruvec_lock_irqsave(folio, flags);
diff --git a/mm/compaction.c b/mm/compaction.c
index 139f00c0308a..ce45d633ddad 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -946,7 +946,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 */
 		if (!(low_pfn % COMPACT_CLUSTER_MAX)) {
 			if (locked) {
-				unlock_page_lruvec_irqrestore(locked, flags);
+				lruvec_unlock_irqrestore(locked, flags);
 				locked = NULL;
 			}
 
@@ -997,7 +997,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			}
 			/* for alloc_contig case */
 			if (locked) {
-				unlock_page_lruvec_irqrestore(locked, flags);
+				lruvec_unlock_irqrestore(locked, flags);
 				locked = NULL;
 			}
 
@@ -1089,7 +1089,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (unlikely(__PageMovable(page)) &&
 				!PageIsolated(page)) {
 			if (locked) {
-				unlock_page_lruvec_irqrestore(locked, flags);
+				lruvec_unlock_irqrestore(locked, flags);
 				locked = NULL;
 			}
 
@@ -1194,7 +1194,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		/* If we already hold the lock, we can skip some rechecking */
 		if (lruvec != locked) {
 			if (locked)
-				unlock_page_lruvec_irqrestore(locked, flags);
+				lruvec_unlock_irqrestore(locked, flags);
 
 			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
 			locked = lruvec;
@@ -1262,7 +1262,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 isolate_fail_put:
 		/* Avoid potential deadlock in freeing page under lru_lock */
 		if (locked) {
-			unlock_page_lruvec_irqrestore(locked, flags);
+			lruvec_unlock_irqrestore(locked, flags);
 			locked = NULL;
 		}
 		folio_put(folio);
@@ -1278,7 +1278,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	 */
 	if (nr_isolated) {
 		if (locked) {
-			unlock_page_lruvec_irqrestore(locked, flags);
+			lruvec_unlock_irqrestore(locked, flags);
 			locked = NULL;
 		}
 		putback_movable_pages(&cc->migratepages);
@@ -1310,7 +1310,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 isolate_abort:
 	if (locked)
-		unlock_page_lruvec_irqrestore(locked, flags);
+		lruvec_unlock_irqrestore(locked, flags);
 	if (folio) {
 		folio_set_lru(folio);
 		folio_put(folio);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2a47682d1ab7..df66aa4bc4c2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3605,7 +3605,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 	folio_ref_unfreeze(origin_folio, 1 +
 		((mapping || swap_cache) ? folio_nr_pages(origin_folio) : 0));
 
-	unlock_page_lruvec(lruvec);
+	lruvec_unlock(lruvec);
 
 	if (swap_cache)
 		xa_unlock(&swap_cache->i_pages);
diff --git a/mm/mlock.c b/mm/mlock.c
index 3cb72b579ffd..86cad963edb7 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -205,7 +205,7 @@ static void mlock_folio_batch(struct folio_batch *fbatch)
 	}
 
 	if (lruvec)
-		unlock_page_lruvec_irq(lruvec);
+		lruvec_unlock_irq(lruvec);
 	folios_put(fbatch);
 }
 
diff --git a/mm/swap.c b/mm/swap.c
index 77b2d5997873..ee19e171857d 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -91,7 +91,7 @@ static void page_cache_release(struct folio *folio)
 	__page_cache_release(folio, &lruvec, &flags);
 
 	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+		lruvec_unlock_irqrestore(lruvec, flags);
 }
 
 void __folio_put(struct folio *folio)
@@ -171,7 +171,7 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
 	}
 
 	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+		lruvec_unlock_irqrestore(lruvec, flags);
 	folios_put(fbatch);
 }
 
@@ -343,7 +343,7 @@ void folio_activate(struct folio *folio)
 
 	lruvec = folio_lruvec_lock_irq(folio);
 	lru_activate(lruvec, folio);
-	unlock_page_lruvec_irq(lruvec);
+	lruvec_unlock_irq(lruvec);
 	folio_set_lru(folio);
 }
 #endif
@@ -953,7 +953,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
 
 		if (folio_is_zone_device(folio)) {
 			if (lruvec) {
-				unlock_page_lruvec_irqrestore(lruvec, flags);
+				lruvec_unlock_irqrestore(lruvec, flags);
 				lruvec = NULL;
 			}
 			if (folio_ref_sub_and_test(folio, nr_refs))
@@ -967,7 +967,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
 			/* hugetlb has its own memcg */
 			if (folio_test_hugetlb(folio)) {
 				if (lruvec) {
-					unlock_page_lruvec_irqrestore(lruvec, flags);
+					lruvec_unlock_irqrestore(lruvec, flags);
 					lruvec = NULL;
 				}
 				free_huge_folio(folio);
@@ -981,7 +981,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
 		j++;
 	}
 	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+		lruvec_unlock_irqrestore(lruvec, flags);
 	if (!j) {
 		folio_batch_reinit(folios);
 		return;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b620d74b0f66..a76b3cee043d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1847,7 +1847,7 @@ bool folio_isolate_lru(struct folio *folio)
 		folio_get(folio);
 		lruvec = folio_lruvec_lock_irq(folio);
 		lruvec_del_folio(lruvec, folio);
-		unlock_page_lruvec_irq(lruvec);
+		lruvec_unlock_irq(lruvec);
 		ret = true;
 	}
 
@@ -7681,7 +7681,7 @@ void check_move_unevictable_folios(struct folio_batch *fbatch)
 	if (lruvec) {
 		__count_vm_events(UNEVICTABLE_PGRESCUED, pgrescued);
 		__count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
-		unlock_page_lruvec_irq(lruvec);
+		lruvec_unlock_irq(lruvec);
	} else if (pgscanned) {
 		count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
 	}
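
For completeness, the irqsave/relock pairing used by the batched paths
reads as follows after the rename (a minimal sketch in the spirit of
folio_batch_move_lru() above; demo_move_batch() is a hypothetical
function, not part of this patch):

	static void demo_move_batch(struct folio_batch *fbatch)
	{
		int i;
		unsigned long flags;
		struct lruvec *lruvec = NULL;

		for (i = 0; i < folio_batch_count(fbatch); i++) {
			struct folio *folio = fbatch->folios[i];

			/*
			 * Drops the previously held lock via
			 * lruvec_unlock_irqrestore() if the folio belongs
			 * to a different lruvec, then takes the new lock.
			 */
			folio_lruvec_relock_irqsave(folio, &lruvec, &flags);
			/* ... operate on folio under lruvec->lru_lock ... */
		}
		if (lruvec)
			lruvec_unlock_irqrestore(lruvec, flags);
		folios_put(fbatch);
	}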