From patchwork Wed Jun 14 14:36:12 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13280115
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Johannes Weiner, Kefeng Wang
Subject: [PATCH] mm: kill lock|unlock_page_memcg()
Date: Wed, 14 Jun 2023 22:36:12 +0800
Message-ID: <20230614143612.62575-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.41.0
Since commit c7c3dec1c9db ("mm: rmap: remove lock_page_memcg()") there
are no remaining users of lock_page_memcg() and unlock_page_memcg(), so
kill both of them.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Acked-by: Johannes Weiner
Reviewed-by: Matthew Wilcox (Oracle)
---
 Documentation/admin-guide/cgroup-v1/memory.rst |  2 +-
 include/linux/memcontrol.h                     | 12 +-----------
 mm/filemap.c                                   |  2 +-
 mm/memcontrol.c                                | 18 ++++--------------
 mm/page-writeback.c                            |  6 +++---
 5 files changed, 10 insertions(+), 30 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
index 47d1d7d932a8..fabaad3fd9c2 100644
--- a/Documentation/admin-guide/cgroup-v1/memory.rst
+++ b/Documentation/admin-guide/cgroup-v1/memory.rst
@@ -297,7 +297,7 @@ Lock order is as follows::
 
   Page lock (PG_locked bit of page->flags)
     mm->page_table_lock or split pte_lock
-      lock_page_memcg (memcg->move_lock)
+      folio_memcg_lock (memcg->move_lock)
         mapping->i_pages lock
           lruvec->lru_lock.
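For context, and not as part of the patch itself: the wrappers being
deleted were one-line conveniences around the folio API (their bodies
are visible in the mm/memcontrol.c hunks below), so any straggling
caller converts mechanically. A minimal sketch of that conversion,
assuming a hypothetical caller that still has a struct page in hand
(the variable "page" and the surrounding code are illustrative only):

	/* Before this patch (hypothetical leftover caller): */
	lock_page_memcg(page);
	/* ... touch state that needs a stable page<->memcg binding ... */
	unlock_page_memcg(page);

	/*
	 * After this patch: open-code the page_folio() lookup that the
	 * deleted wrappers performed internally.
	 */
	struct folio *folio = page_folio(page);

	folio_memcg_lock(folio);
	/* ... touch state that needs a stable folio<->memcg binding ... */
	folio_memcg_unlock(folio);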
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 00a88cf947e1..c3d3a0c09315 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -419,7 +419,7 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
  *
  * - the folio lock
  * - LRU isolation
- * - lock_page_memcg()
+ * - folio_memcg_lock()
  * - exclusive reference
  * - mem_cgroup_trylock_pages()
  *
@@ -949,8 +949,6 @@ void mem_cgroup_print_oom_group(struct mem_cgroup *memcg);
 
 void folio_memcg_lock(struct folio *folio);
 void folio_memcg_unlock(struct folio *folio);
-void lock_page_memcg(struct page *page);
-void unlock_page_memcg(struct page *page);
 
 void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val);
 
@@ -1438,14 +1436,6 @@ mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg)
 {
 }
 
-static inline void lock_page_memcg(struct page *page)
-{
-}
-
-static inline void unlock_page_memcg(struct page *page)
-{
-}
-
 static inline void folio_memcg_lock(struct folio *folio)
 {
 }
diff --git a/mm/filemap.c b/mm/filemap.c
index 294ad6de2d09..3b73101f9f86 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -117,7 +117,7 @@
  *    ->i_pages lock		(page_remove_rmap->set_page_dirty)
  *    bdi.wb->list_lock		(page_remove_rmap->set_page_dirty)
  *      ->inode->i_lock		(page_remove_rmap->set_page_dirty)
- *      ->memcg->move_lock	(page_remove_rmap->lock_page_memcg)
+ *      ->memcg->move_lock	(page_remove_rmap->folio_memcg_lock)
  *    bdi.wb->list_lock		(zap_pte_range->set_page_dirty)
  *      ->inode->i_lock		(zap_pte_range->set_page_dirty)
  *      ->private_lock		(zap_pte_range->block_dirty_folio)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 93056918e956..cf06b1c9b3bb 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2148,17 +2148,12 @@ void folio_memcg_lock(struct folio *folio)
 	 * When charge migration first begins, we can have multiple
 	 * critical sections holding the fast-path RCU lock and one
 	 * holding the slowpath move_lock. Track the task who has the
-	 * move_lock for unlock_page_memcg().
+	 * move_lock for folio_memcg_unlock().
 	 */
 	memcg->move_lock_task = current;
 	memcg->move_lock_flags = flags;
 }
 
-void lock_page_memcg(struct page *page)
-{
-	folio_memcg_lock(page_folio(page));
-}
-
 static void __folio_memcg_unlock(struct mem_cgroup *memcg)
 {
 	if (memcg && memcg->move_lock_task == current) {
@@ -2186,11 +2181,6 @@ void folio_memcg_unlock(struct folio *folio)
 	__folio_memcg_unlock(folio_memcg(folio));
 }
 
-void unlock_page_memcg(struct page *page)
-{
-	folio_memcg_unlock(page_folio(page));
-}
-
 struct memcg_stock_pcp {
 	local_lock_t stock_lock;
 	struct mem_cgroup *cached; /* this never be root cgroup */
@@ -2866,7 +2856,7 @@ static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
 	 *
 	 * - the page lock
 	 * - LRU isolation
-	 * - lock_page_memcg()
+	 * - folio_memcg_lock()
 	 * - exclusive reference
 	 * - mem_cgroup_trylock_pages()
 	 */
@@ -5829,7 +5819,7 @@ static int mem_cgroup_move_account(struct page *page,
 	 * with (un)charging, migration, LRU putback, or anything else
 	 * that would rely on a stable page's memory cgroup.
 	 *
-	 * Note that lock_page_memcg is a memcg lock, not a page lock,
+	 * Note that folio_memcg_lock is a memcg lock, not a page lock,
 	 * to save space. As soon as we switch page's memory cgroup to a
 	 * new memcg that isn't locked, the above state can change
 	 * concurrently again. Make sure we're truly done with it.
@@ -6320,7 +6310,7 @@ static void mem_cgroup_move_charge(void)
 {
 	lru_add_drain_all();
 	/*
-	 * Signal lock_page_memcg() to take the memcg's move_lock
+	 * Signal folio_memcg_lock() to take the memcg's move_lock
 	 * while we're moving its pages to another memcg. Then wait
 	 * for already started RCU-only updates to finish.
 	 */
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index db7943999007..1d17fb1ec863 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2597,7 +2597,7 @@ EXPORT_SYMBOL(noop_dirty_folio);
 /*
  * Helper function for set_page_dirty family.
  *
- * Caller must hold lock_page_memcg().
+ * Caller must hold folio_memcg_lock().
  *
  * NOTE: This relies on being atomic wrt interrupts.
  */
@@ -2631,7 +2631,7 @@ static void folio_account_dirtied(struct folio *folio,
 /*
  * Helper function for deaccounting dirty page without writeback.
  *
- * Caller must hold lock_page_memcg().
+ * Caller must hold folio_memcg_lock().
  */
 void folio_account_cleaned(struct folio *folio, struct bdi_writeback *wb)
 {
@@ -2650,7 +2650,7 @@ void folio_account_cleaned(struct folio *folio, struct bdi_writeback *wb)
  * If warn is true, then emit a warning if the folio is not uptodate and has
  * not been truncated.
  *
- * The caller must hold lock_page_memcg(). Most callers have the folio
+ * The caller must hold folio_memcg_lock(). Most callers have the folio
  * locked. A few have the folio blocked from truncation through other
  * means (eg zap_vma_pages() has it mapped and is holding the page table
  * lock). This can also be called from mark_buffer_dirty(), which I
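A closing aside, again not part of the patch: the mm/page-writeback.c
comment fixes above all restate one contract, namely that
folio_memcg_lock() must be held around dirty accounting so that
folio_memcg() stays stable. A paraphrased sketch of how a
set_page_dirty-family caller honours that contract; the function name
example_dirty_folio is made up, and the shape only loosely follows the
in-tree filemap_dirty_folio(), not a verbatim copy:

static bool example_dirty_folio(struct address_space *mapping,
				struct folio *folio)
{
	/* Pin the folio<->memcg binding before touching dirty state. */
	folio_memcg_lock(folio);
	if (folio_test_set_dirty(folio)) {
		/* Already dirty: nothing to account. */
		folio_memcg_unlock(folio);
		return false;
	}
	/*
	 * __folio_mark_dirty() and the accounting helpers whose
	 * comments are fixed above require folio_memcg_lock() to be
	 * held by the caller.
	 */
	__folio_mark_dirty(folio, mapping, !folio_test_private(folio));
	folio_memcg_unlock(folio);
	return true;
}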