From patchwork Fri Feb 28 11:38:36 2025
X-Patchwork-Submitter: Jingxiang Zeng
X-Patchwork-Id: 13996267
From: Jingxiang Zeng
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, mhocko@kernel.org,
 roman.gushchin@linux.dev, shakeel.butt@linux.dev, david@redhat.com,
 vbabka@suse.cz, muchun.song@linux.dev, chengming.zhou@linux.dev,
 kasong@tencent.com, lkp@intel.com, Zeng Jingxiang
Subject: [PATCH] mm/list_lru: allocate on first insert instead of allocation
Date: Fri, 28 Feb 2025 19:38:36 +0800
Message-ID: <20250228113836.136318-1-jingxiangzeng.cas@gmail.com>
X-Mailer: git-send-email 2.43.5
Reply-To: Jingxiang Zeng
From: Zeng Jingxiang

It is observed that each time memcg_slab_post_alloc_hook() or
zswap_store() runs, xa_load() is executed once through the paths
below, adding unnecessary overhead:

__memcg_slab_post_alloc_hook
  -> memcg_list_lru_alloc
     -> memcg_list_lru_allocated
        -> xa_load

zswap_store
  -> memcg_list_lru_alloc
     -> memcg_list_lru_allocated
        -> xa_load

This patch moves the lookup off those hot paths: the mlru is now
allocated when it is first inserted into the list_lru, so xa_load()
runs only once there, and subsequent slab requests of the same type
no longer repeat it.

We created 1,000,000 negative dentries on test machines with 10,
1,000 and 10,000 cgroups, then used the bcc funclatency tool to
capture the latency of the kmem_cache_alloc_lru_noprof() function.
The performance improvement ranges from 3.3% to 6.2%:

10 cgroups, 3.3% performance improvement.
without the patch: avg = 1375 nsecs, total: 1375684993 nsecs, count: 1000000
with the patch:    avg = 1331 nsecs, total: 1331625726 nsecs, count: 1000000

1000 cgroups, 3.7% performance improvement.
without the patch: avg = 1364 nsecs, total: 1364564848 nsecs, count: 1000000
with the patch:    avg = 1315 nsecs, total: 1315150414 nsecs, count: 1000000

10000 cgroups, 6.2% performance improvement.
without the patch: avg = 1385 nsecs, total: 1385361153 nsecs, count: 1000002
with the patch:    avg = 1304 nsecs, total: 1304531155 nsecs, count: 1000000

Signed-off-by: Zeng Jingxiang
Suggested-by: Kairui Song
---
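Note for readers new to this code: the change amounts to moving a
lazy per-memcg allocation from the slab allocation fast path to the
first-insertion slow path of the list_lru. A minimal userspace sketch
of that pattern, with purely hypothetical names (an illustration, not
kernel code):

/*
 * Illustrative userspace analogue: per-group lists are allocated
 * lazily on first insertion, so the object allocation path never
 * probes the group table.
 */
#include <stdio.h>
#include <stdlib.h>

#define MAX_GROUPS 16

struct item {
	struct item *next;
	int value;
};

struct group_list {
	struct item *head;
};

/* Analogue of lru->xa: per-group lists, NULL until first insert. */
static struct group_list *groups[MAX_GROUPS];

/* Fast path: plain allocation, no group-table lookup (the patched flow). */
static struct item *item_alloc(int value)
{
	struct item *it = malloc(sizeof(*it));

	if (it)
		it->value = value;
	return it;
}

/*
 * Slow path, reached only when an insert finds no list for the group
 * yet (analogue of calling memcg_list_lru_alloc() from
 * lock_list_lru_of_memcg()).
 */
static struct group_list *group_list_create(int gid)
{
	struct group_list *gl = calloc(1, sizeof(*gl));

	if (gl)
		groups[gid] = gl;
	return gl;
}

/* Insert: the only place that pays the "is the list allocated?" check. */
static int list_insert(int gid, struct item *it)
{
	struct group_list *gl = groups[gid];

	if (!gl) {
		gl = group_list_create(gid);	/* first insert for this group */
		if (!gl)
			return -1;
	}
	it->next = gl->head;
	gl->head = it;
	return 0;
}

int main(void)
{
	for (int i = 0; i < 8; i++) {
		struct item *it = item_alloc(i);

		/* Only inserted items ever touch the group table. */
		if (it && (i & 1))
			list_insert(i % MAX_GROUPS, it);
	}
	printf("group 1 head: %d\n", groups[1] ? groups[1]->head->value : -1);
	return 0;
}

As in the patch, only the first insertion into a given per-group list
pays the existence check and allocation; plain allocations never
probe the table at all.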
 include/linux/list_lru.h |  2 --
 mm/list_lru.c            | 22 +++++++++++++++-------
 mm/memcontrol.c          | 16 ++--------------
 mm/slab.h                |  4 ++--
 mm/slub.c                | 20 +++++++++-----------
 mm/zswap.c               |  9 ---------
 6 files changed, 28 insertions(+), 45 deletions(-)

diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index fe739d35a864..04d4b051f618 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -79,8 +79,6 @@ static inline int list_lru_init_memcg_key(struct list_lru *lru, struct shrinker
 	return list_lru_init_memcg(lru, shrinker);
 }
 
-int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
-			 gfp_t gfp);
 void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *parent);
 
 /**
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 490473af3122..c5a5d61ac946 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -49,6 +49,8 @@ static int lru_shrinker_id(struct list_lru *lru)
 	return lru->shrinker_id;
 }
 
+static int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru);
+
 static inline struct list_lru_one *
 list_lru_from_memcg_idx(struct list_lru *lru, int nid, int idx)
 {
@@ -84,6 +86,9 @@ lock_list_lru_of_memcg(struct list_lru *lru, int nid, struct mem_cgroup *memcg,
 			spin_unlock_irq(&l->lock);
 		else
 			spin_unlock(&l->lock);
+	} else {
+		if (!memcg_list_lru_alloc(memcg, lru))
+			goto again;
 	}
 	/*
 	 * Caller may simply bail out if raced with reparenting or
@@ -93,7 +98,6 @@ lock_list_lru_of_memcg(struct list_lru *lru, int nid, struct mem_cgroup *memcg,
 		rcu_read_unlock();
 		return NULL;
 	}
-	VM_WARN_ON(!css_is_dying(&memcg->css));
 	memcg = parent_mem_cgroup(memcg);
 	goto again;
 }
@@ -506,18 +510,16 @@ static inline bool memcg_list_lru_allocated(struct mem_cgroup *memcg,
 	return idx < 0 || xa_load(&lru->xa, idx);
 }
 
-int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
-			 gfp_t gfp)
+static int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru)
 {
 	unsigned long flags;
 	struct list_lru_memcg *mlru = NULL;
-	struct mem_cgroup *pos, *parent;
+	struct mem_cgroup *pos, *parent, *cur;
 	XA_STATE(xas, &lru->xa, 0);
 
 	if (!list_lru_memcg_aware(lru) || memcg_list_lru_allocated(memcg, lru))
 		return 0;
 
-	gfp &= GFP_RECLAIM_MASK;
 	/*
 	 * Because the list_lru can be reparented to the parent cgroup's
 	 * list_lru, we should make sure that this cgroup and all its
@@ -536,11 +538,13 @@ int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
 		}
 
 		if (!mlru) {
-			mlru = memcg_init_list_lru_one(lru, gfp);
+			mlru = memcg_init_list_lru_one(lru, GFP_KERNEL);
 			if (!mlru)
 				return -ENOMEM;
 		}
 		xas_set(&xas, pos->kmemcg_id);
+		/* We could be scanning items in another memcg */
+		cur = set_active_memcg(pos);
 		do {
 			xas_lock_irqsave(&xas, flags);
 			if (!xas_load(&xas) && !css_is_dying(&pos->css)) {
@@ -549,12 +553,16 @@ int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
 				mlru = NULL;
 			}
 			xas_unlock_irqrestore(&xas, flags);
-		} while (xas_nomem(&xas, gfp));
+		} while (xas_nomem(&xas, GFP_KERNEL));
+		set_active_memcg(cur);
 	} while (pos != memcg && !css_is_dying(&pos->css));
 
 	if (unlikely(mlru))
 		kfree(mlru);
 
+	if (css_is_dying(&pos->css))
+		return -EBUSY;
+
 	return xas_error(&xas);
 }
 #else
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 16f3bdbd37d8..583e2587c17b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2966,8 +2966,8 @@ static inline size_t obj_full_size(struct kmem_cache *s)
 	return s->size + sizeof(struct obj_cgroup *);
 }
 
-bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
-				  gfp_t flags, size_t size, void **p)
+bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
+				  size_t size, void **p)
 {
 	struct obj_cgroup *objcg;
 	struct slab *slab;
@@ -2994,18 +2994,6 @@ bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
 
 	flags &= gfp_allowed_mask;
 
-	if (lru) {
-		int ret;
-		struct mem_cgroup *memcg;
-
-		memcg = get_mem_cgroup_from_objcg(objcg);
-		ret = memcg_list_lru_alloc(memcg, lru, flags);
-		css_put(&memcg->css);
-
-		if (ret)
-			return false;
-	}
-
 	if (obj_cgroup_charge(objcg, flags, size * obj_full_size(s)))
 		return false;
 
diff --git a/mm/slab.h b/mm/slab.h
index e9fd9bf0bfa6..3b20298d2ea1 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -598,8 +598,8 @@ static inline enum node_stat_item cache_vmstat_idx(struct kmem_cache *s)
 }
 
 #ifdef CONFIG_MEMCG
-bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
-				  gfp_t flags, size_t size, void **p);
+bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
+				  size_t size, void **p);
 void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab, void **p,
 			    int objects, struct slabobj_ext *obj_exts);
 #endif
diff --git a/mm/slub.c b/mm/slub.c
index 184fd2b14758..545c4b5f2bf2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2153,8 +2153,8 @@ alloc_tagging_slab_free_hook(struct kmem_cache *s, struct slab *slab, void **p,
 static void memcg_alloc_abort_single(struct kmem_cache *s, void *object);
 
 static __fastpath_inline
-bool memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
-				gfp_t flags, size_t size, void **p)
+bool memcg_slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
+				size_t size, void **p)
 {
 	if (likely(!memcg_kmem_online()))
 		return true;
@@ -2162,7 +2162,7 @@ bool memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
 	if (likely(!(flags & __GFP_ACCOUNT) && !(s->flags & SLAB_ACCOUNT)))
 		return true;
 
-	if (likely(__memcg_slab_post_alloc_hook(s, lru, flags, size, p)))
+	if (likely(__memcg_slab_post_alloc_hook(s, flags, size, p)))
 		return true;
 
 	if (likely(size == 1)) {
@@ -2241,12 +2241,11 @@ bool memcg_slab_post_charge(void *p, gfp_t flags)
 		return true;
 	}
 
-	return __memcg_slab_post_alloc_hook(s, NULL, flags, 1, &p);
+	return __memcg_slab_post_alloc_hook(s, flags, 1, &p);
 }
 
 #else /* CONFIG_MEMCG */
 static inline bool memcg_slab_post_alloc_hook(struct kmem_cache *s,
-					      struct list_lru *lru,
 					      gfp_t flags, size_t size,
 					      void **p)
 {
@@ -4085,9 +4084,8 @@ struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
 }
 
 static __fastpath_inline
-bool slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
-			  gfp_t flags, size_t size, void **p, bool init,
-			  unsigned int orig_size)
+bool slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags, size_t size,
+			  void **p, bool init, unsigned int orig_size)
 {
 	unsigned int zero_size = s->object_size;
 	bool kasan_init = init;
@@ -4135,7 +4133,7 @@ bool slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
 		alloc_tagging_slab_alloc_hook(s, p[i], flags);
 	}
 
-	return memcg_slab_post_alloc_hook(s, lru, flags, size, p);
+	return memcg_slab_post_alloc_hook(s, flags, size, p);
 }
 
 /*
@@ -4174,7 +4172,7 @@ static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struct list
 	 * In case this fails due to memcg_slab_post_alloc_hook(),
 	 * object is set to NULL
 	 */
-	slab_post_alloc_hook(s, lru, gfpflags, 1, &object, init, orig_size);
+	slab_post_alloc_hook(s, gfpflags, 1, &object, init, orig_size);
 
 	return object;
 }
@@ -5135,7 +5133,7 @@ int kmem_cache_alloc_bulk_noprof(struct kmem_cache *s, gfp_t flags, size_t size,
 	 * memcg and kmem_cache debug support and memory initialization.
 	 * Done outside of the IRQ disabled fastpath loop.
 	 */
-	if (unlikely(!slab_post_alloc_hook(s, NULL, flags, size, p,
+	if (unlikely(!slab_post_alloc_hook(s, flags, size, p,
 		    slab_want_init_on_alloc(flags, s), s->object_size))) {
 		return 0;
 	}
diff --git a/mm/zswap.c b/mm/zswap.c
index 10f2a16e7586..178728a936ed 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1562,15 +1562,6 @@ bool zswap_store(struct folio *folio)
 	if (!pool)
 		goto put_objcg;
 
-	if (objcg) {
-		memcg = get_mem_cgroup_from_objcg(objcg);
-		if (memcg_list_lru_alloc(memcg, &zswap_list_lru, GFP_KERNEL)) {
-			mem_cgroup_put(memcg);
-			goto put_pool;
-		}
-		mem_cgroup_put(memcg);
-	}
-
 	for (index = 0; index < nr_pages; ++index) {
 		struct page *page = folio_page(folio, index);