From patchwork Wed Feb 21 19:40:20 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13566187
Date: Wed, 21 Feb 2024 11:40:20 -0800
In-Reply-To: <20240221194052.927623-1-surenb@google.com>
References: <20240221194052.927623-1-surenb@google.com>
Message-ID: <20240221194052.927623-8-surenb@google.com>
Subject: [PATCH v4 07/36] mm: introduce slabobj_ext to support slab object
 extensions
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de,
 dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
 penguin-kernel@i-love.sakura.ne.jp, corbet@lwn.net, void@manifault.com,
 peterz@infradead.org, juri.lelli@redhat.com, catalin.marinas@arm.com,
 will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com,
 dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com,
 david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org,
 nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev,
 rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com,
 yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com,
 hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org,
 ndesaulniers@google.com, vvvvvv@google.com, gregkh@linuxfoundation.org,
 ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org,
 dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
 bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org,
 iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com,
 elver@google.com, dvyukov@google.com, shakeelb@google.com,
 songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com,
 minchan@google.com, kaleshsingh@google.com, surenb@google.com,
 kernel-team@android.com, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, iommu@lists.linux.dev,
 linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-modules@vger.kernel.org,
 kasan-dev@googlegroups.com, cgroups@vger.kernel.org

Currently slab pages can store only vectors of obj_cgroup pointers in
page->memcg_data. Introduce a slabobj_ext structure to allow more data
to be stored for each slab object. Wrap obj_cgroup into slabobj_ext to
support the current functionality while allowing slabobj_ext to be
extended in the future.
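As a rough standalone illustration (editorial sketch, not part of the
patch): each slab carries one tagged word whose low bits hold flags and
whose remaining bits point to a vector of extension structs, one entry
per object. A minimal C model, with OBJEXTS_FLAG and FLAGS_MASK standing
in for the kernel's MEMCG_DATA_OBJEXTS and MEMCG_DATA_FLAGS_MASK:

    #include <stdint.h>

    struct obj_cgroup;                      /* opaque for this sketch */

    /* One entry per slab object; future fields extend this struct. */
    struct slabobj_ext {
            struct obj_cgroup *objcg;
    } __attribute__((aligned(8)));

    #define OBJEXTS_FLAG    ((uintptr_t)1 << 0)        /* vector present */
    #define FLAGS_MASK      (((uintptr_t)1 << 2) - 1)  /* low tag bits */

    /* Tag a vector pointer before storing it in the per-slab word. */
    static inline uintptr_t obj_exts_pack(struct slabobj_ext *vec)
    {
            return (uintptr_t)vec | OBJEXTS_FLAG;
    }

    /* Strip the tag bits to recover the vector pointer. */
    static inline struct slabobj_ext *obj_exts_unpack(uintptr_t word)
    {
            return (struct slabobj_ext *)(word & ~FLAGS_MASK);
    }

Because slabobj_ext is 8-byte aligned, vector addresses always leave the
low bits clear, so the tag costs no extra storage; the extension for
object i is obj_exts_unpack(word)[i], matching the obj_to_index()
lookups in the patch below.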
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Pasha Tatashin
---
 include/linux/memcontrol.h |  20 +++++--
 include/linux/mm_types.h   |   4 +-
 init/Kconfig               |   4 ++
 mm/kfence/core.c           |  14 ++---
 mm/kfence/kfence.h         |   4 +-
 mm/memcontrol.c            |  56 +++----------
 mm/page_owner.c            |   2 +-
 mm/slab.h                  |  62 +++++++++++--------
 mm/slub.c                  | 109 +++++++++++++++++++++++++++----------
 9 files changed, 156 insertions(+), 119 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 20ff87f8e001..eb1dc181e412 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -348,8 +348,8 @@ struct mem_cgroup {
 extern struct mem_cgroup *root_mem_cgroup;
 
 enum page_memcg_data_flags {
-	/* page->memcg_data is a pointer to an objcgs vector */
-	MEMCG_DATA_OBJCGS = (1UL << 0),
+	/* page->memcg_data is a pointer to an slabobj_ext vector */
+	MEMCG_DATA_OBJEXTS = (1UL << 0),
 	/* page has been accounted as a non-slab kernel page */
 	MEMCG_DATA_KMEM = (1UL << 1),
 	/* the next bit after the last actual flag */
@@ -387,7 +387,7 @@ static inline struct mem_cgroup *__folio_memcg(struct folio *folio)
 	unsigned long memcg_data = folio->memcg_data;
 
 	VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
-	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
+	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio);
 	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio);
 
 	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
@@ -408,7 +408,7 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
 	unsigned long memcg_data = folio->memcg_data;
 
 	VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
-	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
+	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio);
 	VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio);
 
 	return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
@@ -505,7 +505,7 @@ static inline struct mem_cgroup *folio_memcg_check(struct folio *folio)
 	 */
 	unsigned long memcg_data = READ_ONCE(folio->memcg_data);
 
-	if (memcg_data & MEMCG_DATA_OBJCGS)
+	if (memcg_data & MEMCG_DATA_OBJEXTS)
 		return NULL;
 
 	if (memcg_data & MEMCG_DATA_KMEM) {
@@ -551,7 +551,7 @@ static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *ob
 static inline bool folio_memcg_kmem(struct folio *folio)
 {
 	VM_BUG_ON_PGFLAGS(PageTail(&folio->page), &folio->page);
-	VM_BUG_ON_FOLIO(folio->memcg_data & MEMCG_DATA_OBJCGS, folio);
+	VM_BUG_ON_FOLIO(folio->memcg_data & MEMCG_DATA_OBJEXTS, folio);
 	return folio->memcg_data & MEMCG_DATA_KMEM;
 }
 
@@ -1633,6 +1633,14 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 }
 #endif /* CONFIG_MEMCG */
 
+/*
+ * Extended information for slab objects stored as an array in page->memcg_data
+ * if MEMCG_DATA_OBJEXTS is set.
+ */
+struct slabobj_ext {
+	struct obj_cgroup *objcg;
+} __aligned(8);
+
 static inline void __inc_lruvec_kmem_state(void *p, enum node_stat_item idx)
 {
 	__mod_lruvec_kmem_state(p, idx, 1);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 8b611e13153e..9ff97f4e74c5 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -169,7 +169,7 @@ struct page {
 	/* Usage count. *DO NOT USE DIRECTLY*. See page_ref.h */
 	atomic_t _refcount;
 
-#ifdef CONFIG_MEMCG
+#ifdef CONFIG_SLAB_OBJ_EXT
 	unsigned long memcg_data;
 #endif
 
@@ -306,7 +306,7 @@ struct folio {
 		};
 		atomic_t _mapcount;
 	atomic_t _refcount;
-#ifdef CONFIG_MEMCG
+#ifdef CONFIG_SLAB_OBJ_EXT
 	unsigned long memcg_data;
 #endif
 #if defined(WANT_PAGE_VIRTUAL)
diff --git a/init/Kconfig b/init/Kconfig
index 8426d59cc634..fe5f5e75bd3f 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -929,6 +929,9 @@ config NUMA_BALANCING_DEFAULT_ENABLED
 	  If set, automatic NUMA balancing will be enabled if running on a NUMA
 	  machine.
 
+config SLAB_OBJ_EXT
+	bool
+
 menuconfig CGROUPS
 	bool "Control Group support"
 	select KERNFS
@@ -962,6 +965,7 @@ config MEMCG
 	bool "Memory controller"
 	select PAGE_COUNTER
 	select EVENTFD
+	select SLAB_OBJ_EXT
 	help
 	  Provides control over the memory footprint of tasks in a cgroup.
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 8350f5c06f2e..964b8482275b 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -595,9 +595,9 @@ static unsigned long kfence_init_pool(void)
 			continue;
 
 		__folio_set_slab(slab_folio(slab));
-#ifdef CONFIG_MEMCG
-		slab->memcg_data = (unsigned long)&kfence_metadata_init[i / 2 - 1].objcg |
-				   MEMCG_DATA_OBJCGS;
+#ifdef CONFIG_MEMCG_KMEM
+		slab->obj_exts = (unsigned long)&kfence_metadata_init[i / 2 - 1].obj_exts |
+				 MEMCG_DATA_OBJEXTS;
 #endif
 	}
 
@@ -645,8 +645,8 @@ static unsigned long kfence_init_pool(void)
 		if (!i || (i % 2))
 			continue;
 
-#ifdef CONFIG_MEMCG
-		slab->memcg_data = 0;
+#ifdef CONFIG_MEMCG_KMEM
+		slab->obj_exts = 0;
 #endif
 		__folio_clear_slab(slab_folio(slab));
 	}
@@ -1139,8 +1139,8 @@ void __kfence_free(void *addr)
 {
 	struct kfence_metadata *meta = addr_to_metadata((unsigned long)addr);
 
-#ifdef CONFIG_MEMCG
-	KFENCE_WARN_ON(meta->objcg);
+#ifdef CONFIG_MEMCG_KMEM
+	KFENCE_WARN_ON(meta->obj_exts.objcg);
 #endif
 	/*
 	 * If the objects of the cache are SLAB_TYPESAFE_BY_RCU, defer freeing
diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index f46fbb03062b..084f5f36e8e7 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -97,8 +97,8 @@ struct kfence_metadata {
 	struct kfence_track free_track;
 
 	/* For updating alloc_covered on frees. */
 	u32 alloc_stack_hash;
-#ifdef CONFIG_MEMCG
-	struct obj_cgroup *objcg;
+#ifdef CONFIG_MEMCG_KMEM
+	struct slabobj_ext obj_exts;
 #endif
 };
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1ed40f9d3a27..7021639d2a6f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2977,13 +2977,6 @@ void mem_cgroup_commit_charge(struct folio *folio, struct mem_cgroup *memcg)
 }
 
 #ifdef CONFIG_MEMCG_KMEM
-/*
- * The allocated objcg pointers array is not accounted directly.
- * Moreover, it should not come from DMA buffer and is not readily
- * reclaimable. So those GFP bits should be masked off.
- */
-#define OBJCGS_CLEAR_MASK	(__GFP_DMA | __GFP_RECLAIMABLE | \
-				 __GFP_ACCOUNT | __GFP_NOFAIL)
 
 /*
  * mod_objcg_mlstate() may be called with irq enabled, so
@@ -3003,62 +2996,27 @@ static inline void mod_objcg_mlstate(struct obj_cgroup *objcg,
 	rcu_read_unlock();
 }
 
-int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s,
-			     gfp_t gfp, bool new_slab)
-{
-	unsigned int objects = objs_per_slab(s, slab);
-	unsigned long memcg_data;
-	void *vec;
-
-	gfp &= ~OBJCGS_CLEAR_MASK;
-	vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp,
-			   slab_nid(slab));
-	if (!vec)
-		return -ENOMEM;
-
-	memcg_data = (unsigned long) vec | MEMCG_DATA_OBJCGS;
-	if (new_slab) {
-		/*
-		 * If the slab is brand new and nobody can yet access its
-		 * memcg_data, no synchronization is required and memcg_data can
-		 * be simply assigned.
-		 */
-		slab->memcg_data = memcg_data;
-	} else if (cmpxchg(&slab->memcg_data, 0, memcg_data)) {
-		/*
-		 * If the slab is already in use, somebody can allocate and
-		 * assign obj_cgroups in parallel. In this case the existing
-		 * objcg vector should be reused.
-		 */
-		kfree(vec);
-		return 0;
-	}
-
-	kmemleak_not_leak(vec);
-	return 0;
-}
-
 static __always_inline
 struct mem_cgroup *mem_cgroup_from_obj_folio(struct folio *folio, void *p)
 {
 	/*
 	 * Slab objects are accounted individually, not per-page.
 	 * Memcg membership data for each individual object is saved in
-	 * slab->memcg_data.
+	 * slab->obj_exts.
 	 */
 	if (folio_test_slab(folio)) {
-		struct obj_cgroup **objcgs;
+		struct slabobj_ext *obj_exts;
 		struct slab *slab;
 		unsigned int off;
 
 		slab = folio_slab(folio);
-		objcgs = slab_objcgs(slab);
-		if (!objcgs)
+		obj_exts = slab_obj_exts(slab);
+		if (!obj_exts)
 			return NULL;
 
 		off = obj_to_index(slab->slab_cache, slab, p);
-		if (objcgs[off])
-			return obj_cgroup_memcg(objcgs[off]);
+		if (obj_exts[off].objcg)
+			return obj_cgroup_memcg(obj_exts[off].objcg);
 
 		return NULL;
 	}
@@ -3066,7 +3024,7 @@ struct mem_cgroup *mem_cgroup_from_obj_folio(struct folio *folio, void *p)
 	/*
 	 * folio_memcg_check() is used here, because in theory we can encounter
 	 * a folio where the slab flag has been cleared already, but
-	 * slab->memcg_data has not been freed yet
+	 * slab->obj_exts has not been freed yet
 	 * folio_memcg_check() will guarantee that a proper memory
 	 * cgroup pointer or NULL will be returned.
 	 */
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 5634e5d890f8..262aa7d25f40 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -377,7 +377,7 @@ static inline int print_page_owner_memcg(char *kbuf, size_t count, int ret,
 	if (!memcg_data)
 		goto out_unlock;
 
-	if (memcg_data & MEMCG_DATA_OBJCGS)
+	if (memcg_data & MEMCG_DATA_OBJEXTS)
 		ret += scnprintf(kbuf + ret, count - ret,
 				"Slab cache page\n");
 
diff --git a/mm/slab.h b/mm/slab.h
index 54deeb0428c6..7f19b0a2acd8 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -87,8 +87,8 @@ struct slab {
 	unsigned int __unused;
 
 	atomic_t __page_refcount;
-#ifdef CONFIG_MEMCG
-	unsigned long memcg_data;
+#ifdef CONFIG_SLAB_OBJ_EXT
+	unsigned long obj_exts;
 #endif
 };
 
@@ -97,8 +97,8 @@ struct slab {
 SLAB_MATCH(flags, __page_flags);
 SLAB_MATCH(compound_head, slab_cache);	/* Ensure bit 0 is clear */
 SLAB_MATCH(_refcount, __page_refcount);
-#ifdef CONFIG_MEMCG
-SLAB_MATCH(memcg_data, memcg_data);
+#ifdef CONFIG_SLAB_OBJ_EXT
+SLAB_MATCH(memcg_data, obj_exts);
 #endif
 #undef SLAB_MATCH
 static_assert(sizeof(struct slab) <= sizeof(struct page));
@@ -541,42 +541,60 @@ static inline bool kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t fla
 	return false;
 }
 
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef CONFIG_SLAB_OBJ_EXT
+
 /*
- * slab_objcgs - get the object cgroups vector associated with a slab
+ * slab_obj_exts - get the pointer to the slab object extension vector
+ * associated with a slab.
 * @slab: a pointer to the slab struct
 *
- * Returns a pointer to the object cgroups vector associated with the slab,
+ * Returns a pointer to the object extension vector associated with the slab,
 * or NULL if no such vector has been associated yet.
 */
-static inline struct obj_cgroup **slab_objcgs(struct slab *slab)
+static inline struct slabobj_ext *slab_obj_exts(struct slab *slab)
 {
-	unsigned long memcg_data = READ_ONCE(slab->memcg_data);
+	unsigned long obj_exts = READ_ONCE(slab->obj_exts);
 
-	VM_BUG_ON_PAGE(memcg_data && !(memcg_data & MEMCG_DATA_OBJCGS),
+#ifdef CONFIG_MEMCG
+	VM_BUG_ON_PAGE(obj_exts && !(obj_exts & MEMCG_DATA_OBJEXTS),
 							slab_page(slab));
-	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, slab_page(slab));
+	VM_BUG_ON_PAGE(obj_exts & MEMCG_DATA_KMEM, slab_page(slab));
 
-	return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+	return (struct slabobj_ext *)(obj_exts & ~MEMCG_DATA_FLAGS_MASK);
+#else
+	return (struct slabobj_ext *)obj_exts;
+#endif
 }
 
-int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s,
-			     gfp_t gfp, bool new_slab);
-void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
-		     enum node_stat_item idx, int nr);
-#else /* CONFIG_MEMCG_KMEM */
-static inline struct obj_cgroup **slab_objcgs(struct slab *slab)
+int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
+			gfp_t gfp, bool new_slab);
+
+#else /* CONFIG_SLAB_OBJ_EXT */
+
+static inline struct slabobj_ext *slab_obj_exts(struct slab *slab)
 {
 	return NULL;
 }
 
-static inline int memcg_alloc_slab_cgroups(struct slab *slab,
-					   struct kmem_cache *s, gfp_t gfp,
-					   bool new_slab)
+static inline int alloc_slab_obj_exts(struct slab *slab,
+				      struct kmem_cache *s, gfp_t gfp,
+				      bool new_slab)
 {
 	return 0;
 }
-#endif /* CONFIG_MEMCG_KMEM */
+
+static inline struct slabobj_ext *
+prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p)
+{
+	return NULL;
+}
+
+#endif /* CONFIG_SLAB_OBJ_EXT */
+
+#ifdef CONFIG_MEMCG_KMEM
+void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
+		     enum node_stat_item idx, int nr);
+#endif
 
 size_t __ksize(const void *objp);
diff --git a/mm/slub.c b/mm/slub.c
index d31b03a8d9d5..76fb600fbc80 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -683,10 +683,10 @@ static inline bool __slab_update_freelist(struct kmem_cache *s, struct slab *sla
 
 	if (s->flags & __CMPXCHG_DOUBLE) {
 		ret = __update_freelist_fast(slab, freelist_old, counters_old,
-				            freelist_new, counters_new);
+					     freelist_new, counters_new);
 	} else {
 		ret = __update_freelist_slow(slab, freelist_old, counters_old,
-				            freelist_new, counters_new);
+					     freelist_new, counters_new);
 	}
 	if (likely(ret))
 		return true;
@@ -710,13 +710,13 @@ static inline bool slab_update_freelist(struct kmem_cache *s, struct slab *slab,
 
 	if (s->flags & __CMPXCHG_DOUBLE) {
 		ret = __update_freelist_fast(slab, freelist_old, counters_old,
-				            freelist_new, counters_new);
+					     freelist_new, counters_new);
 	} else {
 		unsigned long flags;
 
 		local_irq_save(flags);
 		ret = __update_freelist_slow(slab, freelist_old, counters_old,
-				            freelist_new, counters_new);
+					     freelist_new, counters_new);
 		local_irq_restore(flags);
 	}
 	if (likely(ret))
@@ -1881,13 +1881,72 @@ static inline enum node_stat_item cache_vmstat_idx(struct kmem_cache *s)
 		NR_SLAB_RECLAIMABLE_B : NR_SLAB_UNRECLAIMABLE_B;
 }
 
-#ifdef CONFIG_MEMCG_KMEM
-static inline void memcg_free_slab_cgroups(struct slab *slab)
+#ifdef CONFIG_SLAB_OBJ_EXT
+
+/*
+ * The allocated objcg pointers array is not accounted directly.
+ * Moreover, it should not come from DMA buffer and is not readily
+ * reclaimable. So those GFP bits should be masked off.
+ */
+#define OBJCGS_CLEAR_MASK	(__GFP_DMA | __GFP_RECLAIMABLE | \
+				 __GFP_ACCOUNT | __GFP_NOFAIL)
+
+int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
+			gfp_t gfp, bool new_slab)
 {
-	kfree(slab_objcgs(slab));
-	slab->memcg_data = 0;
+	unsigned int objects = objs_per_slab(s, slab);
+	unsigned long obj_exts;
+	void *vec;
+
+	gfp &= ~OBJCGS_CLEAR_MASK;
+	vec = kcalloc_node(objects, sizeof(struct slabobj_ext), gfp,
+			   slab_nid(slab));
+	if (!vec)
+		return -ENOMEM;
+
+	obj_exts = (unsigned long)vec;
+#ifdef CONFIG_MEMCG
+	obj_exts |= MEMCG_DATA_OBJEXTS;
+#endif
+	if (new_slab) {
+		/*
+		 * If the slab is brand new and nobody can yet access its
+		 * obj_exts, no synchronization is required and obj_exts can
+		 * be simply assigned.
+		 */
+		slab->obj_exts = obj_exts;
+	} else if (cmpxchg(&slab->obj_exts, 0, obj_exts)) {
+		/*
+		 * If the slab is already in use, somebody can allocate and
+		 * assign slabobj_exts in parallel. In this case the existing
+		 * objcg vector should be reused.
+		 */
+		kfree(vec);
+		return 0;
+	}
+
+	kmemleak_not_leak(vec);
+	return 0;
 }
 
+static inline void free_slab_obj_exts(struct slab *slab)
+{
+	struct slabobj_ext *obj_exts;
+
+	obj_exts = slab_obj_exts(slab);
+	if (!obj_exts)
+		return;
+
+	kfree(obj_exts);
+	slab->obj_exts = 0;
+}
+#else /* CONFIG_SLAB_OBJ_EXT */
+static inline void free_slab_obj_exts(struct slab *slab)
+{
+}
+#endif /* CONFIG_SLAB_OBJ_EXT */
+
+#ifdef CONFIG_MEMCG_KMEM
 static inline size_t obj_full_size(struct kmem_cache *s)
 {
 	/*
@@ -1966,15 +2025,15 @@ static void __memcg_slab_post_alloc_hook(struct kmem_cache *s,
 		if (likely(p[i])) {
 			slab = virt_to_slab(p[i]);
 
-			if (!slab_objcgs(slab) &&
-			    memcg_alloc_slab_cgroups(slab, s, flags, false)) {
+			if (!slab_obj_exts(slab) &&
+			    alloc_slab_obj_exts(slab, s, flags, false)) {
 				obj_cgroup_uncharge(objcg, obj_full_size(s));
 				continue;
 			}
 
 			off = obj_to_index(s, slab, p[i]);
 			obj_cgroup_get(objcg);
-			slab_objcgs(slab)[off] = objcg;
+			slab_obj_exts(slab)[off].objcg = objcg;
 			mod_objcg_state(objcg, slab_pgdat(slab),
 					cache_vmstat_idx(s), obj_full_size(s));
 		} else {
@@ -1995,18 +2054,18 @@ void memcg_slab_post_alloc_hook(struct kmem_cache *s, struct obj_cgroup *objcg,
 
 static void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
 				   void **p, int objects,
-				   struct obj_cgroup **objcgs)
+				   struct slabobj_ext *obj_exts)
 {
 	for (int i = 0; i < objects; i++) {
 		struct obj_cgroup *objcg;
 		unsigned int off;
 
 		off = obj_to_index(s, slab, p[i]);
-		objcg = objcgs[off];
+		objcg = obj_exts[off].objcg;
 		if (!objcg)
 			continue;
 
-		objcgs[off] = NULL;
+		obj_exts[off].objcg = NULL;
 		obj_cgroup_uncharge(objcg, obj_full_size(s));
 		mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s),
 				-obj_full_size(s));
@@ -2018,16 +2077,16 @@ static __fastpath_inline
 void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab, void **p,
 			  int objects)
 {
-	struct obj_cgroup **objcgs;
+	struct slabobj_ext *obj_exts;
 
 	if (!memcg_kmem_online())
 		return;
 
-	objcgs = slab_objcgs(slab);
-	if (likely(!objcgs))
+	obj_exts = slab_obj_exts(slab);
+	if (likely(!obj_exts))
 		return;
 
-	__memcg_slab_free_hook(s, slab, p, objects, objcgs);
+	__memcg_slab_free_hook(s, slab, p, objects, obj_exts);
 }
 
 static inline
@@ -2038,15 +2097,6 @@ void memcg_slab_alloc_error_hook(struct kmem_cache *s, int objects,
 		obj_cgroup_uncharge(objcg, objects * obj_full_size(s));
 }
 #else /* CONFIG_MEMCG_KMEM */
-static inline struct mem_cgroup *memcg_from_slab_obj(void *ptr)
-{
-	return NULL;
-}
-
-static inline void memcg_free_slab_cgroups(struct slab *slab)
-{
-}
-
 static inline bool memcg_slab_pre_alloc_hook(struct kmem_cache *s,
 					     struct list_lru *lru,
 					     struct obj_cgroup **objcgp,
@@ -2314,7 +2364,7 @@ static __always_inline void account_slab(struct slab *slab, int order,
 					 struct kmem_cache *s, gfp_t gfp)
 {
 	if (memcg_kmem_online() && (s->flags & SLAB_ACCOUNT))
-		memcg_alloc_slab_cgroups(slab, s, gfp, true);
+		alloc_slab_obj_exts(slab, s, gfp, true);
 
 	mod_node_page_state(slab_pgdat(slab), cache_vmstat_idx(s),
 			    PAGE_SIZE << order);
@@ -2323,8 +2373,7 @@ static __always_inline void account_slab(struct slab *slab, int order,
 static __always_inline void unaccount_slab(struct slab *slab, int order,
 					   struct kmem_cache *s)
 {
-	if (memcg_kmem_online())
-		memcg_free_slab_cgroups(slab);
+	free_slab_obj_exts(slab);
 
 	mod_node_page_state(slab_pgdat(slab), cache_vmstat_idx(s),
 			    -(PAGE_SIZE << order));
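
A closing note on the publication scheme in alloc_slab_obj_exts() above:
when the slab is already in use, racing allocators may each build a
vector, but only the first cmpxchg() from 0 installs one and the losers
free their copy. A simplified userspace model of that pattern (editorial
sketch; C11 atomics and calloc() standing in for the kernel's cmpxchg()
and kcalloc_node()):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Models slab->obj_exts: 0 means no extension vector published yet. */
    static _Atomic uintptr_t obj_exts_word;

    /* Build a zeroed vector and try to publish it; 0 on success, -1 on OOM. */
    static int publish_obj_exts(size_t objects, size_t ext_size)
    {
            uintptr_t expected = 0;
            void *vec = calloc(objects, ext_size);  /* zeroed, like kcalloc_node() */

            if (!vec)
                    return -1;

            /*
             * Install our vector only if the slot is still empty. If another
             * thread won the race, drop our copy and reuse the published one.
             */
            if (!atomic_compare_exchange_strong(&obj_exts_word, &expected,
                                                (uintptr_t)vec))
                    free(vec);

            return 0;
    }

The new_slab path in the patch skips the atomic entirely: a brand-new
slab is not yet visible to other CPUs, so plain assignment suffices.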