From patchwork Sat Nov 12 11:20:54 2022
X-Patchwork-Submitter: Liu Shixin
X-Patchwork-Id: 13041161
From: Liu Shixin
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Liu Shixin
Subject: [PATCH v3 2/3] mm/slub: Refactor __kmem_cache_create()
Date: Sat, 12 Nov 2022 19:20:54 +0800
Message-ID: <20221112112055.1111078-3-liushixin2@huawei.com>
In-Reply-To: <20221112112055.1111078-1-liushixin2@huawei.com>
References: <20221112112055.1111078-1-liushixin2@huawei.com>
MIME-Version: 1.0

Separating sysfs_slab_add() and debugfs_slab_add() from
__kmem_cache_create() helps to fix a kobject memory leak. After this
patch, the leak can be fixed naturally by calling kobject_put() to free
the kobject and its associated kmem_cache when sysfs_slab_add() fails.
Besides, it also becomes easier to provide sysfs and debugfs support
for other allocators.

Signed-off-by: Liu Shixin
---
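For context, a minimal sketch of the error handling this split enables:
once sysfs registration is a separate step, a failed
kobject_init_and_add() can be unwound with kobject_put(), which drops
the reference and lets the ktype release callback free the embedded
kmem_cache. This is only an illustration of the idiom, not the actual
fix in this series; the helper name slab_sysfs_register_sketch() is
made up, and it assumes it would live in mm/slub.c where slab_kset and
slab_ktype are visible.

	static int slab_sysfs_register_sketch(struct kmem_cache *s)
	{
		int err;

		/* Register the cache's kobject with sysfs under the slab kset. */
		s->kobj.kset = slab_kset;
		err = kobject_init_and_add(&s->kobj, &slab_ktype, NULL,
					   "%s", s->name);
		if (err) {
			/*
			 * kobject_init_and_add() requires kobject_put() on
			 * failure; that releases the kobject and, via the
			 * ktype release callback, the cache embedding it.
			 */
			kobject_put(&s->kobj);
			return err;
		}

		/* Announce the new sysfs object to userspace. */
		kobject_uevent(&s->kobj, KOBJ_ADD);
		return 0;
	}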
 include/linux/slub_def.h | 11 ++++++++++
 mm/slab_common.c         | 12 +++++++++++
 mm/slub.c                | 44 +++++++---------------------------
 3 files changed, 30 insertions(+), 37 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index f9c68a9dac04..26d56c4c74d1 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -144,9 +144,14 @@ struct kmem_cache {
 
 #ifdef CONFIG_SYSFS
 #define SLAB_SUPPORTS_SYSFS
+int sysfs_slab_add(struct kmem_cache *);
 void sysfs_slab_unlink(struct kmem_cache *);
 void sysfs_slab_release(struct kmem_cache *);
 #else
+static inline int sysfs_slab_add(struct kmem_cache *s)
+{
+	return 0;
+}
 static inline void sysfs_slab_unlink(struct kmem_cache *s)
 {
 }
@@ -155,6 +160,12 @@ static inline void sysfs_slab_release(struct kmem_cache *s)
 }
 #endif
 
+#if defined(CONFIG_DEBUG_FS) && defined(CONFIG_SLUB_DEBUG)
+void debugfs_slab_add(struct kmem_cache *);
+#else
+static inline void debugfs_slab_add(struct kmem_cache *s) { }
+#endif
+
 void *fixup_red_left(struct kmem_cache *s, void *p);
 
 static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
diff --git a/mm/slab_common.c b/mm/slab_common.c
index e5f430a17d95..55e2cf064dfe 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -234,6 +234,18 @@ static struct kmem_cache *create_cache(const char *name,
 	if (err)
 		goto out_free_name;
 
+#ifdef SLAB_SUPPORTS_SYSFS
+	/* Mutex is not taken during early boot */
+	if (slab_state >= FULL) {
+		err = sysfs_slab_add(s);
+		if (err) {
+			slab_kmem_cache_release(s);
+			return ERR_PTR(err);
+		}
+		debugfs_slab_add(s);
+	}
+#endif
+
 	s->refcount = 1;
 	list_add(&s->list, &slab_caches);
 	return s;
diff --git a/mm/slub.c b/mm/slub.c
index ba94eb6fda78..a1ad759753ce 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -299,20 +299,12 @@ struct track {
 enum track_item { TRACK_ALLOC, TRACK_FREE };
 
 #ifdef CONFIG_SYSFS
-static int sysfs_slab_add(struct kmem_cache *);
 static int sysfs_slab_alias(struct kmem_cache *, const char *);
 #else
-static inline int sysfs_slab_add(struct kmem_cache *s) { return 0; }
 static inline int sysfs_slab_alias(struct kmem_cache *s, const char *p)
 							{ return 0; }
 #endif
 
-#if defined(CONFIG_DEBUG_FS) && defined(CONFIG_SLUB_DEBUG)
-static void debugfs_slab_add(struct kmem_cache *);
-#else
-static inline void debugfs_slab_add(struct kmem_cache *s) { }
-#endif
-
 static inline void stat(const struct kmem_cache *s, enum stat_item si)
 {
 #ifdef CONFIG_SLUB_STATS
@@ -4297,7 +4289,7 @@ static int calculate_sizes(struct kmem_cache *s)
 	return !!oo_objects(s->oo);
 }
 
-static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
+int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 {
 	s->flags = kmem_cache_flags(s->size, flags, s->name);
 #ifdef CONFIG_SLAB_FREELIST_HARDENED
@@ -4900,30 +4892,6 @@ __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 	return s;
 }
 
-int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
-{
-	int err;
-
-	err = kmem_cache_open(s, flags);
-	if (err)
-		return err;
-
-	/* Mutex is not taken during early boot */
-	if (slab_state <= UP)
-		return 0;
-
-	err = sysfs_slab_add(s);
-	if (err) {
-		__kmem_cache_release(s);
-		return err;
-	}
-
-	if (s->flags & SLAB_STORE_USER)
-		debugfs_slab_add(s);
-
-	return 0;
-}
-
 #ifdef CONFIG_SYSFS
 static int count_inuse(struct slab *slab)
 {
@@ -5913,7 +5881,7 @@ static char *create_unique_id(struct kmem_cache *s)
 	return name;
 }
 
-static int sysfs_slab_add(struct kmem_cache *s)
+int sysfs_slab_add(struct kmem_cache *s)
 {
 	int err;
 	const char *name;
@@ -6236,10 +6204,13 @@ static const struct file_operations slab_debugfs_fops = {
 	.release = slab_debug_trace_release,
 };
 
-static void debugfs_slab_add(struct kmem_cache *s)
+void debugfs_slab_add(struct kmem_cache *s)
 {
 	struct dentry *slab_cache_dir;
 
+	if (!(s->flags & SLAB_STORE_USER))
+		return;
+
 	if (unlikely(!slab_debugfs_root))
 		return;
 
@@ -6264,8 +6235,7 @@ static int __init slab_debugfs_init(void)
 	slab_debugfs_root = debugfs_create_dir("slab", NULL);
 
 	list_for_each_entry(s, &slab_caches, list)
-		if (s->flags & SLAB_STORE_USER)
-			debugfs_slab_add(s);
+		debugfs_slab_add(s);
 
 	return 0;