From patchwork Sat Nov 12 11:20:53 2022
X-Patchwork-Submitter: Liu Shixin
X-Patchwork-Id: 13041158
From: Liu Shixin <liushixin2@huawei.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Vlastimil Babka, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org, Liu Shixin <liushixin2@huawei.com>
Subject: [PATCH v3 1/3] mm/slab_common: Move cache_name to create_cache()
Date: Sat, 12 Nov 2022 19:20:53 +0800
Message-ID: <20221112112055.1111078-2-liushixin2@huawei.com>
In-Reply-To: <20221112112055.1111078-1-liushixin2@huawei.com>
References: <20221112112055.1111078-1-liushixin2@huawei.com>

The string cache_name and its kmem_cache have the same life cycle. The
latter is allocated in create_cache(), so move the allocation of
cache_name into create_cache() as well for better error handling.

Signed-off-by: Liu Shixin <liushixin2@huawei.com>
---
 mm/slab_common.c | 34 ++++++++++++++--------------------
 1 file changed, 14 insertions(+), 20 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 33b1886b06eb..e5f430a17d95 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -209,17 +209,21 @@ static struct kmem_cache *create_cache(const char *name,
 		struct kmem_cache *root_cache)
 {
 	struct kmem_cache *s;
-	int err;
+	const char *cache_name;
+	int err = -ENOMEM;
 
 	if (WARN_ON(useroffset + usersize > object_size))
 		useroffset = usersize = 0;
 
-	err = -ENOMEM;
 	s = kmem_cache_zalloc(kmem_cache, GFP_KERNEL);
 	if (!s)
-		goto out;
+		return ERR_PTR(err);
 
-	s->name = name;
+	cache_name = kstrdup_const(name, GFP_KERNEL);
+	if (!cache_name)
+		goto out_free_cache;
+
+	s->name = cache_name;
 	s->size = s->object_size = object_size;
 	s->align = align;
 	s->ctor = ctor;
@@ -228,18 +232,17 @@ static struct kmem_cache *create_cache(const char *name,
 
 	err = __kmem_cache_create(s, flags);
 	if (err)
-		goto out_free_cache;
+		goto out_free_name;
 
 	s->refcount = 1;
 	list_add(&s->list, &slab_caches);
-out:
-	if (err)
-		return ERR_PTR(err);
 	return s;
 
+out_free_name:
+	kfree_const(s->name);
 out_free_cache:
 	kmem_cache_free(kmem_cache, s);
-	goto out;
+	return ERR_PTR(err);
 }
 
 /**
@@ -278,7 +281,6 @@ kmem_cache_create_usercopy(const char *name,
 		  void (*ctor)(void *))
 {
 	struct kmem_cache *s = NULL;
-	const char *cache_name;
 	int err;
 
 #ifdef CONFIG_SLUB_DEBUG
@@ -326,19 +328,11 @@ kmem_cache_create_usercopy(const char *name,
 	if (s)
 		goto out_unlock;
 
-	cache_name = kstrdup_const(name, GFP_KERNEL);
-	if (!cache_name) {
-		err = -ENOMEM;
-		goto out_unlock;
-	}
-
-	s = create_cache(cache_name, size,
+	s = create_cache(name, size,
			 calculate_alignment(flags, align, size),
			 flags, useroffset, usersize, ctor, NULL);
-	if (IS_ERR(s)) {
+	if (IS_ERR(s))
 		err = PTR_ERR(s);
-		kfree_const(cache_name);
-	}
 
 out_unlock:
 	mutex_unlock(&slab_mutex);
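
For readers unfamiliar with the helpers used above: kstrdup_const() avoids
duplicating strings that already live in .rodata, and kfree_const() is its
matching release function, so the two must always be paired on the same
pointer. A minimal sketch of that ownership pattern, using a hypothetical
named_thing structure rather than the kernel's kmem_cache:

#include <linux/slab.h>
#include <linux/string.h>

struct named_thing {
	const char *name;	/* owned via kstrdup_const() */
};

static int named_thing_set_name(struct named_thing *t, const char *name)
{
	/* Either references a constant string or duplicates a runtime one. */
	t->name = kstrdup_const(name, GFP_KERNEL);
	return t->name ? 0 : -ENOMEM;
}

static void named_thing_release(struct named_thing *t)
{
	/* The matching free for kstrdup_const(). */
	kfree_const(t->name);
}
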
From patchwork Sat Nov 12 11:20:54 2022
X-Patchwork-Submitter: Liu Shixin
X-Patchwork-Id: 13041161
From: Liu Shixin <liushixin2@huawei.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Vlastimil Babka, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org, Liu Shixin <liushixin2@huawei.com>
Subject: [PATCH v3 2/3] mm/slub: Refactor __kmem_cache_create()
Date: Sat, 12 Nov 2022 19:20:54 +0800
Message-ID: <20221112112055.1111078-3-liushixin2@huawei.com>
In-Reply-To: <20221112112055.1111078-1-liushixin2@huawei.com>
References: <20221112112055.1111078-1-liushixin2@huawei.com>

Separating sysfs_slab_add() and debugfs_slab_add() out of
__kmem_cache_create() helps to fix a memory leak involving the kobject.
After this patch, the leak can be fixed naturally by calling kobject_put()
to free the kobject and the associated kmem_cache when sysfs_slab_add()
fails. Besides, it then becomes easy to provide sysfs and debugfs support
for other allocators too.

Signed-off-by: Liu Shixin <liushixin2@huawei.com>
---
 include/linux/slub_def.h | 11 ++++++++++
 mm/slab_common.c         | 12 +++++++++++
 mm/slub.c                | 44 +++++++---------------------------------
 3 files changed, 30 insertions(+), 37 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index f9c68a9dac04..26d56c4c74d1 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -144,9 +144,14 @@ struct kmem_cache {
 
 #ifdef CONFIG_SYSFS
 #define SLAB_SUPPORTS_SYSFS
+int sysfs_slab_add(struct kmem_cache *);
 void sysfs_slab_unlink(struct kmem_cache *);
 void sysfs_slab_release(struct kmem_cache *);
 #else
+static inline int sysfs_slab_add(struct kmem_cache *s)
+{
+	return 0;
+}
 static inline void sysfs_slab_unlink(struct kmem_cache *s)
 {
 }
@@ -155,6 +160,12 @@ static inline void sysfs_slab_release(struct kmem_cache *s)
 }
 #endif
 
+#if defined(CONFIG_DEBUG_FS) && defined(CONFIG_SLUB_DEBUG)
+void debugfs_slab_add(struct kmem_cache *);
+#else
+static inline void debugfs_slab_add(struct kmem_cache *s) { }
+#endif
+
 void *fixup_red_left(struct kmem_cache *s, void *p);
 
 static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
diff --git a/mm/slab_common.c b/mm/slab_common.c
index e5f430a17d95..55e2cf064dfe 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -234,6 +234,18 @@ static struct kmem_cache *create_cache(const char *name,
 	if (err)
 		goto out_free_name;
 
+#ifdef SLAB_SUPPORTS_SYSFS
+	/* Mutex is not taken during early boot */
+	if (slab_state >= FULL) {
+		err = sysfs_slab_add(s);
+		if (err) {
+			slab_kmem_cache_release(s);
+			return ERR_PTR(err);
+		}
+		debugfs_slab_add(s);
+	}
+#endif
+
 	s->refcount = 1;
 	list_add(&s->list, &slab_caches);
 	return s;
diff --git a/mm/slub.c b/mm/slub.c
index ba94eb6fda78..a1ad759753ce 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -299,20 +299,12 @@ struct track {
 enum track_item { TRACK_ALLOC, TRACK_FREE };
 
 #ifdef CONFIG_SYSFS
-static int sysfs_slab_add(struct kmem_cache *);
 static int sysfs_slab_alias(struct kmem_cache *, const char *);
 #else
-static inline int sysfs_slab_add(struct kmem_cache *s) { return 0; }
 static inline int sysfs_slab_alias(struct kmem_cache *s, const char *p)
							{ return 0; }
 #endif
 
-#if defined(CONFIG_DEBUG_FS) && defined(CONFIG_SLUB_DEBUG)
-static void debugfs_slab_add(struct kmem_cache *);
-#else
-static inline void debugfs_slab_add(struct kmem_cache *s) { }
-#endif
-
 static inline void stat(const struct kmem_cache *s, enum stat_item si)
 {
 #ifdef CONFIG_SLUB_STATS
@@ -4297,7 +4289,7 @@ static int calculate_sizes(struct kmem_cache *s)
 	return !!oo_objects(s->oo);
 }
 
-static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
+int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 {
 	s->flags = kmem_cache_flags(s->size, flags, s->name);
 #ifdef CONFIG_SLAB_FREELIST_HARDENED
@@ -4900,30 +4892,6 @@ __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 	return s;
 }
 
-int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
-{
-	int err;
-
-	err = kmem_cache_open(s, flags);
-	if (err)
-		return err;
-
-	/* Mutex is not taken during early boot */
-	if (slab_state <= UP)
-		return 0;
-
-	err = sysfs_slab_add(s);
-	if (err) {
-		__kmem_cache_release(s);
-		return err;
-	}
-
-	if (s->flags & SLAB_STORE_USER)
-		debugfs_slab_add(s);
-
-	return 0;
-}
-
 #ifdef CONFIG_SYSFS
 static int count_inuse(struct slab *slab)
 {
@@ -5913,7 +5881,7 @@ static char *create_unique_id(struct kmem_cache *s)
 	return name;
 }
 
-static int sysfs_slab_add(struct kmem_cache *s)
+int sysfs_slab_add(struct kmem_cache *s)
 {
 	int err;
 	const char *name;
@@ -6236,10 +6204,13 @@ static const struct file_operations slab_debugfs_fops = {
 	.release = slab_debug_trace_release,
 };
 
-static void debugfs_slab_add(struct kmem_cache *s)
+void debugfs_slab_add(struct kmem_cache *s)
 {
 	struct dentry *slab_cache_dir;
 
+	if (!(s->flags & SLAB_STORE_USER))
+		return;
+
 	if (unlikely(!slab_debugfs_root))
 		return;
 
@@ -6264,8 +6235,7 @@ static int __init slab_debugfs_init(void)
 	slab_debugfs_root = debugfs_create_dir("slab", NULL);
 
 	list_for_each_entry(s, &slab_caches, list)
-		if (s->flags & SLAB_STORE_USER)
-			debugfs_slab_add(s);
+		debugfs_slab_add(s);
 
 	return 0;
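
The design point behind this refactor is to keep object construction and
sysfs/debugfs publication as separate steps, so that the creation path owns
the error handling for the publication step. A minimal, self-contained
sketch of that shape, using a hypothetical widget object rather than
kmem_cache:

#include <linux/err.h>
#include <linux/slab.h>

struct widget {
	int id;
};

/* Stand-in for a registration step such as sysfs_slab_add(); may fail. */
static int widget_register_sysfs(struct widget *w)
{
	return w->id < 0 ? -EINVAL : 0;
}

static struct widget *widget_create(int id)
{
	struct widget *w = kzalloc(sizeof(*w), GFP_KERNEL);
	int err;

	if (!w)
		return ERR_PTR(-ENOMEM);
	w->id = id;

	err = widget_register_sysfs(w);
	if (err) {
		/* The creator, not the registration helper, cleans up. */
		kfree(w);
		return ERR_PTR(err);
	}
	return w;
}
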
From patchwork Sat Nov 12 11:20:55 2022
X-Patchwork-Submitter: Liu Shixin
X-Patchwork-Id: 13041159
From: Liu Shixin <liushixin2@huawei.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Vlastimil Babka, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org, Liu Shixin <liushixin2@huawei.com>
Subject: [PATCH v3 3/3] mm/slub: Fix memory leak of kobj->name in sysfs_slab_add()
Date: Sat, 12 Nov 2022 19:20:55 +0800
Message-ID: <20221112112055.1111078-4-liushixin2@huawei.com>
In-Reply-To: <20221112112055.1111078-1-liushixin2@huawei.com>
References: <20221112112055.1111078-1-liushixin2@huawei.com>

There is a memory leak of kobj->name in sysfs_slab_add():

unreferenced object 0xffff88817e446440 (size 32):
  comm "insmod", pid 4085, jiffies 4296564501 (age 126.272s)
  hex dump (first 32 bytes):
    75 62 69 66 73 5f 69 6e 6f 64 65 5f 73 6c 61 62  ubifs_inode_slab
    00 65 44 7e 81 88 ff ff 00 00 00 00 00 00 00 00  .eD~............
  backtrace:
    [<000000005b30fbbd>] __kmalloc_node_track_caller+0x4e/0x150
    [<000000002f70da0c>] kstrdup_const+0x4b/0x80
    [<00000000c6712c61>] kobject_set_name_vargs+0x2f/0xb0
    [<00000000b151218e>] kobject_init_and_add+0xb0/0x120
    [<00000000e56a4cf5>] sysfs_slab_add+0x17d/0x220
    [<000000009326fd57>] __kmem_cache_create+0x406/0x590
    [<00000000dde33cff>] kmem_cache_create_usercopy+0x1fc/0x300
    [<00000000fe90cedb>] kmem_cache_create+0x12/0x20
    [<000000007a6531c8>] 0xffffffffa02d802d
    [<000000000e3b13c7>] do_one_initcall+0x87/0x2a0
    [<00000000995ecdcf>] do_init_module+0xdf/0x320
    [<000000008821941f>] load_module+0x2f98/0x3330
    [<00000000ef51efa4>] __do_sys_finit_module+0x113/0x1b0
    [<000000009339fbce>] do_syscall_64+0x35/0x80
    [<000000006b7f2033>] entry_SYSCALL_64_after_hwframe+0x46/0xb0

Following the rule stated in the comment for kobject_init_and_add():

  If this function returns an error, kobject_put() must be called to
  properly clean up the memory associated with the object.

kobject_put() is the appropriate cleanup once kobject_init() has run, so
use it to fix this leak. For the caches created early during boot, the
corresponding sysfs_slab_add() is called later from slab_sysfs_init().
Skip freeing those kmem_caches, since they are essential to the system:
keep them working without sysfs and only free the duplicated name.

Fixes: 80da026a8e5d ("mm/slub: fix slab double-free in case of duplicate sysfs filename")
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
---
 include/linux/slub_def.h |  4 ++--
 mm/slab_common.c         |  6 ++----
 mm/slub.c                | 21 +++++++++++++++++----
 3 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 26d56c4c74d1..90c3e06b77b1 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -144,11 +144,11 @@ struct kmem_cache {
 
 #ifdef CONFIG_SYSFS
 #define SLAB_SUPPORTS_SYSFS
-int sysfs_slab_add(struct kmem_cache *);
+int sysfs_slab_add(struct kmem_cache *, bool);
 void sysfs_slab_unlink(struct kmem_cache *);
 void sysfs_slab_release(struct kmem_cache *);
 #else
-static inline int sysfs_slab_add(struct kmem_cache *s)
+static inline int sysfs_slab_add(struct kmem_cache *s, bool free_slab)
 {
 	return 0;
 }
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 55e2cf064dfe..30808a1d1b32 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -237,11 +237,9 @@ static struct kmem_cache *create_cache(const char *name,
 #ifdef SLAB_SUPPORTS_SYSFS
 	/* Mutex is not taken during early boot */
 	if (slab_state >= FULL) {
-		err = sysfs_slab_add(s);
-		if (err) {
-			slab_kmem_cache_release(s);
+		err = sysfs_slab_add(s, true);
+		if (err)
 			return ERR_PTR(err);
-		}
 		debugfs_slab_add(s);
 	}
 #endif
diff --git a/mm/slub.c b/mm/slub.c
index a1ad759753ce..06a3223fc833 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5881,7 +5881,7 @@ static char *create_unique_id(struct kmem_cache *s)
 	return name;
 }
 
-int sysfs_slab_add(struct kmem_cache *s)
+int sysfs_slab_add(struct kmem_cache *s, bool free_slab)
 {
 	int err;
 	const char *name;
@@ -5911,14 +5911,17 @@ int sysfs_slab_add(struct kmem_cache *s)
 		 * for the symlinks.
 		 */
 		name = create_unique_id(s);
-		if (IS_ERR(name))
+		if (IS_ERR(name)) {
+			if (free_slab)
+				slab_kmem_cache_release(s);
 			return PTR_ERR(name);
+		}
 	}
 
 	s->kobj.kset = kset;
 	err = kobject_init_and_add(&s->kobj, &slab_ktype, NULL, "%s", name);
 	if (err)
-		goto out;
+		goto out_put_kobj;
 
 	err = sysfs_create_group(&s->kobj, &slab_attr_group);
 	if (err)
@@ -5934,6 +5937,16 @@ int sysfs_slab_add(struct kmem_cache *s)
 	return err;
 out_del_kobj:
 	kobject_del(&s->kobj);
+out_put_kobj:
+	/*
+	 * Skip freeing kmem_caches created early since they are important
+	 * for the system. Keep them working without sysfs. Only free the
+	 * name in that case.
+	 */
+	if (free_slab)
+		kobject_put(&s->kobj);
+	else
+		kfree_const(s->kobj.name);
 	goto out;
 }
 
@@ -6002,7 +6015,7 @@ static int __init slab_sysfs_init(void)
 	slab_state = FULL;
 
 	list_for_each_entry(s, &slab_caches, list) {
-		err = sysfs_slab_add(s);
+		err = sysfs_slab_add(s, false);
 		if (err)
 			pr_err("SLUB: Unable to add boot slab %s to sysfs\n",
			       s->name);
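
For reference, the kobject_init_and_add() rule quoted in the commit message
leads to the following error-handling shape. This is a minimal sketch with
a hypothetical foo object, not the SLUB code itself:

#include <linux/err.h>
#include <linux/kobject.h>
#include <linux/slab.h>

struct foo {
	struct kobject kobj;
};

static void foo_release(struct kobject *kobj)
{
	kfree(container_of(kobj, struct foo, kobj));
}

static struct kobj_type foo_ktype = {
	.release = foo_release,
};

static struct foo *foo_create(struct kobject *parent, const char *name)
{
	struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);
	int err;

	if (!f)
		return ERR_PTR(-ENOMEM);

	err = kobject_init_and_add(&f->kobj, &foo_ktype, parent, "%s", name);
	if (err) {
		/*
		 * On failure the kobject is already initialized, so it must
		 * be dropped with kobject_put(). This also frees the name
		 * duplicated by kobject_set_name_vargs(), which is exactly
		 * the string leaked in the report above.
		 */
		kobject_put(&f->kobj);
		return ERR_PTR(err);
	}
	return f;
}
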