From patchwork Sat Nov 12 11:46:00 2022
X-Patchwork-Submitter: Liu Shixin
X-Patchwork-Id: 13041181
From: Liu Shixin
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Vlastimil Babka, Roman Gushchin,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>
CC: Liu Shixin
Subject: [PATCH v4 1/3] mm/slab_common: Move cache_name to create_cache()
Date: Sat, 12 Nov 2022 19:46:00 +0800
Message-ID: <20221112114602.1268989-2-liushixin2@huawei.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20221112114602.1268989-1-liushixin2@huawei.com>
References: <20221112114602.1268989-1-liushixin2@huawei.com>
MIME-Version: 1.0
The string cache_name and its kmem_cache have the same life cycle. The
latter is allocated in create_cache(), so move the allocation of
cache_name into create_cache() too, for better error handling.

Signed-off-by: Liu Shixin
---
 mm/slab_common.c | 34 ++++++++++++++--------------------
 1 file changed, 14 insertions(+), 20 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 33b1886b06eb..e5f430a17d95 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -209,17 +209,21 @@ static struct kmem_cache *create_cache(const char *name,
 			      struct kmem_cache *root_cache)
 {
 	struct kmem_cache *s;
-	int err;
+	const char *cache_name;
+	int err = -ENOMEM;
 
 	if (WARN_ON(useroffset + usersize > object_size))
 		useroffset = usersize = 0;
 
-	err = -ENOMEM;
 	s = kmem_cache_zalloc(kmem_cache, GFP_KERNEL);
 	if (!s)
-		goto out;
+		return ERR_PTR(err);
 
-	s->name = name;
+	cache_name = kstrdup_const(name, GFP_KERNEL);
+	if (!cache_name)
+		goto out_free_cache;
+
+	s->name = cache_name;
 	s->size = s->object_size = object_size;
 	s->align = align;
 	s->ctor = ctor;
@@ -228,18 +232,17 @@ static struct kmem_cache *create_cache(const char *name,
 
 	err = __kmem_cache_create(s, flags);
 	if (err)
-		goto out_free_cache;
+		goto out_free_name;
 
 	s->refcount = 1;
 	list_add(&s->list, &slab_caches);
-out:
-	if (err)
-		return ERR_PTR(err);
 	return s;
 
+out_free_name:
+	kfree_const(s->name);
 out_free_cache:
 	kmem_cache_free(kmem_cache, s);
-	goto out;
+	return ERR_PTR(err);
 }
 
 /**
@@ -278,7 +281,6 @@ kmem_cache_create_usercopy(const char *name,
 		  void (*ctor)(void *))
 {
 	struct kmem_cache *s = NULL;
-	const char *cache_name;
 	int err;
 
 #ifdef CONFIG_SLUB_DEBUG
@@ -326,19 +328,11 @@ kmem_cache_create_usercopy(const char *name,
 	if (s)
 		goto out_unlock;
 
-	cache_name = kstrdup_const(name, GFP_KERNEL);
-	if (!cache_name) {
-		err = -ENOMEM;
-		goto out_unlock;
-	}
-
-	s = create_cache(cache_name, size,
+	s = create_cache(name, size,
 			 calculate_alignment(flags, align, size),
 			 flags, useroffset, usersize, ctor, NULL);
-	if (IS_ERR(s)) {
+	if (IS_ERR(s))
 		err = PTR_ERR(s);
-		kfree_const(cache_name);
-	}
 
 out_unlock:
 	mutex_unlock(&slab_mutex);

From patchwork Sat Nov 12 11:46:01 2022
X-Patchwork-Submitter: Liu Shixin
X-Patchwork-Id: 13041178
From: Liu Shixin
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Vlastimil Babka, Roman Gushchin,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>
CC: Liu Shixin
Subject: [PATCH v4 2/3] mm/slub: Refactor __kmem_cache_create()
Date: Sat, 12 Nov 2022 19:46:01 +0800
Message-ID: <20221112114602.1268989-3-liushixin2@huawei.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20221112114602.1268989-1-liushixin2@huawei.com>
References: <20221112114602.1268989-1-liushixin2@huawei.com>
MIME-Version: 1.0
Separating sysfs_slab_add() and debugfs_slab_add() out of
__kmem_cache_create() helps to fix a memory leak of the kobject. After
this patch, the leak can be fixed naturally by calling kobject_put() to
free the kobject and the associated kmem_cache when sysfs_slab_add()
fails. Moreover, it then becomes easy to provide sysfs and debugfs
support for other allocators too.
Signed-off-by: Liu Shixin
---
 include/linux/slub_def.h | 11 ++++++++++
 mm/slab_common.c         | 12 +++++++++++
 mm/slub.c                | 44 +++++++---------------------------------
 3 files changed, 30 insertions(+), 37 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index f9c68a9dac04..26d56c4c74d1 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -144,9 +144,14 @@ struct kmem_cache {
 
 #ifdef CONFIG_SYSFS
 #define SLAB_SUPPORTS_SYSFS
+int sysfs_slab_add(struct kmem_cache *);
 void sysfs_slab_unlink(struct kmem_cache *);
 void sysfs_slab_release(struct kmem_cache *);
 #else
+static inline int sysfs_slab_add(struct kmem_cache *s)
+{
+	return 0;
+}
 static inline void sysfs_slab_unlink(struct kmem_cache *s)
 {
 }
@@ -155,6 +160,12 @@ static inline void sysfs_slab_release(struct kmem_cache *s)
 }
 #endif
 
+#if defined(CONFIG_DEBUG_FS) && defined(CONFIG_SLUB_DEBUG)
+void debugfs_slab_add(struct kmem_cache *);
+#else
+static inline void debugfs_slab_add(struct kmem_cache *s) { }
+#endif
+
 void *fixup_red_left(struct kmem_cache *s, void *p);
 
 static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,

diff --git a/mm/slab_common.c b/mm/slab_common.c
index e5f430a17d95..55e2cf064dfe 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -234,6 +234,18 @@ static struct kmem_cache *create_cache(const char *name,
 	if (err)
 		goto out_free_name;
 
+#ifdef SLAB_SUPPORTS_SYSFS
+	/* Mutex is not taken during early boot */
+	if (slab_state >= FULL) {
+		err = sysfs_slab_add(s);
+		if (err) {
+			slab_kmem_cache_release(s);
+			return ERR_PTR(err);
+		}
+		debugfs_slab_add(s);
+	}
+#endif
+
 	s->refcount = 1;
 	list_add(&s->list, &slab_caches);
 	return s;

diff --git a/mm/slub.c b/mm/slub.c
index ba94eb6fda78..a1ad759753ce 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -299,20 +299,12 @@ struct track {
 enum track_item { TRACK_ALLOC, TRACK_FREE };
 
 #ifdef CONFIG_SYSFS
-static int sysfs_slab_add(struct kmem_cache *);
 static int sysfs_slab_alias(struct kmem_cache *, const char *);
 #else
-static inline int sysfs_slab_add(struct kmem_cache *s) { return 0; }
 static inline int sysfs_slab_alias(struct kmem_cache *s, const char *p)
 							{ return 0; }
 #endif
 
-#if defined(CONFIG_DEBUG_FS) && defined(CONFIG_SLUB_DEBUG)
-static void debugfs_slab_add(struct kmem_cache *);
-#else
-static inline void debugfs_slab_add(struct kmem_cache *s) { }
-#endif
-
 static inline void stat(const struct kmem_cache *s, enum stat_item si)
 {
 #ifdef CONFIG_SLUB_STATS
@@ -4297,7 +4289,7 @@ static int calculate_sizes(struct kmem_cache *s)
 	return !!oo_objects(s->oo);
 }
 
-static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
+int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 {
 	s->flags = kmem_cache_flags(s->size, flags, s->name);
#ifdef CONFIG_SLAB_FREELIST_HARDENED
@@ -4900,30 +4892,6 @@ __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 	return s;
 }
 
-int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
-{
-	int err;
-
-	err = kmem_cache_open(s, flags);
-	if (err)
-		return err;
-
-	/* Mutex is not taken during early boot */
-	if (slab_state <= UP)
-		return 0;
-
-	err = sysfs_slab_add(s);
-	if (err) {
-		__kmem_cache_release(s);
-		return err;
-	}
-
-	if (s->flags & SLAB_STORE_USER)
-		debugfs_slab_add(s);
-
-	return 0;
-}
-
 #ifdef CONFIG_SYSFS
 static int count_inuse(struct slab *slab)
 {
@@ -5913,7 +5881,7 @@ static char *create_unique_id(struct kmem_cache *s)
 	return name;
 }
 
-static int sysfs_slab_add(struct kmem_cache *s)
+int sysfs_slab_add(struct kmem_cache *s)
 {
 	int err;
 	const char *name;
@@ -6236,10 +6204,13 @@ static const struct file_operations slab_debugfs_fops = {
 	.release = slab_debug_trace_release,
 };
 
-static void debugfs_slab_add(struct kmem_cache *s)
+void debugfs_slab_add(struct kmem_cache *s)
 {
 	struct dentry *slab_cache_dir;
 
+	if (!(s->flags & SLAB_STORE_USER))
+		return;
+
 	if (unlikely(!slab_debugfs_root))
 		return;
 
@@ -6264,8 +6235,7 @@ static int __init slab_debugfs_init(void)
 	slab_debugfs_root = debugfs_create_dir("slab", NULL);
 
 	list_for_each_entry(s, &slab_caches, list)
-		if (s->flags & SLAB_STORE_USER)
-			debugfs_slab_add(s);
+		debugfs_slab_add(s);
 
 	return 0;

From patchwork Sat Nov 12 11:46:02 2022
X-Patchwork-Submitter: Liu Shixin
X-Patchwork-Id: 13041179
From: Liu Shixin
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Vlastimil Babka, Roman Gushchin,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>
CC: Liu Shixin
Subject: [PATCH v4 3/3] mm/slub: Fix memory leak of kobj->name in
 sysfs_slab_add()
Date: Sat, 12 Nov 2022 19:46:02 +0800
Message-ID: <20221112114602.1268989-4-liushixin2@huawei.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20221112114602.1268989-1-liushixin2@huawei.com>
References: <20221112114602.1268989-1-liushixin2@huawei.com>
MIME-Version: 1.0
There is a memory leak of kobj->name in sysfs_slab_add():

unreferenced object 0xffff88817e446440 (size 32):
  comm "insmod", pid 4085, jiffies 4296564501 (age 126.272s)
  hex dump (first 32 bytes):
    75 62 69 66 73 5f 69 6e 6f 64 65 5f 73 6c 61 62  ubifs_inode_slab
    00 65 44 7e 81 88 ff ff 00 00 00 00 00 00 00 00  .eD~............
  backtrace:
    [<000000005b30fbbd>] __kmalloc_node_track_caller+0x4e/0x150
    [<000000002f70da0c>] kstrdup_const+0x4b/0x80
    [<00000000c6712c61>] kobject_set_name_vargs+0x2f/0xb0
    [<00000000b151218e>] kobject_init_and_add+0xb0/0x120
    [<00000000e56a4cf5>] sysfs_slab_add+0x17d/0x220
    [<000000009326fd57>] __kmem_cache_create+0x406/0x590
    [<00000000dde33cff>] kmem_cache_create_usercopy+0x1fc/0x300
    [<00000000fe90cedb>] kmem_cache_create+0x12/0x20
    [<000000007a6531c8>] 0xffffffffa02d802d
    [<000000000e3b13c7>] do_one_initcall+0x87/0x2a0
    [<00000000995ecdcf>] do_init_module+0xdf/0x320
    [<000000008821941f>] load_module+0x2f98/0x3330
    [<00000000ef51efa4>] __do_sys_finit_module+0x113/0x1b0
    [<000000009339fbce>] do_syscall_64+0x35/0x80
    [<000000006b7f2033>] entry_SYSCALL_64_after_hwframe+0x46/0xb0

Following the rule stated in the comment for kobject_init_and_add():

    If this function returns an error, kobject_put() must be called to
    properly clean up the memory associated with the object.

kobject_put() is the appropriate cleanup after kobject_init(), so use
it to fix this leak. For caches created early, the corresponding
sysfs_slab_add() is called in slab_sysfs_init(). Do not free those
kmem_caches, since they are important for the system; keep them
working, just without sysfs.
Fixes: 80da026a8e5d ("mm/slub: fix slab double-free in case of duplicate sysfs filename")
Signed-off-by: Liu Shixin
---
 include/linux/slub_def.h |  4 ++--
 mm/slab_common.c         |  6 ++----
 mm/slub.c                | 21 +++++++++++++++++----
 3 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 26d56c4c74d1..90c3e06b77b1 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -144,11 +144,11 @@ struct kmem_cache {
 
 #ifdef CONFIG_SYSFS
 #define SLAB_SUPPORTS_SYSFS
-int sysfs_slab_add(struct kmem_cache *);
+int sysfs_slab_add(struct kmem_cache *, bool);
 void sysfs_slab_unlink(struct kmem_cache *);
 void sysfs_slab_release(struct kmem_cache *);
 #else
-static inline int sysfs_slab_add(struct kmem_cache *s)
+static inline int sysfs_slab_add(struct kmem_cache *s, bool free_slab)
 {
 	return 0;
 }

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 55e2cf064dfe..30808a1d1b32 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -237,11 +237,9 @@ static struct kmem_cache *create_cache(const char *name,
 #ifdef SLAB_SUPPORTS_SYSFS
 	/* Mutex is not taken during early boot */
 	if (slab_state >= FULL) {
-		err = sysfs_slab_add(s);
-		if (err) {
-			slab_kmem_cache_release(s);
+		err = sysfs_slab_add(s, true);
+		if (err)
 			return ERR_PTR(err);
-		}
 		debugfs_slab_add(s);
 	}
 #endif

diff --git a/mm/slub.c b/mm/slub.c
index a1ad759753ce..25575bce0c3c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5881,7 +5881,7 @@ static char *create_unique_id(struct kmem_cache *s)
 	return name;
 }
 
-int sysfs_slab_add(struct kmem_cache *s)
+int sysfs_slab_add(struct kmem_cache *s, bool free_slab)
 {
 	int err;
 	const char *name;
@@ -5911,14 +5911,17 @@ int sysfs_slab_add(struct kmem_cache *s)
 		 * for the symlinks.
 		 */
 		name = create_unique_id(s);
-		if (IS_ERR(name))
+		if (IS_ERR(name)) {
+			if (free_slab)
+				slab_kmem_cache_release(s);
 			return PTR_ERR(name);
+		}
 	}
 
 	s->kobj.kset = kset;
 	err = kobject_init_and_add(&s->kobj, &slab_ktype, NULL, "%s", name);
 	if (err)
-		goto out;
+		goto out_put_kobj;
 
 	err = sysfs_create_group(&s->kobj, &slab_attr_group);
 	if (err)
@@ -5934,6 +5937,16 @@ int sysfs_slab_add(struct kmem_cache *s)
 	return err;
 out_del_kobj:
 	kobject_del(&s->kobj);
+out_put_kobj:
+	/*
+	 * Do not free kmem_caches created early, since they are important
+	 * for the system: keep them working without sysfs and only free
+	 * the kobject name.
+	 */
+	if (free_slab)
+		kobject_put(&s->kobj);
+	else
+		kfree_const(s->kobj.name);
 	goto out;
 }
 
@@ -6002,7 +6015,7 @@ static int __init slab_sysfs_init(void)
 	slab_state = FULL;
 
 	list_for_each_entry(s, &slab_caches, list) {
-		err = sysfs_slab_add(s);
+		err = sysfs_slab_add(s, false);
 		if (err)
 			pr_err("SLUB: Unable to add boot slab %s to sysfs\n",
 			       s->name);