From patchwork Tue Mar 8 11:41:35 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12773569
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Vlastimil Babka, Marco Elver, Matthew Wilcox,
    Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [RFC PATCH v1 08/15] mm/sl[auo]b: cleanup kmalloc()
Date: Tue, 8 Mar 2022 11:41:35 +0000
Message-Id: <20220308114142.1744229-9-42.hyeyoo@gmail.com>
In-Reply-To: <20220308114142.1744229-1-42.hyeyoo@gmail.com>
References: <20220308114142.1744229-1-42.hyeyoo@gmail.com>

Now that kmalloc() and kmalloc_node() do the same job, make kmalloc() a
wrapper of kmalloc_node(). Remove kmem_cache_alloc_trace(), which is now
unused.

This patch makes the slab allocators use the kmalloc_node tracepoints in
kmalloc().
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 82 +++++++++++++++++---------------------------
 mm/slab.c            | 14 --------
 mm/slub.c            |  9 -----
 3 files changed, 31 insertions(+), 74 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 33d4260bce8b..dfcc8301d969 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -460,9 +460,6 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 	kmem_cache_free_bulk(NULL, size, p);
 }
 
-extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size)
-			__assume_slab_alignment __alloc_size(3);
-
 extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
 					 int node, size_t size)
 			__assume_slab_alignment __alloc_size(4);
@@ -475,6 +472,36 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
 	return kmalloc_large_node(size, flags, NUMA_NO_NODE);
 }
 
+#ifndef CONFIG_SLOB
+static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	if (__builtin_constant_p(size)) {
+		unsigned int index;
+
+		if (size > KMALLOC_MAX_CACHE_SIZE)
+			return kmalloc_large_node(size, flags, node);
+
+		index = kmalloc_index(size);
+
+		if (!index)
+			return ZERO_SIZE_PTR;
+
+		return kmem_cache_alloc_node_trace(
+				kmalloc_caches[kmalloc_type(flags)][index],
+				flags, node, size);
+	}
+	return __kmalloc_node(size, flags, node);
+}
+#else
+static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
+		return kmalloc_large_node(size, flags, node);
+
+	return __kmalloc_node(size, flags, node);
+}
+#endif
+
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
@@ -531,55 +558,8 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
  */
 static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
 {
-	if (__builtin_constant_p(size)) {
-#ifndef CONFIG_SLOB
-		unsigned int index;
-#endif
-		if (size > KMALLOC_MAX_CACHE_SIZE)
-			return kmalloc_large(size, flags);
-#ifndef CONFIG_SLOB
-		index = kmalloc_index(size);
-
-		if (!index)
-			return ZERO_SIZE_PTR;
-
-		return kmem_cache_alloc_trace(
-				kmalloc_caches[kmalloc_type(flags)][index],
-				flags, size);
-#endif
-	}
-	return __kmalloc(size, flags);
-}
-
-#ifndef CONFIG_SLOB
-static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	if (__builtin_constant_p(size)) {
-		unsigned int index;
-
-		if (size > KMALLOC_MAX_CACHE_SIZE)
-			return kmalloc_large_node(size, flags, node);
-
-		index = kmalloc_index(size);
-
-		if (!index)
-			return ZERO_SIZE_PTR;
-
-		return kmem_cache_alloc_node_trace(
-				kmalloc_caches[kmalloc_type(flags)][index],
-				flags, node, size);
-	}
-	return __kmalloc_node(size, flags, node);
-}
-#else
-static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
-		return kmalloc_large_node(size, flags, node);
-
-	return __kmalloc_node(size, flags, node);
+	return kmalloc_node(size, flags, NUMA_NO_NODE);
 }
-#endif
 
 /**
  * kmalloc_array - allocate memory for an array.
diff --git a/mm/slab.c b/mm/slab.c
index 1f3195344bdf..6ebf509bf2de 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3519,20 +3519,6 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_bulk);
 
-void *
-kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
-{
-	void *ret;
-
-	ret = slab_alloc(cachep, flags, size, _RET_IP_);
-
-	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmalloc(_RET_IP_, ret,
-		      size, cachep->size, flags);
-	return ret;
-}
-EXPORT_SYMBOL(kmem_cache_alloc_trace);
-
 /**
  * kmem_cache_alloc_node - Allocate an object on the specified node
  * @cachep: The cache to allocate from.
diff --git a/mm/slub.c b/mm/slub.c
index cdbbf0e97637..d8fb987ff7e0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3230,15 +3230,6 @@ static __always_inline void *slab_alloc(struct kmem_cache *s,
 	return slab_alloc_node(s, gfpflags, NUMA_NO_NODE, addr, orig_size);
 }
 
-void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
-{
-	void *ret = slab_alloc(s, gfpflags, _RET_IP_, size);
-	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
-	ret = kasan_kmalloc(s, ret, size, gfpflags);
-	return ret;
-}
-EXPORT_SYMBOL(kmem_cache_alloc_trace);
-
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 {
 	void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_, s->object_size);