From patchwork Thu Apr 14 08:57:16 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12813164
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew Wilcox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 12/23] mm/slab_common: cleanup kmalloc()
Date: Thu, 14 Apr 2022 17:57:16 +0900
Message-Id: <20220414085727.643099-13-42.hyeyoo@gmail.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Now that kmalloc() and kmalloc_node() do the same job, make kmalloc()
a wrapper of kmalloc_node(). Remove kmem_cache_alloc_trace(), which is
now unused.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 93 +++++++++++++++-----------------------------
 mm/slab.c            | 16 --------
 mm/slub.c            | 12 ------
 3 files changed, 32 insertions(+), 89 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index eb457f20f415..ea168f8a248d 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -497,23 +497,10 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 }
 
 #ifdef CONFIG_TRACING
-extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size)
-		__assume_slab_alignment __alloc_size(3);
-
 extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
					 int node, size_t size) __assume_slab_alignment
								__alloc_size(4);
-
 #else /* CONFIG_TRACING */
-static __always_inline __alloc_size(3) void *kmem_cache_alloc_trace(struct kmem_cache *s,
-		gfp_t flags, size_t size)
-{
-	void *ret = kmem_cache_alloc(s, flags);
-
-	ret = kasan_kmalloc(s, ret, size, flags);
-	return ret;
-}
-
 static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
		 gfp_t gfpflags, int node, size_t size)
 {
@@ -532,6 +519,37 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
 	return kmalloc_large_node(size, flags, NUMA_NO_NODE);
 }
 
+#ifndef CONFIG_SLOB
+static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	if (__builtin_constant_p(size)) {
+		unsigned int index;
+
+		if (size > KMALLOC_MAX_CACHE_SIZE)
+			return kmalloc_large_node(size, flags, node);
+
+		index = kmalloc_index(size);
+
+		if (!index)
+			return ZERO_SIZE_PTR;
+
+		return kmem_cache_alloc_node_trace(
+				kmalloc_caches[kmalloc_type(flags)][index],
+				flags, node, size);
+	}
+	return __kmalloc_node(size, flags, node);
+}
+#else
+static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
+		return kmalloc_large_node(size, flags, node);
+
+	return __kmalloc_node(size, flags, node);
+}
+#endif
+
+
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
@@ -588,55 +606,8 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
  */
 static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
 {
-	if (__builtin_constant_p(size)) {
-#ifndef CONFIG_SLOB
-		unsigned int index;
-#endif
-		if (size > KMALLOC_MAX_CACHE_SIZE)
-			return kmalloc_large(size, flags);
-#ifndef CONFIG_SLOB
-		index = kmalloc_index(size);
-
-		if (!index)
-			return ZERO_SIZE_PTR;
-
-		return kmem_cache_alloc_trace(
-				kmalloc_caches[kmalloc_type(flags)][index],
-				flags, size);
-#endif
-	}
-	return __kmalloc(size, flags);
-}
-
-#ifndef CONFIG_SLOB
-static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	if (__builtin_constant_p(size)) {
-		unsigned int index;
-
-		if (size > KMALLOC_MAX_CACHE_SIZE)
-			return kmalloc_large_node(size, flags, node);
-
-		index = kmalloc_index(size);
-
-		if (!index)
-			return ZERO_SIZE_PTR;
-
-		return kmem_cache_alloc_node_trace(
-				kmalloc_caches[kmalloc_type(flags)][index],
-				flags, node, size);
-	}
-	return __kmalloc_node(size, flags, node);
-}
-#else
-static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
-		return kmalloc_large_node(size, flags, node);
-
-	return __kmalloc_node(size, flags, node);
+	return kmalloc_node(size, flags, NUMA_NO_NODE);
 }
-#endif
 
 /**
  * kmalloc_array - allocate memory for an array.
diff --git a/mm/slab.c b/mm/slab.c
index c5ffe54c207a..b0aaca017f42 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3507,22 +3507,6 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_bulk);
 
-#ifdef CONFIG_TRACING
-void *
-kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
-{
-	void *ret;
-
-	ret = slab_alloc(cachep, NULL, flags, size, _RET_IP_);
-
-	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmalloc(_RET_IP_, ret,
-		      size, cachep->size, flags);
-	return ret;
-}
-EXPORT_SYMBOL(kmem_cache_alloc_trace);
-#endif
-
 #ifdef CONFIG_TRACING
 void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
				  gfp_t flags,
diff --git a/mm/slub.c b/mm/slub.c
index 2a2be2a8a5d0..892988990da7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3216,18 +3216,6 @@ static __always_inline void *slab_alloc(struct kmem_cache *s, struct list_lru *l
 	return slab_alloc_node(s, lru, gfpflags, NUMA_NO_NODE, addr, orig_size);
 }
 
-
-#ifdef CONFIG_TRACING
-void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
-{
-	void *ret = slab_alloc(s, NULL, gfpflags, _RET_IP_, size);
-	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
-	ret = kasan_kmalloc(s, ret, size, gfpflags);
-	return ret;
-}
-EXPORT_SYMBOL(kmem_cache_alloc_trace);
-#endif
-
 void *__kmem_cache_alloc_node(struct kmem_cache *s, struct list_lru *lru,
			      gfp_t gfpflags, int node, unsigned long caller __maybe_unused)
 {
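
An illustrative aside, not part of the patch: after this change, a
constant-size kmalloc() call and the equivalent kmalloc_node() call
with NUMA_NO_NODE compile down to the identical path, since kmalloc()
is now just a wrapper. The sketch below shows this from a caller's
point of view; "struct foo" and alloc_foo() are hypothetical names
invented for the example, not anything in this series.

#include <linux/slab.h>

struct foo {
	unsigned long id;
	char name[32];
};

static struct foo *alloc_foo(void)
{
	struct foo *f, *g;

	/*
	 * sizeof(*f) is a compile-time constant, so with !CONFIG_SLOB
	 * this resolves through kmalloc_index() to
	 * kmem_cache_alloc_node_trace() with node == NUMA_NO_NODE.
	 */
	f = kmalloc(sizeof(*f), GFP_KERNEL);

	/* Exactly the same code path: kmalloc() now expands to this. */
	g = kmalloc_node(sizeof(*g), GFP_KERNEL, NUMA_NO_NODE);

	kfree(g);
	return f;	/* may be NULL on allocation failure */
}

No caller changes are needed; the point is that only one inline fast
path (the _node variant) remains to be maintained in slab.h, and the
non-node tracing entry point kmem_cache_alloc_trace() can go away.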