From patchwork Thu Apr 14 08:57:07 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12813133
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Andrew Morton,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 03/23] mm/slab_common: remove CONFIG_NUMA ifdefs for
 common kmalloc functions
Date: Thu, 14 Apr 2022 17:57:07 +0900
Message-Id: <20220414085727.643099-4-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Now that slab_alloc_node() is available for SLAB when CONFIG_NUMA=n,
remove CONFIG_NUMA ifdefs for common kmalloc functions.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 28 ----------------------------
 mm/slab.c            |  2 --
 mm/slob.c            |  5 +----
 mm/slub.c            |  6 ------
 4 files changed, 1 insertion(+), 40 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 11ceddcae9f4..a3b9d4c20d7e 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -444,38 +444,18 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 	kmem_cache_free_bulk(NULL, size, p);
 }
 
-#ifdef CONFIG_NUMA
 void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment
							 __alloc_size(1);
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment
									 __malloc;
-#else
-static __always_inline __alloc_size(1) void *__kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	return __kmalloc(size, flags);
-}
-
-static __always_inline void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node)
-{
-	return kmem_cache_alloc(s, flags);
-}
-#endif
 
 #ifdef CONFIG_TRACING
 extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size)
				    __assume_slab_alignment __alloc_size(3);
 
-#ifdef CONFIG_NUMA
 extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
					 int node, size_t size) __assume_slab_alignment
								__alloc_size(4);
-#else
-static __always_inline __alloc_size(4) void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
-						 gfp_t gfpflags, int node, size_t size)
-{
-	return kmem_cache_alloc_trace(s, gfpflags, size);
-}
-#endif /* CONFIG_NUMA */
 
 #else /* CONFIG_TRACING */
 static __always_inline __alloc_size(3) void *kmem_cache_alloc_trace(struct kmem_cache *s,
@@ -689,20 +669,12 @@ static inline __alloc_size(1, 2) void *kcalloc_node(size_t n, size_t size, gfp_t
 }
 
 
-#ifdef CONFIG_NUMA
 extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
					 unsigned long caller) __alloc_size(1);
 #define kmalloc_node_track_caller(size, flags, node) \
	__kmalloc_node_track_caller(size, flags, node, \
			_RET_IP_)
 
-#else /* CONFIG_NUMA */
-
-#define kmalloc_node_track_caller(size, flags, node) \
-	kmalloc_track_caller(size, flags)
-
-#endif /* CONFIG_NUMA */
-
 /*
  * Shortcuts
  */
diff --git a/mm/slab.c b/mm/slab.c
index f033d5b4fefb..5ad55ca96ab6 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3545,7 +3545,6 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
 #endif
 
-#ifdef CONFIG_NUMA
 /**
  * kmem_cache_alloc_node - Allocate an object on the specified node
  * @cachep: The cache to allocate from.
@@ -3619,7 +3618,6 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
 	return __do_kmalloc_node(size, flags, node, caller);
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
-#endif /* CONFIG_NUMA */
 
 #ifdef CONFIG_PRINTK
 void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
diff --git a/mm/slob.c b/mm/slob.c
index dfa6808dff36..c8c3b5662edf 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -534,14 +534,12 @@ void *__kmalloc_track_caller(size_t size, gfp_t gfp, unsigned long caller)
 }
 EXPORT_SYMBOL(__kmalloc_track_caller);
 
-#ifdef CONFIG_NUMA
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfp,
					int node, unsigned long caller)
 {
 	return __do_kmalloc_node(size, gfp, node, caller);
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
-#endif
 
 void kfree(const void *block)
 {
@@ -641,7 +639,7 @@ void *kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru, gfp_
 	return slob_alloc_node(cachep, flags, NUMA_NO_NODE);
 }
 EXPORT_SYMBOL(kmem_cache_alloc_lru);
-#ifdef CONFIG_NUMA
+
 void *__kmalloc_node(size_t size, gfp_t gfp, int node)
 {
 	return __do_kmalloc_node(size, gfp, node, _RET_IP_);
@@ -653,7 +651,6 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t gfp, int node)
 	return slob_alloc_node(cachep, gfp, node);
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node);
-#endif
 
 static void __kmem_cache_free(void *b, int size)
 {
diff --git a/mm/slub.c b/mm/slub.c
index d7e8355b2f08..e36c148e5069 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3260,7 +3260,6 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
 #endif
 
-#ifdef CONFIG_NUMA
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 {
 	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, s->object_size);
@@ -3287,7 +3286,6 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
 #endif
-#endif /* CONFIG_NUMA */
 
 /*
  * Slow path handling. This may still be called frequently since objects
@@ -4424,7 +4422,6 @@ void *__kmalloc(size_t size, gfp_t flags)
 }
 EXPORT_SYMBOL(__kmalloc);
 
-#ifdef CONFIG_NUMA
 static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
 	struct page *page;
@@ -4471,7 +4468,6 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
-#endif /* CONFIG_NUMA */
 
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
@@ -4929,7 +4925,6 @@ void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
 }
 EXPORT_SYMBOL(__kmalloc_track_caller);
 
-#ifdef CONFIG_NUMA
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
				  int node, unsigned long caller)
 {
@@ -4959,7 +4954,6 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
-#endif
 
 #ifdef CONFIG_SYSFS
 static int count_inuse(struct slab *slab)
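
(Illustration only, not part of the patch; struct foo, alloc_foo_on() and
alloc_foo_any() are made-up names. The effect of the cleanup is that the
node-aware entry points are declared unconditionally in <linux/slab.h> and the
CONFIG_NUMA=n handling lives in the allocators themselves, so caller code like
the sketch below builds identically on NUMA and !NUMA kernels; on a
CONFIG_NUMA=n build the node hint simply has no effect, and NUMA_NO_NODE means
"no preference".)

#include <linux/slab.h>
#include <linux/numa.h>

struct foo {
	int bar;
};

/* Hypothetical helper: same source on CONFIG_NUMA=y and CONFIG_NUMA=n builds. */
static struct foo *alloc_foo_on(int node)
{
	return kmalloc_node(sizeof(struct foo), GFP_KERNEL, node);
}

/* No placement preference: pass NUMA_NO_NODE. */
static struct foo *alloc_foo_any(void)
{
	return alloc_foo_on(NUMA_NO_NODE);
}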