From patchwork Mon Nov 13 19:13:58 2023
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13454372
From: Vlastimil Babka
To: David Rientjes, Christoph Lameter, Pekka Enberg, Joonsoo Kim
Cc: Andrew Morton, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev,
    Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
    Vincenzo Frascino, Marco Elver, Johannes Weiner, Michal Hocko,
    Shakeel Butt, Muchun Song, Kees Cook, kasan-dev@googlegroups.com,
    cgroups@vger.kernel.org, Vlastimil Babka
Subject: [PATCH 17/20] mm/slab: move kmalloc() functions from slab_common.c to slub.c
Date: Mon, 13 Nov 2023 20:13:58 +0100
Message-ID: <20231113191340.17482-39-vbabka@suse.cz>
X-Mailer: git-send-email 2.42.1
In-Reply-To: <20231113191340.17482-22-vbabka@suse.cz>
References: <20231113191340.17482-22-vbabka@suse.cz>
MIME-Version: 1.0

This will eliminate a call between compilation units through
__kmem_cache_alloc_node() and allow better inlining of the allocation
fast path.

Signed-off-by: Vlastimil Babka
Reviewed-by: Kees Cook
---
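
A condensed sketch of the resulting kmalloc() dispatch, with the KASAN,
kmemleak and tracepoint hooks omitted (see the hunks below for the real
bodies). All of the names here come from the patch itself; the point is
that __do_kmalloc_node(), kmalloc_slab() and slab_alloc_node() now share
one translation unit, so the extern __kmem_cache_alloc_node() trampoline
goes away and the fast path can be inlined:

static __always_inline
void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
{
	/* Too large for the kmalloc caches: go straight to the page allocator. */
	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
		return __kmalloc_large_node(size, flags, node);

	if (unlikely(!size))
		return ZERO_SIZE_PTR;

	/* Size-class lookup and fast-path allocation, same compilation unit. */
	return slab_alloc_node(kmalloc_slab(size, flags, caller),
			       NULL, flags, node, caller, size);
}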
 mm/slab.h        |   3 --
 mm/slab_common.c | 119 -----------------------------------------
 mm/slub.c        | 126 ++++++++++++++++++++++++++++++++++++++---
 3 files changed, 118 insertions(+), 130 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 744384efa7be..eb04c8a5dbd1 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -415,9 +415,6 @@ kmalloc_slab(size_t size, gfp_t flags, unsigned long caller)
 	return kmalloc_caches[kmalloc_type(flags, caller)][index];
 }
 
-void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
-			      int node, size_t orig_size,
-			      unsigned long caller);
 gfp_t kmalloc_fix_flags(gfp_t flags);
 
 /* Functions provided by the slab allocators */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 31ade17a7ad9..238293b1dbe1 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -936,50 +936,6 @@ void __init create_kmalloc_caches(slab_flags_t flags)
 	slab_state = UP;
 }
 
-static void *__kmalloc_large_node(size_t size, gfp_t flags, int node);
-static __always_inline
-void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
-{
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
-		ret = __kmalloc_large_node(size, flags, node);
-		trace_kmalloc(caller, ret, size,
-			      PAGE_SIZE << get_order(size), flags, node);
-		return ret;
-	}
-
-	if (unlikely(!size))
-		return ZERO_SIZE_PTR;
-
-	s = kmalloc_slab(size, flags, caller);
-
-	ret = __kmem_cache_alloc_node(s, flags, node, size, caller);
-	ret = kasan_kmalloc(s, ret, size, flags);
-	trace_kmalloc(caller, ret, size, s->size, flags, node);
-	return ret;
-}
-
-void *__kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	return __do_kmalloc_node(size, flags, node, _RET_IP_);
-}
-EXPORT_SYMBOL(__kmalloc_node);
-
-void *__kmalloc(size_t size, gfp_t flags)
-{
-	return __do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_);
-}
-EXPORT_SYMBOL(__kmalloc);
-
-void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
-				  int node, unsigned long caller)
-{
-	return __do_kmalloc_node(size, flags, node, caller);
-}
-EXPORT_SYMBOL(__kmalloc_node_track_caller);
-
 /**
  * __ksize -- Report full size of underlying allocation
  * @object: pointer to the object
@@ -1016,30 +972,6 @@ size_t __ksize(const void *object)
 	return slab_ksize(folio_slab(folio)->slab_cache);
 }
 
-void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
-{
-	void *ret = __kmem_cache_alloc_node(s, gfpflags, NUMA_NO_NODE,
-					    size, _RET_IP_);
-
-	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags, NUMA_NO_NODE);
-
-	ret = kasan_kmalloc(s, ret, size, gfpflags);
-	return ret;
-}
-EXPORT_SYMBOL(kmalloc_trace);
-
-void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
-			 int node, size_t size)
-{
-	void *ret = __kmem_cache_alloc_node(s, gfpflags, node, size, _RET_IP_);
-
-	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags, node);
-
-	ret = kasan_kmalloc(s, ret, size, gfpflags);
-	return ret;
-}
-EXPORT_SYMBOL(kmalloc_node_trace);
-
 gfp_t kmalloc_fix_flags(gfp_t flags)
 {
 	gfp_t invalid_mask = flags & GFP_SLAB_BUG_MASK;
@@ -1052,57 +984,6 @@ gfp_t kmalloc_fix_flags(gfp_t flags)
 	return flags;
 }
 
-/*
- * To avoid unnecessary overhead, we pass through large allocation requests
- * directly to the page allocator. We use __GFP_COMP, because we will need to
- * know the allocation order to free the pages properly in kfree.
- */
-
-static void *__kmalloc_large_node(size_t size, gfp_t flags, int node)
-{
-	struct page *page;
-	void *ptr = NULL;
-	unsigned int order = get_order(size);
-
-	if (unlikely(flags & GFP_SLAB_BUG_MASK))
-		flags = kmalloc_fix_flags(flags);
-
-	flags |= __GFP_COMP;
-	page = alloc_pages_node(node, flags, order);
-	if (page) {
-		ptr = page_address(page);
-		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
-				      PAGE_SIZE << order);
-	}
-
-	ptr = kasan_kmalloc_large(ptr, size, flags);
-	/* As ptr might get tagged, call kmemleak hook after KASAN. */
-	kmemleak_alloc(ptr, size, 1, flags);
-	kmsan_kmalloc_large(ptr, size, flags);
-
-	return ptr;
-}
-
-void *kmalloc_large(size_t size, gfp_t flags)
-{
-	void *ret = __kmalloc_large_node(size, flags, NUMA_NO_NODE);
-
-	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << get_order(size),
-		      flags, NUMA_NO_NODE);
-	return ret;
-}
-EXPORT_SYMBOL(kmalloc_large);
-
-void *kmalloc_large_node(size_t size, gfp_t flags, int node)
-{
-	void *ret = __kmalloc_large_node(size, flags, node);
-
-	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << get_order(size),
-		      flags, node);
-	return ret;
-}
-EXPORT_SYMBOL(kmalloc_large_node);
-
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */
 static void freelist_randomize(unsigned int *list,
diff --git a/mm/slub.c b/mm/slub.c
index 52e2a65b1b11..b44243e7cc5e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3851,14 +3851,6 @@ void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_lru);
 
-void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
-			      int node, size_t orig_size,
-			      unsigned long caller)
-{
-	return slab_alloc_node(s, NULL, gfpflags, node,
-			       caller, orig_size);
-}
-
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 {
 	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, s->object_size);
@@ -3869,6 +3861,124 @@ void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node);
 
+/*
+ * To avoid unnecessary overhead, we pass through large allocation requests
+ * directly to the page allocator. We use __GFP_COMP, because we will need to
+ * know the allocation order to free the pages properly in kfree.
+ */
+static void *__kmalloc_large_node(size_t size, gfp_t flags, int node)
+{
+	struct page *page;
+	void *ptr = NULL;
+	unsigned int order = get_order(size);
+
+	if (unlikely(flags & GFP_SLAB_BUG_MASK))
+		flags = kmalloc_fix_flags(flags);
+
+	flags |= __GFP_COMP;
+	page = alloc_pages_node(node, flags, order);
+	if (page) {
+		ptr = page_address(page);
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+				      PAGE_SIZE << order);
+	}
+
+	ptr = kasan_kmalloc_large(ptr, size, flags);
+	/* As ptr might get tagged, call kmemleak hook after KASAN. */
+	kmemleak_alloc(ptr, size, 1, flags);
+	kmsan_kmalloc_large(ptr, size, flags);
+
+	return ptr;
+}
+
+void *kmalloc_large(size_t size, gfp_t flags)
+{
+	void *ret = __kmalloc_large_node(size, flags, NUMA_NO_NODE);
+
+	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << get_order(size),
+		      flags, NUMA_NO_NODE);
+	return ret;
+}
+EXPORT_SYMBOL(kmalloc_large);
+
+void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+{
+	void *ret = __kmalloc_large_node(size, flags, node);
+
+	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << get_order(size),
+		      flags, node);
+	return ret;
+}
+EXPORT_SYMBOL(kmalloc_large_node);
+
+static __always_inline
+void *__do_kmalloc_node(size_t size, gfp_t flags, int node,
+			unsigned long caller)
+{
+	struct kmem_cache *s;
+	void *ret;
+
+	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
+		ret = __kmalloc_large_node(size, flags, node);
+		trace_kmalloc(caller, ret, size,
+			      PAGE_SIZE << get_order(size), flags, node);
+		return ret;
+	}
+
+	if (unlikely(!size))
+		return ZERO_SIZE_PTR;
+
+	s = kmalloc_slab(size, flags, caller);
+
+	ret = slab_alloc_node(s, NULL, flags, node, caller, size);
+	ret = kasan_kmalloc(s, ret, size, flags);
+	trace_kmalloc(caller, ret, size, s->size, flags, node);
+	return ret;
+}
+
+void *__kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	return __do_kmalloc_node(size, flags, node, _RET_IP_);
+}
+EXPORT_SYMBOL(__kmalloc_node);
+
+void *__kmalloc(size_t size, gfp_t flags)
+{
+	return __do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_);
+}
+EXPORT_SYMBOL(__kmalloc);
+
+void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
+				  int node, unsigned long caller)
+{
+	return __do_kmalloc_node(size, flags, node, caller);
+}
+EXPORT_SYMBOL(__kmalloc_node_track_caller);
+
+void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
+{
+	void *ret = slab_alloc_node(s, NULL, gfpflags, NUMA_NO_NODE,
+				    _RET_IP_, size);
+
+	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags, NUMA_NO_NODE);
+
+	ret = kasan_kmalloc(s, ret, size, gfpflags);
+	return ret;
+}
+EXPORT_SYMBOL(kmalloc_trace);
+
+void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
+			 int node, size_t size)
+{
+	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, size);
+
+	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags, node);
+
+	ret = kasan_kmalloc(s, ret, size, gfpflags);
+	return ret;
+}
+EXPORT_SYMBOL(kmalloc_node_trace);
+
 static noinline void free_to_partial_list(
 	struct kmem_cache *s, struct slab *slab,
 	void *head, void *tail, int bulk_cnt,