From patchwork Tue Jul 12 13:39:35 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12915004
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Vlastimil Babka, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Joe Perches, Vasily Averin,
    Matthew WilCox
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 05/15] mm/sl[au]b: factor out __do_kmalloc_node()
Date: Tue, 12 Jul 2022 13:39:35 +0000
Message-Id: <20220712133946.307181-6-42.hyeyoo@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220712133946.307181-1-42.hyeyoo@gmail.com>
References: <20220712133946.307181-1-42.hyeyoo@gmail.com>

__kmalloc(), __kmalloc_node() and __kmalloc_node_track_caller() mostly do
the same job. Factor out the common code into __do_kmalloc_node().

Note that this patch also fixes the missing kasan_kmalloc() in SLUB's
__kmalloc_node_track_caller().

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 mm/slab.c | 30 +----------------------
 mm/slub.c | 71 +++++++++++++++----------------------------------------
 2 files changed, 20 insertions(+), 81 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index da2f6a5dd8fa..ab34727d61b2 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3631,37 +3631,9 @@ void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 }
 #endif
 
-/**
- * __do_kmalloc - allocate memory
- * @size: how many bytes of memory are required.
- * @flags: the type of memory to allocate (see kmalloc).
- * @caller: function caller for debug tracking of the caller
- *
- * Return: pointer to the allocated memory or %NULL in case of error
- */
-static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
-					  unsigned long caller)
-{
-	struct kmem_cache *cachep;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return NULL;
-	cachep = kmalloc_slab(size, flags);
-	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
-		return cachep;
-	ret = slab_alloc(cachep, NULL, flags, size, caller);
-
-	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmalloc(caller, ret, cachep,
-		      size, cachep->size, flags);
-
-	return ret;
-}
-
 void *__kmalloc(size_t size, gfp_t flags)
 {
-	return __do_kmalloc(size, flags, _RET_IP_);
+	return __do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_);
 }
 EXPORT_SYMBOL(__kmalloc);
 
diff --git a/mm/slub.c b/mm/slub.c
index 7c284535a62b..2ccc473e0ae7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4402,29 +4402,6 @@ static int __init setup_slub_min_objects(char *str)
 
 __setup("slub_min_objects=", setup_slub_min_objects);
 
-void *__kmalloc(size_t size, gfp_t flags)
-{
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return kmalloc_large(size, flags);
-
-	s = kmalloc_slab(size, flags);
-
-	if (unlikely(ZERO_OR_NULL_PTR(s)))
-		return s;
-
-	ret = slab_alloc(s, NULL, flags, _RET_IP_, size);
-
-	trace_kmalloc(_RET_IP_, ret, s, size, s->size, flags);
-
-	ret = kasan_kmalloc(s, ret, size, flags);
-
-	return ret;
-}
-EXPORT_SYMBOL(__kmalloc);
-
 static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
 	struct page *page;
@@ -4442,7 +4419,8 @@ static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 	return kmalloc_large_node_hook(ptr, size, flags);
 }
 
-void *__kmalloc_node(size_t size, gfp_t flags, int node)
+static __always_inline
+void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
 {
 	struct kmem_cache *s;
 	void *ret;
@@ -4450,7 +4428,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
 		ret = kmalloc_large_node(size, flags, node);
 
-		trace_kmalloc_node(_RET_IP_, ret, NULL,
+		trace_kmalloc_node(caller, ret, NULL,
 				   size, PAGE_SIZE << get_order(size),
 				   flags, node);
 
@@ -4462,16 +4440,28 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	if (unlikely(ZERO_OR_NULL_PTR(s)))
 		return s;
 
-	ret = slab_alloc_node(s, NULL, flags, node, _RET_IP_, size);
+	ret = slab_alloc_node(s, NULL, flags, node, caller, size);
 
-	trace_kmalloc_node(_RET_IP_, ret, s, size, s->size, flags, node);
+	trace_kmalloc_node(caller, ret, s, size, s->size, flags, node);
 
 	ret = kasan_kmalloc(s, ret, size, flags);
 
 	return ret;
 }
+
+void *__kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	return __do_kmalloc_node(size, flags, node, _RET_IP_);
+}
 EXPORT_SYMBOL(__kmalloc_node);
 
+void *__kmalloc(size_t size, gfp_t flags)
+{
+	return __do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_);
+}
+EXPORT_SYMBOL(__kmalloc);
+
+
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
  * Rejects incorrectly sized objects and objects that are to be copied
@@ -4905,32 +4895,9 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 }
 
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
-				  int node, unsigned long caller)
+				   int node, unsigned long caller)
 {
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
-		ret = kmalloc_large_node(size, gfpflags, node);
-
-		trace_kmalloc_node(caller, ret, NULL,
-				   size, PAGE_SIZE << get_order(size),
-				   gfpflags, node);
-
-		return ret;
-	}
-
-	s = kmalloc_slab(size, gfpflags);
-
-	if (unlikely(ZERO_OR_NULL_PTR(s)))
-		return s;
-
-	ret = slab_alloc_node(s, NULL, gfpflags, node, caller, size);
-
-	/* Honor the call site pointer we received. */
-	trace_kmalloc_node(caller, ret, s, size, s->size, gfpflags, node);
-
-	return ret;
+	return __do_kmalloc_node(size, gfpflags, node, caller);
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
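
For reference, below is a minimal user-space sketch (the my_* names are
hypothetical stand-ins, not kernel code) of the shape this patch converges
on: all three exported entry points become thin wrappers around one
always-inlined helper, so the allocation, tracing and KASAN-style hooks only
have to live in a single place.

#include <stddef.h>
#include <stdio.h>

#define NUMA_NO_NODE (-1)	/* "no preferred node", same convention as the kernel */

/*
 * Stand-in for __do_kmalloc_node(): the one place where the common
 * allocation work (and any tracing/sanitizer hooks) would happen.
 */
static inline void *my_do_kmalloc_node(size_t size, int node, void *caller)
{
	printf("alloc %zu bytes, node %d, caller %p\n", size, node, caller);
	return NULL;	/* placeholder, no real allocator behind this sketch */
}

void *my_kmalloc(size_t size)
{
	/* Each wrapper records its own return address, like _RET_IP_ */
	return my_do_kmalloc_node(size, NUMA_NO_NODE, __builtin_return_address(0));
}

void *my_kmalloc_node(size_t size, int node)
{
	return my_do_kmalloc_node(size, node, __builtin_return_address(0));
}

/* The _track_caller() variant just forwards the caller it was given. */
void *my_kmalloc_node_track_caller(size_t size, int node, void *caller)
{
	return my_do_kmalloc_node(size, node, caller);
}

int main(void)
{
	my_kmalloc(32);
	my_kmalloc_node(64, 0);
	my_kmalloc_node_track_caller(128, 0, __builtin_return_address(0));
	return 0;
}

Built with GCC or Clang (for __builtin_return_address()), this prints one
line per call while preserving the original call site for the
_track_caller() variant, mirroring how the patch threads "caller" through
__do_kmalloc_node().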