From patchwork Thu Apr 14 08:57:10 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12813159
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 06/23] mm/sl[auo]b: fold kmalloc_order_trace() into kmalloc_large()
Date: Thu, 14 Apr 2022 17:57:10 +0900
Message-Id: <20220414085727.643099-7-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

There is no caller of kmalloc_order_trace() except kmalloc_large().
Fold it into kmalloc_large() and remove kmalloc_order{,_trace}().

Also add the tracepoint in kmalloc_large() that was previously in
kmalloc_order_trace().

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
Changes from v1:
- updated some changelog (kmalloc_order() -> kmalloc_order_trace())

 include/linux/slab.h | 22 ++--------------------
 mm/slab_common.c     | 14 +++-----------
 2 files changed, 5 insertions(+), 31 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 4c06d15f731c..6f6e22959b39 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -484,26 +484,8 @@ static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, g
 }
 #endif /* CONFIG_TRACING */
 
-extern void *kmalloc_order(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment
-									 __alloc_size(1);
-
-#ifdef CONFIG_TRACING
-extern void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
-				__assume_page_alignment __alloc_size(1);
-#else
-static __always_inline __alloc_size(1) void *kmalloc_order_trace(size_t size, gfp_t flags,
-								  unsigned int order)
-{
-	return kmalloc_order(size, flags, order);
-}
-#endif
-
-static __always_inline __alloc_size(1) void *kmalloc_large(size_t size, gfp_t flags)
-{
-	unsigned int order = get_order(size);
-	return kmalloc_order_trace(size, flags, order);
-}
-
+extern void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment
+						      __alloc_size(1);
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
diff --git a/mm/slab_common.c b/mm/slab_common.c
index c4d63f2c78b8..308cd5449285 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -925,10 +925,11 @@ gfp_t kmalloc_fix_flags(gfp_t flags)
  * directly to the page allocator. We use __GFP_COMP, because we will need to
  * know the allocation order to free the pages properly in kfree.
  */
-void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
+void *kmalloc_large(size_t size, gfp_t flags)
 {
 	void *ret = NULL;
 	struct page *page;
+	unsigned int order = get_order(size);
 
 	if (unlikely(flags & GFP_SLAB_BUG_MASK))
 		flags = kmalloc_fix_flags(flags);
@@ -943,19 +944,10 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	ret = kasan_kmalloc_large(ret, size, flags);
 	/* As ret might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ret, size, 1, flags);
-	return ret;
-}
-EXPORT_SYMBOL(kmalloc_order);
-
-#ifdef CONFIG_TRACING
-void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
-{
-	void *ret = kmalloc_order(size, flags, order);
 	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << order, flags);
 	return ret;
 }
-EXPORT_SYMBOL(kmalloc_order_trace);
-#endif
+EXPORT_SYMBOL(kmalloc_large);
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */
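
For reference, a rough sketch (not part of the patch) of how kmalloc_large()
in mm/slab_common.c reads once both hunks are applied. The page allocation
between the flag fixup and the KASAN/kmemleak hooks lies outside the hunk
context above, so it is only summarized by a placeholder comment here:

void *kmalloc_large(size_t size, gfp_t flags)
{
	void *ret = NULL;
	struct page *page;
	unsigned int order = get_order(size);

	if (unlikely(flags & GFP_SLAB_BUG_MASK))
		flags = kmalloc_fix_flags(flags);

	/*
	 * Elided here (unchanged by this patch): the order-sized, __GFP_COMP
	 * page allocation whose address is assigned to ret on success.
	 */

	ret = kasan_kmalloc_large(ret, size, flags);
	/* As ret might get tagged, call kmemleak hook after KASAN. */
	kmemleak_alloc(ret, size, 1, flags);
	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << order, flags);
	return ret;
}
EXPORT_SYMBOL(kmalloc_large);

For callers nothing changes: large allocations still end up in
kmalloc_large() and still get a page-aligned result, but the trace_kmalloc()
event is now emitted directly from this function instead of from the removed
kmalloc_order_trace() wrapper.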