From patchwork Thu Apr 14 08:57:13 2022
X-Patchwork-Submitter: "Harry (Hyeonggon) Yoo" <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12813161
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew Wilcox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 09/23] mm/slab_common: cleanup kmalloc_large()
Date: Thu, 14 Apr 2022 17:57:13 +0900
Message-Id: <20220414085727.643099-10-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Now that kmalloc_large() and kmalloc_large_node() do the same job, make
kmalloc_large() a wrapper of kmalloc_large_node().
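As an illustration only (not part of the patch): the cleanup applies a common
pattern where the generic entry point becomes a trivial wrapper that delegates
to the node-aware variant with a "no preference" sentinel. A minimal userspace
sketch, using the hypothetical names my_alloc()/my_alloc_node() as stand-ins
for kmalloc_large()/kmalloc_large_node():

```c
/*
 * Illustration only, not kernel code: the generic allocator entry point
 * delegates to the node-aware variant, so there is a single implementation.
 * my_alloc()/my_alloc_node() and MY_NO_NODE are hypothetical stand-ins for
 * kmalloc_large()/kmalloc_large_node() and NUMA_NO_NODE.
 */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define MY_NO_NODE (-1)	/* plays the role of NUMA_NO_NODE */

/* Node-aware variant: the one real implementation, like kmalloc_large_node(). */
static void *my_alloc_node(size_t size, int node)
{
	(void)node;	/* a userspace sketch has no NUMA placement to do */
	return malloc(size);
}

/* Generic variant: now just a static inline wrapper, as in the patch. */
static inline void *my_alloc(size_t size)
{
	return my_alloc_node(size, MY_NO_NODE);
}
```

Keeping the wrapper static inline in the header means the cleanup removes the
duplicated function body without adding a function-call hop for callers.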
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/linux/slab.h |  9 ++++++---
 mm/slab_common.c     | 24 ------------------------
 2 files changed, 6 insertions(+), 27 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 97336acbebbf..143830f57a7f 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -484,11 +484,14 @@ static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, g
 }
 #endif /* CONFIG_TRACING */
 
-extern void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment
-						     __alloc_size(1);
-
 extern void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 				__assume_page_alignment __alloc_size(1);
+
+static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
+{
+	return kmalloc_large_node(size, flags, NUMA_NO_NODE);
+}
+
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
diff --git a/mm/slab_common.c b/mm/slab_common.c
index cf17be8cd9ad..30684efc89d7 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -925,30 +925,6 @@ gfp_t kmalloc_fix_flags(gfp_t flags)
  * directly to the page allocator. We use __GFP_COMP, because we will need to
  * know the allocation order to free the pages properly in kfree.
  */
-void *kmalloc_large(size_t size, gfp_t flags)
-{
-	void *ret = NULL;
-	struct page *page;
-	unsigned int order = get_order(size);
-
-	if (unlikely(flags & GFP_SLAB_BUG_MASK))
-		flags = kmalloc_fix_flags(flags);
-
-	flags |= __GFP_COMP;
-	page = alloc_pages(flags, order);
-	if (likely(page)) {
-		ret = page_address(page);
-		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
-				      PAGE_SIZE << order);
-	}
-	ret = kasan_kmalloc_large(ret, size, flags);
-	/* As ret might get tagged, call kmemleak hook after KASAN. */
-	kmemleak_alloc(ret, size, 1, flags);
-	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << order, flags);
-	return ret;
-}
-EXPORT_SYMBOL(kmalloc_large);
-
 void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
 	struct page *page;