From patchwork Thu Apr 14 08:57:11 2022
X-Patchwork-Submitter: "Harry (Hyeonggon) Yoo" <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12813160
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew Wilcox, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Andrew Morton,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 07/23] mm/slub: move kmalloc_large_node() to slab_common.c
Date: Thu, 14 Apr 2022 17:57:11 +0900
Message-Id: <20220414085727.643099-8-42.hyeyoo@gmail.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

In a later patch, SLAB will also pass requests larger than an order-1
page to the page allocator. Move kmalloc_large_node() to slab_common.c.
Fold kmalloc_large_node_hook() into kmalloc_large_node() as there is no
other caller.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h |  3 +++
 mm/slab_common.c     | 22 ++++++++++++++++++++++
 mm/slub.c            | 25 -------------------------
 3 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 6f6e22959b39..97336acbebbf 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -486,6 +486,9 @@ static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, g
 
 extern void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment
 						      __alloc_size(1);
+
+extern void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+						      __assume_page_alignment __alloc_size(1);
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 308cd5449285..e72089515030 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -949,6 +949,28 @@ void *kmalloc_large(size_t size, gfp_t flags)
 }
 EXPORT_SYMBOL(kmalloc_large);
 
+void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+{
+	struct page *page;
+	void *ptr = NULL;
+	unsigned int order = get_order(size);
+
+	flags |= __GFP_COMP;
+	page = alloc_pages_node(node, flags, order);
+	if (page) {
+		ptr = page_address(page);
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+				      PAGE_SIZE << order);
+	}
+
+	ptr = kasan_kmalloc_large(ptr, size, flags);
+	/* As ptr might get tagged, call kmemleak hook after KASAN. */
+	kmemleak_alloc(ptr, size, 1, flags);
+
+	return ptr;
+}
+EXPORT_SYMBOL(kmalloc_large_node);
+
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */
 static void freelist_randomize(struct rnd_state *state, unsigned int *list,
diff --git a/mm/slub.c b/mm/slub.c
index 44170b4f084b..640712706f2b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1679,14 +1679,6 @@ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab,
  * Hooks for other subsystems that check memory allocations. In a typical
  * production configuration these hooks all should produce no code at all.
  */
-static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
-{
-	ptr = kasan_kmalloc_large(ptr, size, flags);
-	/* As ptr might get tagged, call kmemleak hook after KASAN. */
-	kmemleak_alloc(ptr, size, 1, flags);
-	return ptr;
-}
-
 static __always_inline void kfree_hook(void *x)
 {
 	kmemleak_free(x);
@@ -4399,23 +4391,6 @@ static int __init setup_slub_min_objects(char *str)
 
 __setup("slub_min_objects=", setup_slub_min_objects);
 
-static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
-{
-	struct page *page;
-	void *ptr = NULL;
-	unsigned int order = get_order(size);
-
-	flags |= __GFP_COMP;
-	page = alloc_pages_node(node, flags, order);
-	if (page) {
-		ptr = page_address(page);
-		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
-				      PAGE_SIZE << order);
-	}
-
-	return kmalloc_large_node_hook(ptr, size, flags);
-}
-
 void *__kmalloc_node(size_t size, gfp_t flags, int node)
 {
 	struct kmem_cache *s;
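
[Editor's note] For reference, below is a minimal sketch (not part of
this patch) of how the moved helper gets exercised: kmalloc_node()
requests above KMALLOC_MAX_CACHE_SIZE (an order-1 page under SLUB)
bypass the slab caches and reach kmalloc_large_node(), now in
mm/slab_common.c. The module name and the 64 KiB size are illustrative
assumptions, not from the patch.

/*
 * Hypothetical demo module: exercise the large-kmalloc path served by
 * kmalloc_large_node().
 */
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/mm.h>
#include <linux/numa.h>

static void *buf;

static int __init large_node_demo_init(void)
{
	/*
	 * 64 KiB > KMALLOC_MAX_CACHE_SIZE, so __kmalloc_node() routes
	 * this through kmalloc_large_node(): __GFP_COMP pages straight
	 * from the page allocator, accounted as NR_SLAB_UNRECLAIMABLE_B,
	 * with the KASAN hook called before the kmemleak hook.
	 */
	buf = kmalloc_node(64 << 10, GFP_KERNEL, NUMA_NO_NODE);
	if (!buf)
		return -ENOMEM;

	/* Large kmalloc memory is page aligned (__assume_page_alignment). */
	WARN_ON(!PAGE_ALIGNED(buf));
	return 0;
}

static void __exit large_node_demo_exit(void)
{
	kfree(buf);
}

module_init(large_node_demo_init);
module_exit(large_node_demo_exit);
MODULE_LICENSE("GPL");

Folding kmalloc_large_node_hook() into its only caller keeps the
KASAN-before-kmemleak ordering in one place, so the later SLAB patch
can reuse the same path without duplicating the hooks.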