From patchwork Tue Jul 12 13:39:38 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12915007
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, Joe Perches, Vasily Averin,
	Matthew WilCox
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 08/15] mm/slab_common: kmalloc_node: pass large requests to page allocator
Date: Tue, 12 Jul 2022 13:39:38 +0000
Message-Id: <20220712133946.307181-9-42.hyeyoo@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220712133946.307181-1-42.hyeyoo@gmail.com>
References: <20220712133946.307181-1-42.hyeyoo@gmail.com>

Now that kmalloc_large_node() is in common code, pass large requests
to the page allocator in kmalloc_node() using kmalloc_large_node().

One problem is that there is currently no tracepoint in
kmalloc_large_node(). Instead of simply putting a tracepoint in it,
provide kmalloc_large_node{,_notrace} and pick the variant depending
on the caller, so that a useful caller address is shown for both the
inlined kmalloc_node() and __kmalloc_node_track_caller() when large
objects are allocated.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
v3: This patch is new in v3; it avoids a missing caller address in
__kmalloc_large_node_track_caller() when kmalloc_large_node() is
called.
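Not part of the patch, just an illustration for reviewers: a minimal
caller-side sketch of the intended behaviour. The helper name
alloc_large_buffer() and the 64 KiB size below are made up for this
example; the size is assumed to exceed KMALLOC_MAX_CACHE_SIZE, which
holds for SLUB's two-page limit but not necessarily for SLAB.

/*
 * Illustration only -- not part of this patch.  With a compile-time
 * constant size above KMALLOC_MAX_CACHE_SIZE, the inlined
 * kmalloc_node() below now branches straight to kmalloc_large_node(),
 * which emits the kmalloc_node tracepoint itself using _RET_IP_,
 * while __kmalloc_node_track_caller() uses the _notrace variant and
 * reports its own caller address.
 */
#include <linux/slab.h>

static void *alloc_large_buffer(int node)
{
	/* 64 KiB: arbitrary example size, assumed > KMALLOC_MAX_CACHE_SIZE */
	return kmalloc_node(64 * 1024, GFP_KERNEL, node);
}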
 include/linux/slab.h | 26 +++++++++++++++++++-------
 mm/slab.h            |  2 ++
 mm/slab_common.c     | 17 ++++++++++++++++-
 mm/slub.c            |  2 +-
 4 files changed, 38 insertions(+), 9 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 082499306098..fd2e129fc813 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -571,23 +571,35 @@ static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
 	return __kmalloc(size, flags);
 }
 
+#ifndef CONFIG_SLOB
 static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
 {
-#ifndef CONFIG_SLOB
-	if (__builtin_constant_p(size) &&
-		size <= KMALLOC_MAX_CACHE_SIZE) {
-		unsigned int i = kmalloc_index(size);
+	if (__builtin_constant_p(size)) {
+		unsigned int index;
 
-		if (!i)
+		if (size > KMALLOC_MAX_CACHE_SIZE)
+			return kmalloc_large_node(size, flags, node);
+
+		index = kmalloc_index(size);
+
+		if (!index)
 			return ZERO_SIZE_PTR;
 
 		return kmem_cache_alloc_node_trace(
-				kmalloc_caches[kmalloc_type(flags)][i],
+				kmalloc_caches[kmalloc_type(flags)][index],
 						flags, node, size);
 	}
-#endif
 	return __kmalloc_node(size, flags, node);
 }
+#else
+static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
+		return kmalloc_large_node(size, flags, node);
+
+	return __kmalloc_node(size, flags, node);
+}
+#endif
 
 /**
  * kmalloc_array - allocate memory for an array.
diff --git a/mm/slab.h b/mm/slab.h
index a8d5eb1c323f..7cb51ff44f0c 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -273,6 +273,8 @@ void create_kmalloc_caches(slab_flags_t);
 
 /* Find the kmalloc slab corresponding for a certain size */
 struct kmem_cache *kmalloc_slab(size_t, gfp_t);
+
+void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node);
 #endif
 
 gfp_t kmalloc_fix_flags(gfp_t flags);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 6f855587b635..dc872e0ef0fc 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -956,7 +956,8 @@ void *kmalloc_large(size_t size, gfp_t flags)
 }
 EXPORT_SYMBOL(kmalloc_large);
 
-void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+static __always_inline
+void *__kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
 {
 	struct page *page;
 	void *ptr = NULL;
@@ -976,6 +977,20 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 
 	return ptr;
 }
+
+void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
+{
+	return __kmalloc_large_node_notrace(size, flags, node);
+}
+
+void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+{
+	void *ret = __kmalloc_large_node_notrace(size, flags, node);
+
+	trace_kmalloc_node(_RET_IP_, ret, NULL, size,
+			   PAGE_SIZE << get_order(size), flags, node);
+	return ret;
+}
 EXPORT_SYMBOL(kmalloc_large_node);
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
diff --git a/mm/slub.c b/mm/slub.c
index f22a84dd27de..3d02cf44adf7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4401,7 +4401,7 @@ void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller
 	void *ret;
 
 	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
-		ret = kmalloc_large_node(size, flags, node);
+		ret = kmalloc_large_node_notrace(size, flags, node);
 
 		trace_kmalloc_node(caller, ret, NULL, size,
 				   PAGE_SIZE << get_order(size),