From patchwork Thu Apr 14 08:57:05 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 01/23] mm/slab: move NUMA-related code to __do_cache_alloc()
Date: Thu, 14 Apr 2022 17:57:05 +0900
Message-Id: <20220414085727.643099-2-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

To make slab_alloc_node() independent of the NUMA configuration, move the
NUMA fallback/alternate allocation code into __do_cache_alloc().

One functional change: the availability of the node is no longer checked
when allocating from the local node.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
Changes from v1:
- Undo the removal of the alternate_node_alloc() path taken when no node
  id is specified (removing it was a mistake).

 mm/slab.c | 68 +++++++++++++++++++++++++------------------------
 1 file changed, 31 insertions(+), 37 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index e882657c1494..d854c24d5f5a 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3187,13 +3187,14 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
 	return obj ? obj : fallback_alloc(cachep, flags);
 }
 
+static void *__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid);
+
 static __always_inline void *
 slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size,
 		unsigned long caller)
 {
 	unsigned long save_flags;
 	void *ptr;
-	int slab_node = numa_mem_id();
 	struct obj_cgroup *objcg = NULL;
 	bool init = false;
 
@@ -3208,30 +3209,7 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
 
 	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);
-
-	if (nodeid == NUMA_NO_NODE)
-		nodeid = slab_node;
-
-	if (unlikely(!get_node(cachep, nodeid))) {
-		/* Node not bootstrapped yet */
-		ptr = fallback_alloc(cachep, flags);
-		goto out;
-	}
-
-	if (nodeid == slab_node) {
-		/*
-		 * Use the locally cached objects if possible.
-		 * However ____cache_alloc does not allow fallback
-		 * to other nodes. It may fail while we still have
-		 * objects on other nodes available.
-		 */
-		ptr = ____cache_alloc(cachep, flags);
-		if (ptr)
-			goto out;
-	}
-	/* ___cache_alloc_node can fall back to other nodes */
-	ptr = ____cache_alloc_node(cachep, flags, nodeid);
-out:
+	ptr = __do_cache_alloc(cachep, flags, nodeid);
 	local_irq_restore(save_flags);
 	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
 	init = slab_want_init_on_alloc(flags, cachep);
@@ -3242,31 +3220,46 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
 }
 
 static __always_inline void *
-__do_cache_alloc(struct kmem_cache *cache, gfp_t flags)
+__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
 	void *objp;
+	int slab_node = numa_mem_id();
 
-	if (current->mempolicy || cpuset_do_slab_mem_spread()) {
-		objp = alternate_node_alloc(cache, flags);
-		if (objp)
-			goto out;
+	if (nodeid == NUMA_NO_NODE) {
+		if (current->mempolicy || cpuset_do_slab_mem_spread()) {
+			objp = alternate_node_alloc(cachep, flags);
+			if (objp)
+				goto out;
+		}
+		/*
+		 * Use the locally cached objects if possible.
+		 * However ____cache_alloc does not allow fallback
+		 * to other nodes. It may fail while we still have
+		 * objects on other nodes available.
+		 */
+		objp = ____cache_alloc(cachep, flags);
+		nodeid = slab_node;
+	} else if (nodeid == slab_node) {
+		objp = ____cache_alloc(cachep, flags);
+	} else if (!get_node(cachep, nodeid)) {
+		/* Node not bootstrapped yet */
+		objp = fallback_alloc(cachep, flags);
+		goto out;
 	}
-	objp = ____cache_alloc(cache, flags);
 
 	/*
 	 * We may just have run out of memory on the local node.
 	 * ____cache_alloc_node() knows how to locate memory on other nodes
 	 */
 	if (!objp)
-		objp = ____cache_alloc_node(cache, flags, numa_mem_id());
-
-  out:
+		objp = ____cache_alloc_node(cachep, flags, nodeid);
+out:
 	return objp;
 }
 
 #else
 
 static __always_inline void *
-__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
+__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid __maybe_unused)
 {
 	return ____cache_alloc(cachep, flags);
 }
@@ -3293,7 +3286,7 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 
 	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);
-	objp = __do_cache_alloc(cachep, flags);
+	objp = __do_cache_alloc(cachep, flags, NUMA_NO_NODE);
 	local_irq_restore(save_flags);
 	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
 	prefetchw(objp);
@@ -3532,7 +3525,8 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 
 	local_irq_disable();
 	for (i = 0; i < size; i++) {
-		void *objp = kfence_alloc(s, s->object_size, flags) ?: __do_cache_alloc(s, flags);
+		void *objp = kfence_alloc(s, s->object_size, flags) ?:
+			     __do_cache_alloc(s, flags, NUMA_NO_NODE);
 
 		if (unlikely(!objp))
 			goto error;
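To make the control flow of the new __do_cache_alloc() easier to see
outside the kernel, here is a minimal userspace C sketch of the same
shape (hypothetical names do_alloc(), alloc_local(), alloc_on_node(),
NODE_ANY; this is not kernel code): all node selection and fallback
policy lives in one helper, so the caller needs no node logic of its own.

#include <stdio.h>
#include <stdlib.h>

#define NODE_ANY (-1)			/* stand-in for NUMA_NO_NODE */

static int local_node(void) { return 0; }	/* stand-in for numa_mem_id() */

static void *alloc_local(size_t size) { return malloc(size); }

static void *alloc_on_node(size_t size, int node)
{
	(void)node;		/* a real allocator would target this node */
	return malloc(size);
}

/* Single place that resolves NODE_ANY and handles fallback. */
static void *do_alloc(size_t size, int node)
{
	void *p = NULL;

	if (node == NODE_ANY) {
		p = alloc_local(size);	/* prefer locally cached objects */
		node = local_node();
	} else if (node == local_node()) {
		p = alloc_local(size);
	}

	if (!p)		/* local attempt failed or node was remote */
		p = alloc_on_node(size, node);
	return p;
}

int main(void)
{
	void *p = do_alloc(64, NODE_ANY);	/* caller stays node-agnostic */

	printf("allocated %p\n", p);
	free(p);
	return 0;
}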
From patchwork Thu Apr 14 08:57:06 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 02/23] mm/slab: cleanup slab_alloc() and slab_alloc_node()
Date: Thu, 14 Apr 2022 17:57:06 +0900
Message-Id: <20220414085727.643099-3-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Make slab_alloc_node() available even when CONFIG_NUMA=n, and make
slab_alloc() a wrapper of slab_alloc_node(). This is necessary for
further cleanups.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 mm/slab.c | 50 +++++++++++++-------------------------------------
 1 file changed, 13 insertions(+), 37 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index d854c24d5f5a..f033d5b4fefb 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3187,38 +3187,6 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
 	return obj ? obj : fallback_alloc(cachep, flags);
 }
 
-static void *__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid);
-
-static __always_inline void *
-slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size,
-		unsigned long caller)
-{
-	unsigned long save_flags;
-	void *ptr;
-	struct obj_cgroup *objcg = NULL;
-	bool init = false;
-
-	flags &= gfp_allowed_mask;
-	cachep = slab_pre_alloc_hook(cachep, NULL, &objcg, 1, flags);
-	if (unlikely(!cachep))
-		return NULL;
-
-	ptr = kfence_alloc(cachep, orig_size, flags);
-	if (unlikely(ptr))
-		goto out_hooks;
-
-	cache_alloc_debugcheck_before(cachep, flags);
-	local_irq_save(save_flags);
-	ptr = __do_cache_alloc(cachep, flags, nodeid);
-	local_irq_restore(save_flags);
-	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
-	init = slab_want_init_on_alloc(flags, cachep);
-
-out_hooks:
-	slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr, init);
-	return ptr;
-}
-
 static __always_inline void *
 __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
@@ -3267,8 +3235,8 @@ __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid __maybe_unus
 #endif /* CONFIG_NUMA */
 
 static __always_inline void *
-slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
-	   size_t orig_size, unsigned long caller)
+slab_alloc_node(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
+		int nodeid, size_t orig_size, unsigned long caller)
 {
 	unsigned long save_flags;
 	void *objp;
@@ -3286,7 +3254,7 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 
 	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);
-	objp = __do_cache_alloc(cachep, flags, NUMA_NO_NODE);
+	objp = __do_cache_alloc(cachep, flags, nodeid);
 	local_irq_restore(save_flags);
 	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
 	prefetchw(objp);
@@ -3297,6 +3265,14 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 	return objp;
 }
 
+static __always_inline void *
+slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
+	   size_t orig_size, unsigned long caller)
+{
+	return slab_alloc_node(cachep, lru, flags, NUMA_NO_NODE, orig_size,
+			       caller);
+}
+
 /*
  * Caller needs to acquire correct kmem_cache_node's list_lock
  * @list: List of detached free slabs should be freed by caller
@@ -3585,7 +3561,7 @@ EXPORT_SYMBOL(kmem_cache_alloc_trace);
  */
 void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
-	void *ret = slab_alloc_node(cachep, flags, nodeid, cachep->object_size, _RET_IP_);
+	void *ret = slab_alloc_node(cachep, NULL, flags, nodeid, cachep->object_size, _RET_IP_);
 
 	trace_kmem_cache_alloc_node(_RET_IP_, ret,
 				    cachep->object_size, cachep->size,
@@ -3603,7 +3579,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 {
 	void *ret;
 
-	ret = slab_alloc_node(cachep, flags, nodeid, size, _RET_IP_);
+	ret = slab_alloc_node(cachep, NULL, flags, nodeid, size, _RET_IP_);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
 	trace_kmalloc_node(_RET_IP_, ret,
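As a rough userspace analogue of the result (hypothetical names; not
kernel code): one real slow path takes an explicit node, and the
node-less entry point is a forced-inline wrapper, so existing callers
are unaffected and the traced caller address still refers to the real
call site.

#include <stdio.h>
#include <stdlib.h>

#define NODE_ANY (-1)

/* The single slow path, analogous to slab_alloc_node() after this patch. */
static void *slab_alloc_node_sketch(size_t size, int node, void *caller)
{
	(void)node;	/* node policy resolved further down in the sketch */
	fprintf(stderr, "alloc %zu bytes (call site %p)\n", size, caller);
	return malloc(size);
}

/* Thin wrapper, analogous to slab_alloc(): forwards an "any node" request. */
static inline void *slab_alloc_sketch(size_t size, void *caller)
{
	return slab_alloc_node_sketch(size, NODE_ANY, caller);
}

int main(void)
{
	void *p = slab_alloc_sketch(64, __builtin_return_address(0));

	free(p);
	return 0;
}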
From patchwork Thu Apr 14 08:57:07 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 03/23] mm/slab_common: remove CONFIG_NUMA ifdefs for common kmalloc functions
Date: Thu, 14 Apr 2022 17:57:07 +0900
Message-Id: <20220414085727.643099-4-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Now that slab_alloc_node() is available for SLAB even when CONFIG_NUMA=n,
remove the CONFIG_NUMA ifdefs around the common kmalloc functions.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 28 ----------------------------
 mm/slab.c            |  2 --
 mm/slob.c            |  5 +----
 mm/slub.c            |  6 ------
 4 files changed, 1 insertion(+), 40 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 11ceddcae9f4..a3b9d4c20d7e 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -444,38 +444,18 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 	kmem_cache_free_bulk(NULL, size, p);
 }
 
-#ifdef CONFIG_NUMA
 void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment
 							 __alloc_size(1);
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment
 									  __malloc;
-#else
-static __always_inline __alloc_size(1) void *__kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	return __kmalloc(size, flags);
-}
-
-static __always_inline void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node)
-{
-	return kmem_cache_alloc(s, flags);
-}
-#endif
 
 #ifdef CONFIG_TRACING
 extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size)
 				    __assume_slab_alignment __alloc_size(3);
 
-#ifdef CONFIG_NUMA
 extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
 					 int node, size_t size) __assume_slab_alignment
 								__alloc_size(4);
-#else
-static __always_inline __alloc_size(4) void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
-	gfp_t gfpflags, int node, size_t size)
-{
-	return kmem_cache_alloc_trace(s, gfpflags, size);
-}
-#endif /* CONFIG_NUMA */
 
 #else /* CONFIG_TRACING */
 static __always_inline __alloc_size(3) void *kmem_cache_alloc_trace(struct kmem_cache *s,
@@ -689,20 +669,12 @@ static inline __alloc_size(1, 2) void *kcalloc_node(size_t n, size_t size, gfp_t
 }
 
-#ifdef CONFIG_NUMA
 extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
 					 unsigned long caller) __alloc_size(1);
 #define kmalloc_node_track_caller(size, flags, node) \
 	__kmalloc_node_track_caller(size, flags, node, \
 				    _RET_IP_)
-#else /* CONFIG_NUMA */
-
-#define kmalloc_node_track_caller(size, flags, node) \
-	kmalloc_track_caller(size, flags)
-
-#endif /* CONFIG_NUMA */
-
 /*
  * Shortcuts
  */
diff --git a/mm/slab.c b/mm/slab.c
index f033d5b4fefb..5ad55ca96ab6 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3545,7 +3545,6 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
 #endif
 
-#ifdef CONFIG_NUMA
 /**
  * kmem_cache_alloc_node - Allocate an object on the specified node
  * @cachep: The cache to allocate from.
@@ -3619,7 +3618,6 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
 	return __do_kmalloc_node(size, flags, node, caller);
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
-#endif /* CONFIG_NUMA */
 
 #ifdef CONFIG_PRINTK
 void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
diff --git a/mm/slob.c b/mm/slob.c
index dfa6808dff36..c8c3b5662edf 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -534,14 +534,12 @@ void *__kmalloc_track_caller(size_t size, gfp_t gfp, unsigned long caller)
 }
 EXPORT_SYMBOL(__kmalloc_track_caller);
 
-#ifdef CONFIG_NUMA
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfp,
 					int node, unsigned long caller)
 {
 	return __do_kmalloc_node(size, gfp, node, caller);
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
-#endif
 
 void kfree(const void *block)
 {
@@ -641,7 +639,7 @@ void *kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru, gfp_
 	return slob_alloc_node(cachep, flags, NUMA_NO_NODE);
 }
 EXPORT_SYMBOL(kmem_cache_alloc_lru);
-#ifdef CONFIG_NUMA
+
 void *__kmalloc_node(size_t size, gfp_t gfp, int node)
 {
 	return __do_kmalloc_node(size, gfp, node, _RET_IP_);
@@ -653,7 +651,6 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t gfp, int node)
 	return slob_alloc_node(cachep, gfp, node);
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node);
-#endif
 
 static void __kmem_cache_free(void *b, int size)
 {
diff --git a/mm/slub.c b/mm/slub.c
index d7e8355b2f08..e36c148e5069 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3260,7 +3260,6 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
 #endif
 
-#ifdef CONFIG_NUMA
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 {
 	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, s->object_size);
@@ -3287,7 +3286,6 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
 #endif
-#endif /* CONFIG_NUMA */
 
 /*
  * Slow path handling. This may still be called frequently since objects
@@ -4424,7 +4422,6 @@ void *__kmalloc(size_t size, gfp_t flags)
 }
 EXPORT_SYMBOL(__kmalloc);
 
-#ifdef CONFIG_NUMA
 static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
 	struct page *page;
@@ -4471,7 +4468,6 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
-#endif /* CONFIG_NUMA */
 
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
@@ -4929,7 +4925,6 @@ void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
 }
 EXPORT_SYMBOL(__kmalloc_track_caller);
 
-#ifdef CONFIG_NUMA
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 				  int node, unsigned long caller)
 {
@@ -4959,7 +4954,6 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
-#endif
 
 #ifdef CONFIG_SYSFS
 static int count_inuse(struct slab *slab)
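A minimal sketch of the header-level effect, with a hypothetical name
my_alloc_node() (not kernel code): the old !CONFIG_NUMA stub, shown
disabled below, silently dropped the node; after this series one
unconditional declaration and definition serve both configurations.

#include <stdlib.h>

#define NODE_ANY (-1)

void *my_alloc_node(size_t size, int node);	/* now always declared */

#if 0	/* old !CONFIG_NUMA fallback stub, made unnecessary */
static inline void *my_alloc_node(size_t size, int node)
{
	return my_alloc(size);		/* silently ignored the node */
}
#endif

/* One definition handles NODE_ANY itself, NUMA or not. */
void *my_alloc_node(size_t size, int node)
{
	(void)node;			/* !NUMA: only one node exists */
	return malloc(size);
}

int main(void)
{
	free(my_alloc_node(8, NODE_ANY));
	return 0;
}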
From patchwork Thu Apr 14 08:57:08 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 04/23] mm/slab_common: cleanup kmalloc_track_caller()
Date: Thu, 14 Apr 2022 17:57:08 +0900
Message-Id: <20220414085727.643099-5-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Make kmalloc_track_caller() a wrapper of kmalloc_node_track_caller().

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 17 ++++++++---------
 mm/slab.c            |  6 ------
 mm/slob.c            |  6 ------
 mm/slub.c            | 22 ----------------------
 4 files changed, 8 insertions(+), 43 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index a3b9d4c20d7e..acdb4b7428f9 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -639,6 +639,12 @@ static inline __alloc_size(1, 2) void *kcalloc(size_t n, size_t size, gfp_t flag
 	return kmalloc_array(n, size, flags | __GFP_ZERO);
 }
 
+extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
+					 unsigned long caller) __alloc_size(1);
+#define kmalloc_node_track_caller(size, flags, node) \
+	__kmalloc_node_track_caller(size, flags, node, \
+				    _RET_IP_)
+
 /*
  * kmalloc_track_caller is a special version of kmalloc that records the
  * calling function of the routine calling it for slab leak tracking instead
@@ -647,9 +653,9 @@ static inline __alloc_size(1, 2) void *kcalloc(size_t n, size_t size, gfp_t flag
  * allocator where we care about the real place the memory allocation
  * request comes from.
  */
-extern void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller);
 #define kmalloc_track_caller(size, flags) \
-	__kmalloc_track_caller(size, flags, _RET_IP_)
+	__kmalloc_node_track_caller(size, flags, \
+				    NUMA_NO_NODE, _RET_IP_)
 
 static inline __alloc_size(1, 2) void *kmalloc_array_node(size_t n, size_t size, gfp_t flags,
 							  int node)
@@ -668,13 +674,6 @@ static inline __alloc_size(1, 2) void *kcalloc_node(size_t n, size_t size, gfp_t
 	return kmalloc_array_node(n, size, flags | __GFP_ZERO, node);
 }
 
-
-extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
-					 unsigned long caller) __alloc_size(1);
-#define kmalloc_node_track_caller(size, flags, node) \
-	__kmalloc_node_track_caller(size, flags, node, \
-				    _RET_IP_)
-
 /*
  * Shortcuts
  */
diff --git a/mm/slab.c b/mm/slab.c
index 5ad55ca96ab6..5f20efc7a330 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3675,12 +3675,6 @@ void *__kmalloc(size_t size, gfp_t flags)
 }
 EXPORT_SYMBOL(__kmalloc);
 
-void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller)
-{
-	return __do_kmalloc(size, flags, caller);
-}
-EXPORT_SYMBOL(__kmalloc_track_caller);
-
 /**
  * kmem_cache_free - Deallocate an object
  * @cachep: The cache the allocation was from.
diff --git a/mm/slob.c b/mm/slob.c
index c8c3b5662edf..6d0fc6ad1413 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -528,12 +528,6 @@ void *__kmalloc(size_t size, gfp_t gfp)
 }
 EXPORT_SYMBOL(__kmalloc);
 
-void *__kmalloc_track_caller(size_t size, gfp_t gfp, unsigned long caller)
-{
-	return __do_kmalloc_node(size, gfp, NUMA_NO_NODE, caller);
-}
-EXPORT_SYMBOL(__kmalloc_track_caller);
-
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfp,
 					int node, unsigned long caller)
 {
diff --git a/mm/slub.c b/mm/slub.c
index e36c148e5069..e425c5c372de 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4903,28 +4903,6 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 	return 0;
 }
 
-void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
-{
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return kmalloc_large(size, gfpflags);
-
-	s = kmalloc_slab(size, gfpflags);
-
-	if (unlikely(ZERO_OR_NULL_PTR(s)))
-		return s;
-
-	ret = slab_alloc(s, NULL, gfpflags, caller, size);
-
-	/* Honor the call site pointer we received. */
-	trace_kmalloc(caller, ret, size, s->size, gfpflags);
-
-	return ret;
-}
-EXPORT_SYMBOL(__kmalloc_track_caller);
-
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 				  int node, unsigned long caller)
 {
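Why the wrapper must stay a macro can be shown with a small userspace
sketch (hypothetical names; __builtin_return_address() plays the role
of _RET_IP_): the macro expands in the caller's body, so allocations
made by a helper such as my_strdup() below are attributed to
my_strdup()'s caller rather than to my_strdup() itself.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NODE_ANY (-1)

/* One traced implementation, analogous to __kmalloc_node_track_caller(). */
static void *alloc_node_track_caller(size_t size, int node, void *caller)
{
	(void)node;
	fprintf(stderr, "alloc(%zu) on behalf of %p\n", size, caller);
	return malloc(size);
}

/* Like kmalloc_track_caller() after this patch: forward NODE_ANY plus
 * the call site's return address, instead of keeping a second
 * out-of-line implementation. */
#define alloc_track_caller(size) \
	alloc_node_track_caller(size, NODE_ANY, __builtin_return_address(0))

static char *my_strdup(const char *s)
{
	char *p = alloc_track_caller(strlen(s) + 1);

	return p ? strcpy(p, s) : NULL;
}

int main(void)
{
	char *s = my_strdup("hello");

	puts(s);
	free(s);
	return 0;
}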
From patchwork Thu Apr 14 08:57:09 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 05/23] mm/slab_common: cleanup __kmalloc()
Date: Thu, 14 Apr 2022 17:57:09 +0900
Message-Id: <20220414085727.643099-6-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Make __kmalloc() a wrapper of __kmalloc_node().

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 13 ++++++++++---
 mm/slab.c            | 34 ----------------------------------
 mm/slob.c            |  6 ------
 mm/slub.c            | 23 -----------------------
 4 files changed, 10 insertions(+), 66 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index acdb4b7428f9..4c06d15f731c 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -419,7 +419,16 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
 #define kmalloc_index(s) __kmalloc_index(s, true)
 #endif /* !CONFIG_SLOB */
 
-void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_size(1);
+extern void *__kmalloc_node(size_t size, gfp_t flags, int node)
+	__assume_kmalloc_alignment
+	__alloc_size(1);
+
+static __always_inline __alloc_size(1) __assume_kmalloc_alignment
+void *__kmalloc(size_t size, gfp_t flags)
+{
+	return __kmalloc_node(size, flags, NUMA_NO_NODE);
+}
+
 void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags) __assume_slab_alignment __malloc;
 void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru, gfp_t gfpflags)
 			   __assume_slab_alignment __malloc;
@@ -444,8 +453,6 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 	kmem_cache_free_bulk(NULL, size, p);
 }
 
-void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment
-							 __alloc_size(1);
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment
 									  __malloc;
 
diff --git a/mm/slab.c b/mm/slab.c
index 5f20efc7a330..db7eab9e2e9f 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3641,40 +3641,6 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 }
 #endif
 
-/**
- * __do_kmalloc - allocate memory
- * @size: how many bytes of memory are required.
- * @flags: the type of memory to allocate (see kmalloc).
- * @caller: function caller for debug tracking of the caller
- *
- * Return: pointer to the allocated memory or %NULL in case of error
- */
-static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
-					  unsigned long caller)
-{
-	struct kmem_cache *cachep;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return NULL;
-	cachep = kmalloc_slab(size, flags);
-	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
-		return cachep;
-	ret = slab_alloc(cachep, NULL, flags, size, caller);
-
-	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmalloc(caller, ret,
-		      size, cachep->size, flags);
-
-	return ret;
-}
-
-void *__kmalloc(size_t size, gfp_t flags)
-{
-	return __do_kmalloc(size, flags, _RET_IP_);
-}
-EXPORT_SYMBOL(__kmalloc);
-
 /**
  * kmem_cache_free - Deallocate an object
  * @cachep: The cache the allocation was from.
diff --git a/mm/slob.c b/mm/slob.c
index 6d0fc6ad1413..ab67c8219e8d 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -522,12 +522,6 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 	return ret;
 }
 
-void *__kmalloc(size_t size, gfp_t gfp)
-{
-	return __do_kmalloc_node(size, gfp, NUMA_NO_NODE, _RET_IP_);
-}
-EXPORT_SYMBOL(__kmalloc);
-
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfp,
 					int node, unsigned long caller)
 {
diff --git a/mm/slub.c b/mm/slub.c
index e425c5c372de..44170b4f084b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4399,29 +4399,6 @@ static int __init setup_slub_min_objects(char *str)
 
 __setup("slub_min_objects=", setup_slub_min_objects);
 
-void *__kmalloc(size_t size, gfp_t flags)
-{
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return kmalloc_large(size, flags);
-
-	s = kmalloc_slab(size, flags);
-
-	if (unlikely(ZERO_OR_NULL_PTR(s)))
-		return s;
-
-	ret = slab_alloc(s, NULL, flags, _RET_IP_, size);
-
-	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
-
-	ret = kasan_kmalloc(s, ret, size, flags);
-
-	return ret;
-}
-EXPORT_SYMBOL(__kmalloc);
-
 static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
 	struct page *page;
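A compact sketch of the resulting layering (hypothetical names; not
kernel code): the node-less API becomes a header inline, so exactly one
out-of-line implementation remains and both entry points share it.

#include <assert.h>
#include <stdlib.h>

#define NODE_ANY (-1)

static int impl_calls;	/* counts hits on the single implementation */

/* The one out-of-line implementation, analogous to __kmalloc_node(). */
static void *my_alloc_node(size_t size, int node)
{
	(void)node;
	impl_calls++;
	return malloc(size);
}

/* Header-style inline wrapper, analogous to __kmalloc() after this patch. */
static inline void *my_alloc(size_t size)
{
	return my_alloc_node(size, NODE_ANY);
}

int main(void)
{
	free(my_alloc(16));
	free(my_alloc_node(16, 0));
	assert(impl_calls == 2);	/* both entry points share one path */
	return 0;
}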
From patchwork Thu Apr 14 08:57:10 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 06/23] mm/sl[auo]b: fold kmalloc_order_trace() into kmalloc_large()
Date: Thu, 14 Apr 2022 17:57:10 +0900
Message-Id: <20220414085727.643099-7-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

kmalloc_large() is the only caller of kmalloc_order_trace(). Fold it into
kmalloc_large() and remove kmalloc_order{,_trace}(). Also add the
tracepoint to kmalloc_large() that was previously in kmalloc_order_trace().
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
Changes from v1:
- Updated the changelog (kmalloc_order() -> kmalloc_order_trace()).

 include/linux/slab.h | 22 ++--------------------
 mm/slab_common.c     | 14 +++-----------
 2 files changed, 5 insertions(+), 31 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 4c06d15f731c..6f6e22959b39 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -484,26 +484,8 @@ static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, g
 }
 #endif /* CONFIG_TRACING */
 
-extern void *kmalloc_order(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment
-									 __alloc_size(1);
-
-#ifdef CONFIG_TRACING
-extern void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
-				 __assume_page_alignment __alloc_size(1);
-#else
-static __always_inline __alloc_size(1) void *kmalloc_order_trace(size_t size, gfp_t flags,
-								 unsigned int order)
-{
-	return kmalloc_order(size, flags, order);
-}
-#endif
-
-static __always_inline __alloc_size(1) void *kmalloc_large(size_t size, gfp_t flags)
-{
-	unsigned int order = get_order(size);
-
-	return kmalloc_order_trace(size, flags, order);
-}
-
+extern void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment
+						     __alloc_size(1);
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
diff --git a/mm/slab_common.c b/mm/slab_common.c
index c4d63f2c78b8..308cd5449285 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -925,10 +925,11 @@ gfp_t kmalloc_fix_flags(gfp_t flags)
  * directly to the page allocator. We use __GFP_COMP, because we will need to
  * know the allocation order to free the pages properly in kfree.
  */
-void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
+void *kmalloc_large(size_t size, gfp_t flags)
 {
 	void *ret = NULL;
 	struct page *page;
+	unsigned int order = get_order(size);
 
 	if (unlikely(flags & GFP_SLAB_BUG_MASK))
 		flags = kmalloc_fix_flags(flags);
@@ -943,19 +944,10 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 
 	ret = kasan_kmalloc_large(ret, size, flags);
 	/* As ret might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ret, size, 1, flags);
-	return ret;
-}
-EXPORT_SYMBOL(kmalloc_order);
-
-#ifdef CONFIG_TRACING
-void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
-{
-	void *ret = kmalloc_order(size, flags, order);
+	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << order, flags);
 	return ret;
 }
-EXPORT_SYMBOL(kmalloc_order_trace);
-#endif
+EXPORT_SYMBOL(kmalloc_large);
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */
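A userspace sketch of the fold (hypothetical names alloc_large() and
order_for(); not kernel code): the page order is derived inside the
function via a get_order()-style helper, and the trace emission that
used to sit in the single-caller *_trace() helper now happens inline.

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096UL

/* Rough analogue of get_order(): smallest n with PAGE_SIZE << n >= size. */
static unsigned int order_for(size_t size)
{
	unsigned int order = 0;

	while ((PAGE_SIZE << order) < size)
		order++;
	return order;
}

static void *alloc_large(size_t size)
{
	unsigned int order = order_for(size);
	void *ret = aligned_alloc(PAGE_SIZE, PAGE_SIZE << order);

	/* tracepoint folded in, like trace_kmalloc() in kmalloc_large() */
	fprintf(stderr, "alloc_large: req=%zu alloc=%lu\n",
		size, PAGE_SIZE << order);
	return ret;
}

int main(void)
{
	free(alloc_large(3 * PAGE_SIZE + 1));	/* rounds up to order 2 */
	return 0;
}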
From patchwork Thu Apr 14 08:57:11 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 07/23] mm/slub: move kmalloc_large_node() to slab_common.c
Date: Thu, 14 Apr 2022 17:57:11 +0900
Message-Id: <20220414085727.643099-8-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

In a later patch, SLAB will also pass requests larger than an order-1
page to the page allocator. Move kmalloc_large_node() to slab_common.c.

Fold kmalloc_large_node_hook() into kmalloc_large_node(), as it has no
other caller.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h |  3 +++
 mm/slab_common.c     | 22 ++++++++++++++++++++++
 mm/slub.c            | 25 -------------------------
 3 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 6f6e22959b39..97336acbebbf 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -486,6 +486,9 @@ static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, g
 
 extern void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment
 						     __alloc_size(1);
+
+extern void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+				__assume_page_alignment __alloc_size(1);
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 308cd5449285..e72089515030 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -949,6 +949,28 @@ void *kmalloc_large(size_t size, gfp_t flags)
 }
 EXPORT_SYMBOL(kmalloc_large);
 
+void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+{
+	struct page *page;
+	void *ptr = NULL;
+	unsigned int order = get_order(size);
+
+	flags |= __GFP_COMP;
+	page = alloc_pages_node(node, flags, order);
+	if (page) {
+		ptr = page_address(page);
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+				      PAGE_SIZE << order);
+	}
+
+	ptr = kasan_kmalloc_large(ptr, size, flags);
+	/* As ptr might get tagged, call kmemleak hook after KASAN. */
+	/* As ptr might get tagged, call kmemleak hook after KASAN. */
+	kmemleak_alloc(ptr, size, 1, flags);
+
+	return ptr;
+}
+EXPORT_SYMBOL(kmalloc_large_node);
+
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */
 static void freelist_randomize(struct rnd_state *state, unsigned int *list,
diff --git a/mm/slub.c b/mm/slub.c
index 44170b4f084b..640712706f2b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1679,14 +1679,6 @@ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab,
  * Hooks for other subsystems that check memory allocations. In a typical
  * production configuration these hooks all should produce no code at all.
  */
-static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
-{
-	ptr = kasan_kmalloc_large(ptr, size, flags);
-	/* As ptr might get tagged, call kmemleak hook after KASAN. */
-	kmemleak_alloc(ptr, size, 1, flags);
-	return ptr;
-}
-
 static __always_inline void kfree_hook(void *x)
 {
 	kmemleak_free(x);
@@ -4399,23 +4391,6 @@ static int __init setup_slub_min_objects(char *str)
 
 __setup("slub_min_objects=", setup_slub_min_objects);
 
-static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
-{
-	struct page *page;
-	void *ptr = NULL;
-	unsigned int order = get_order(size);
-
-	flags |= __GFP_COMP;
-	page = alloc_pages_node(node, flags, order);
-	if (page) {
-		ptr = page_address(page);
-		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
-				      PAGE_SIZE << order);
-	}
-
-	return kmalloc_large_node_hook(ptr, size, flags);
-}
-
 void *__kmalloc_node(size_t size, gfp_t flags, int node)
 {
 	struct kmem_cache *s;

From patchwork Thu Apr 14 08:57:13 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12813161
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 09/23] mm/slab_common: cleanup kmalloc_large()
Date: Thu, 14 Apr 2022 17:57:13 +0900
Message-Id: <20220414085727.643099-10-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Now that kmalloc_large() and kmalloc_large_node() do the same job,
make kmalloc_large() a wrapper of kmalloc_large_node().
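For illustration, a minimal hypothetical caller (not part of this
patch; the function name below is made up) showing what the cleanup
means at call sites:

    #include <linux/gfp.h>
    #include <linux/slab.h>
    #include <linux/topology.h>

    /* Hypothetical caller, for illustration only: after this patch both
     * calls below end up in kmalloc_large_node(); kmalloc_large() merely
     * supplies NUMA_NO_NODE, so only one out-of-line implementation and
     * one export remain.
     */
    static void *example_alloc(void)
    {
        void *a = kmalloc_large(8 * PAGE_SIZE, GFP_KERNEL);

        kfree(a);
        return kmalloc_large_node(8 * PAGE_SIZE, GFP_KERNEL, numa_node_id());
    }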
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/linux/slab.h |  9 ++++++---
 mm/slab_common.c     | 24 ------------------------
 2 files changed, 6 insertions(+), 27 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 97336acbebbf..143830f57a7f 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -484,11 +484,14 @@ static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, g
 }
 #endif /* CONFIG_TRACING */
 
-extern void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment
-						     __alloc_size(1);
-
 extern void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 			__assume_page_alignment __alloc_size(1);
+
+static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
+{
+	return kmalloc_large_node(size, flags, NUMA_NO_NODE);
+}
+
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
diff --git a/mm/slab_common.c b/mm/slab_common.c
index cf17be8cd9ad..30684efc89d7 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -925,30 +925,6 @@ gfp_t kmalloc_fix_flags(gfp_t flags)
  * directly to the page allocator. We use __GFP_COMP, because we will need to
  * know the allocation order to free the pages properly in kfree.
  */
-void *kmalloc_large(size_t size, gfp_t flags)
-{
-	void *ret = NULL;
-	struct page *page;
-	unsigned int order = get_order(size);
-
-	if (unlikely(flags & GFP_SLAB_BUG_MASK))
-		flags = kmalloc_fix_flags(flags);
-
-	flags |= __GFP_COMP;
-	page = alloc_pages(flags, order);
-	if (likely(page)) {
-		ret = page_address(page);
-		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
-				      PAGE_SIZE << order);
-	}
-	ret = kasan_kmalloc_large(ret, size, flags);
-	/* As ret might get tagged, call kmemleak hook after KASAN. */
-	kmemleak_alloc(ret, size, 1, flags);
-	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << order, flags);
-	return ret;
-}
-EXPORT_SYMBOL(kmalloc_large);
-
 void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
 	struct page *page;

From patchwork Thu Apr 14 08:57:14 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12813162
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 10/23] mm/slab_common: cleanup kmem_cache_alloc{,node,lru}
Date: Thu, 14 Apr 2022 17:57:14 +0900
Message-Id: <20220414085727.643099-11-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Implement only __kmem_cache_alloc_node() in the slab allocators and
make kmem_cache_alloc{,_node,_lru} wrappers of it. Now that
kmem_cache_alloc{,_node,_lru} are inline functions, use _THIS_IP_
instead of _RET_IP_ for consistency.
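To make the new call graph concrete, a hypothetical usage sketch (the
caller code below is not part of the patch):

    #include <linux/gfp.h>
    #include <linux/slab.h>
    #include <linux/list_lru.h>

    /* Hypothetical callers, for illustration only: all three wrappers
     * are now inline and funnel into the allocator's single
     * __kmem_cache_alloc_node(); only the lru and node arguments differ,
     * and _THIS_IP_ records the inlined call site for tracing.
     */
    static void example(struct kmem_cache *cache, struct list_lru *lru)
    {
        void *a = kmem_cache_alloc(cache, GFP_KERNEL);          /* lru=NULL, NUMA_NO_NODE */
        void *b = kmem_cache_alloc_node(cache, GFP_KERNEL, 0);  /* lru=NULL, node=0 */
        void *c = kmem_cache_alloc_lru(cache, lru, GFP_KERNEL); /* lru set, NUMA_NO_NODE */

        kmem_cache_free(cache, a);
        kmem_cache_free(cache, b);
        kmem_cache_free(cache, c);
    }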
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 52 ++++++++++++++++++++++++++++++-----
 mm/slab.c            | 61 +++++---------------------------------
 mm/slob.c            | 27 ++++++--------------
 mm/slub.c            | 35 +++++--------------------
 4 files changed, 67 insertions(+), 108 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 143830f57a7f..1b5bdcb0fd31 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -429,9 +429,52 @@ void *__kmalloc(size_t size, gfp_t flags)
 	return __kmalloc_node(size, flags, NUMA_NO_NODE);
 }
 
-void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags) __assume_slab_alignment __malloc;
-void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
-			   gfp_t gfpflags) __assume_slab_alignment __malloc;
+
+void *__kmem_cache_alloc_node(struct kmem_cache *s, struct list_lru *lru,
+			      gfp_t gfpflags, int node, unsigned long caller __maybe_unused)
+			      __assume_slab_alignment __malloc;
+
+/**
+ * kmem_cache_alloc - Allocate an object
+ * @cachep: The cache to allocate from.
+ * @flags: See kmalloc().
+ *
+ * Allocate an object from this cache. The flags are only relevant
+ * if the cache has no available objects.
+ *
+ * Return: pointer to the new object or %NULL in case of error
+ */
+static __always_inline __malloc
+void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags)
+{
+	return __kmem_cache_alloc_node(s, NULL, flags, NUMA_NO_NODE, _THIS_IP_);
+}
+
+/**
+ * kmem_cache_alloc_node - Allocate an object on the specified node
+ * @s: The cache to allocate from.
+ * @flags: See kmalloc().
+ * @node: node number of the target node.
+ *
+ * Identical to kmem_cache_alloc but it will allocate memory on the given
+ * node, which can improve the performance for cpu bound structures.
+ *
+ * Fallback to other node is possible if __GFP_THISNODE is not set.
+ *
+ * Return: pointer to the new object or %NULL in case of error
+ */
+static __always_inline __malloc
+void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node)
+{
+	return __kmem_cache_alloc_node(s, NULL, flags, node, _THIS_IP_);
+}
+
+static __always_inline __malloc
+void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru, gfp_t gfpflags)
+{
+	return __kmem_cache_alloc_node(s, lru, gfpflags, NUMA_NO_NODE, _THIS_IP_);
+}
+
 void kmem_cache_free(struct kmem_cache *s, void *objp);
 
 /*
@@ -453,9 +496,6 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 	kmem_cache_free_bulk(NULL, size, p);
 }
 
-void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment
-									  __malloc;
-
 #ifdef CONFIG_TRACING
 extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size)
 				    __assume_slab_alignment __alloc_size(3);
diff --git a/mm/slab.c b/mm/slab.c
index db7eab9e2e9f..c5ffe54c207a 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3442,40 +3442,18 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
 	__free_one(ac, objp);
 }
 
-static __always_inline
-void *__kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru,
-			     gfp_t flags)
+void *__kmem_cache_alloc_node(struct kmem_cache *cachep, struct list_lru *lru,
+			      gfp_t flags, int nodeid, unsigned long caller)
 {
-	void *ret = slab_alloc(cachep, lru, flags, cachep->object_size, _RET_IP_);
+	void *ret = slab_alloc_node(cachep, lru, flags, nodeid,
+				    cachep->object_size, caller);
 
-	trace_kmem_cache_alloc(_RET_IP_, ret,
-			       cachep->object_size, cachep->size, flags);
+	trace_kmem_cache_alloc_node(caller, ret, cachep->object_size,
+				    cachep->size, flags, nodeid);
 
 	return ret;
 }
-
-/**
- * kmem_cache_alloc - Allocate an object
- * @cachep: The cache to allocate from.
- * @flags: See kmalloc().
- *
- * Allocate an object from this cache. The flags are only relevant
- * if the cache has no available objects.
- *
- * Return: pointer to the new object or %NULL in case of error
- */
-void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
-{
-	return __kmem_cache_alloc_lru(cachep, NULL, flags);
-}
-EXPORT_SYMBOL(kmem_cache_alloc);
-
-void *kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru,
-			   gfp_t flags)
-{
-	return __kmem_cache_alloc_lru(cachep, lru, flags);
-}
-EXPORT_SYMBOL(kmem_cache_alloc_lru);
+EXPORT_SYMBOL(__kmem_cache_alloc_node);
 
 static __always_inline void
 cache_alloc_debugcheck_after_bulk(struct kmem_cache *s, gfp_t flags,
@@ -3545,31 +3523,6 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
 #endif
 
-/**
- * kmem_cache_alloc_node - Allocate an object on the specified node
- * @cachep: The cache to allocate from.
- * @flags: See kmalloc().
- * @nodeid: node number of the target node.
- *
- * Identical to kmem_cache_alloc but it will allocate memory on the given
- * node, which can improve the performance for cpu bound structures.
- *
- * Fallback to other node is possible if __GFP_THISNODE is not set.
- *
- * Return: pointer to the new object or %NULL in case of error
- */
-void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
-{
-	void *ret = slab_alloc_node(cachep, NULL, flags, nodeid, cachep->object_size, _RET_IP_);
-
-	trace_kmem_cache_alloc_node(_RET_IP_, ret,
-				    cachep->object_size, cachep->size,
-				    flags, nodeid);
-
-	return ret;
-}
-EXPORT_SYMBOL(kmem_cache_alloc_node);
-
 #ifdef CONFIG_TRACING
 void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 				  gfp_t flags,
diff --git a/mm/slob.c b/mm/slob.c
index ab67c8219e8d..6c7c30845056 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -586,7 +586,8 @@ int __kmem_cache_create(struct kmem_cache *c, slab_flags_t flags)
 	return 0;
 }
 
-static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
+static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node,
+			     unsigned long caller)
 {
 	void *b;
 
@@ -596,12 +597,12 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
 
 	if (c->size < PAGE_SIZE) {
 		b = slob_alloc(c->size, flags, c->align, node, 0);
-		trace_kmem_cache_alloc_node(_RET_IP_, b, c->object_size,
+		trace_kmem_cache_alloc_node(caller, b, c->object_size,
 					    SLOB_UNITS(c->size) * SLOB_UNIT,
 					    flags, node);
 	} else {
 		b = slob_new_pages(flags, get_order(c->size), node);
-		trace_kmem_cache_alloc_node(_RET_IP_, b, c->object_size,
+		trace_kmem_cache_alloc_node(caller, b, c->object_size,
 					    PAGE_SIZE << get_order(c->size),
 					    flags, node);
 	}
@@ -615,30 +616,18 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
 	return b;
 }
 
-void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
-{
-	return slob_alloc_node(cachep, flags, NUMA_NO_NODE);
-}
-EXPORT_SYMBOL(kmem_cache_alloc);
-
-
-void *kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags)
-{
-	return slob_alloc_node(cachep, flags, NUMA_NO_NODE);
-}
-EXPORT_SYMBOL(kmem_cache_alloc_lru);
-
 void *__kmalloc_node(size_t size, gfp_t gfp, int node)
 {
 	return __do_kmalloc_node(size, gfp, node, _RET_IP_);
 }
 EXPORT_SYMBOL(__kmalloc_node);
 
-void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t gfp, int node)
+void *__kmem_cache_alloc_node(struct kmem_cache *cachep, struct list_lru *lru __maybe_unused,
+			      gfp_t gfp, int node, unsigned long caller __maybe_unused)
 {
-	return slob_alloc_node(cachep, gfp, node);
+	return slob_alloc_node(cachep, gfp, node, caller);
 }
-EXPORT_SYMBOL(kmem_cache_alloc_node);
+EXPORT_SYMBOL(__kmem_cache_alloc_node);
 
 static void __kmem_cache_free(void *b, int size)
 {
diff --git a/mm/slub.c b/mm/slub.c
index f10a892f1772..2a2be2a8a5d0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3216,30 +3216,6 @@ static __always_inline void *slab_alloc(struct kmem_cache *s, struct list_lru *l
 	return slab_alloc_node(s, lru, gfpflags, NUMA_NO_NODE, addr, orig_size);
 }
 
-static __always_inline
-void *__kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
-			     gfp_t gfpflags)
-{
-	void *ret = slab_alloc(s, lru, gfpflags, _RET_IP_, s->object_size);
-
-	trace_kmem_cache_alloc(_RET_IP_, ret, s->object_size,
-			       s->size, gfpflags);
-
-	return ret;
-}
-
-void *kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags)
-{
-	return __kmem_cache_alloc_lru(s, NULL, gfpflags);
-}
-EXPORT_SYMBOL(kmem_cache_alloc);
-
-void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
-			   gfp_t gfpflags)
-{
-	return __kmem_cache_alloc_lru(s, lru, gfpflags);
-}
-EXPORT_SYMBOL(kmem_cache_alloc_lru);
 
 #ifdef CONFIG_TRACING
 void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
@@ -3252,16 +3228,17 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
 #endif
 
-void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
+void *__kmem_cache_alloc_node(struct kmem_cache *s, struct list_lru *lru, gfp_t gfpflags,
+			      int node, unsigned long caller __maybe_unused)
 {
-	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, s->object_size);
+	void *ret = slab_alloc_node(s, lru, gfpflags, node, caller, s->object_size);
 
-	trace_kmem_cache_alloc_node(_RET_IP_, ret,
-				    s->object_size, s->size, gfpflags, node);
+	trace_kmem_cache_alloc_node(caller, ret, s->object_size,
+				    s->size, gfpflags, node);
 
 	return ret;
 }
-EXPORT_SYMBOL(kmem_cache_alloc_node);
+EXPORT_SYMBOL(__kmem_cache_alloc_node);
 
 #ifdef CONFIG_TRACING
 void *kmem_cache_alloc_node_trace(struct kmem_cache *s,

From patchwork Thu Apr 14 08:57:15 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12813163
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 11/23] mm/slab_common: kmalloc_node: pass large requests to page allocator
Date: Thu, 14 Apr 2022 17:57:15 +0900
Message-Id: <20220414085727.643099-12-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Now that kmalloc_large_node() is in common code, pass large requests
to the page allocator from kmalloc_node(), using kmalloc_large_node().
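As an illustration, a hypothetical caller (not from the patch),
assuming SLUB with 4K pages, where KMALLOC_MAX_CACHE_SIZE is two
pages (8KB):

    #include <linux/gfp.h>
    #include <linux/slab.h>

    /* Hypothetical caller, for illustration only. The size is a
     * compile-time constant larger than KMALLOC_MAX_CACHE_SIZE, so
     * kmalloc_node() now resolves directly to kmalloc_large_node()
     * instead of going through the kmalloc cache lookup.
     */
    static void *example(int node)
    {
        return kmalloc_node(4 * PAGE_SIZE, GFP_KERNEL, node); /* 16KB request */
    }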
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 1b5bdcb0fd31..eb457f20f415 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -608,23 +608,35 @@ static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
 	return __kmalloc(size, flags);
 }
 
+#ifndef CONFIG_SLOB
 static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
 {
-#ifndef CONFIG_SLOB
-	if (__builtin_constant_p(size) &&
-		size <= KMALLOC_MAX_CACHE_SIZE) {
-		unsigned int i = kmalloc_index(size);
+	if (__builtin_constant_p(size)) {
+		unsigned int index;
 
-		if (!i)
+		if (size > KMALLOC_MAX_CACHE_SIZE)
+			return kmalloc_large_node(size, flags, node);
+
+		index = kmalloc_index(size);
+
+		if (!index)
 			return ZERO_SIZE_PTR;
 
 		return kmem_cache_alloc_node_trace(
-				kmalloc_caches[kmalloc_type(flags)][i],
+				kmalloc_caches[kmalloc_type(flags)][index],
 				flags, node, size);
 	}
-#endif
 	return __kmalloc_node(size, flags, node);
 }
+#else
+static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
+		return kmalloc_large_node(size, flags, node);
+
+	return __kmalloc_node(size, flags, node);
+}
+#endif
 
 /**
  * kmalloc_array - allocate memory for an array.

From patchwork Thu Apr 14 08:57:16 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12813164
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 12/23] mm/slab_common: cleanup kmalloc()
Date: Thu, 14 Apr 2022 17:57:16 +0900
Message-Id: <20220414085727.643099-13-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Now that kmalloc() and kmalloc_node() do the same job, make kmalloc()
a wrapper of kmalloc_node(). Remove kmem_cache_alloc_trace(), which is
now unused.
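A hypothetical sketch (not part of the patch) of what the wrapper
means for callers:

    #include <linux/gfp.h>
    #include <linux/slab.h>

    /* Hypothetical example, for illustration only: the two calls are
     * now equivalent, since kmalloc() is an inline wrapper passing
     * NUMA_NO_NODE to kmalloc_node().
     */
    static void example(void)
    {
        void *p = kmalloc(64, GFP_KERNEL);
        void *q = kmalloc_node(64, GFP_KERNEL, NUMA_NO_NODE);

        kfree(p);
        kfree(q);
    }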
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 93 +++++++++++++++-----------------------------
 mm/slab.c            | 16 --------
 mm/slub.c            | 12 ------
 3 files changed, 32 insertions(+), 89 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index eb457f20f415..ea168f8a248d 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -497,23 +497,10 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 }
 
 #ifdef CONFIG_TRACING
-extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size)
-				    __assume_slab_alignment __alloc_size(3);
-
 extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
 					 int node, size_t size) __assume_slab_alignment
 					 __alloc_size(4);
-
 #else /* CONFIG_TRACING */
-static __always_inline __alloc_size(3) void *kmem_cache_alloc_trace(struct kmem_cache *s,
-								    gfp_t flags, size_t size)
-{
-	void *ret = kmem_cache_alloc(s, flags);
-
-	ret = kasan_kmalloc(s, ret, size, flags);
-	return ret;
-}
-
 static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
 							  int node, size_t size)
 {
@@ -532,6 +519,37 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
 	return kmalloc_large_node(size, flags, NUMA_NO_NODE);
 }
 
+#ifndef CONFIG_SLOB
+static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	if (__builtin_constant_p(size)) {
+		unsigned int index;
+
+		if (size > KMALLOC_MAX_CACHE_SIZE)
+			return kmalloc_large_node(size, flags, node);
+
+		index = kmalloc_index(size);
+
+		if (!index)
+			return ZERO_SIZE_PTR;
+
+		return kmem_cache_alloc_node_trace(
+				kmalloc_caches[kmalloc_type(flags)][index],
+				flags, node, size);
+	}
+	return __kmalloc_node(size, flags, node);
+}
+#else
+static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
+		return kmalloc_large_node(size, flags, node);
+
+	return __kmalloc_node(size, flags, node);
+}
+#endif
+
+
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
@@ -588,55 +606,8 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
  */
 static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
 {
-	if (__builtin_constant_p(size)) {
-#ifndef CONFIG_SLOB
-		unsigned int index;
-#endif
-		if (size > KMALLOC_MAX_CACHE_SIZE)
-			return kmalloc_large(size, flags);
-#ifndef CONFIG_SLOB
-		index = kmalloc_index(size);
-
-		if (!index)
-			return ZERO_SIZE_PTR;
-
-		return kmem_cache_alloc_trace(
-				kmalloc_caches[kmalloc_type(flags)][index],
-				flags, size);
-#endif
-	}
-	return __kmalloc(size, flags);
-}
-
-#ifndef CONFIG_SLOB
-static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	if (__builtin_constant_p(size)) {
-		unsigned int index;
-
-		if (size > KMALLOC_MAX_CACHE_SIZE)
-			return kmalloc_large_node(size, flags, node);
-
-		index = kmalloc_index(size);
-
-		if (!index)
-			return ZERO_SIZE_PTR;
-
-		return kmem_cache_alloc_node_trace(
-				kmalloc_caches[kmalloc_type(flags)][index],
-				flags, node, size);
-	}
-	return __kmalloc_node(size, flags, node);
-}
-#else
-static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
-		return kmalloc_large_node(size, flags, node);
-
-	return __kmalloc_node(size, flags, node);
+	return kmalloc_node(size, flags, NUMA_NO_NODE);
 }
-#endif
 
 /**
  * kmalloc_array - allocate memory for an array.
diff --git a/mm/slab.c b/mm/slab.c
index c5ffe54c207a..b0aaca017f42 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3507,22 +3507,6 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_bulk);
 
-#ifdef CONFIG_TRACING
-void *
-kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
-{
-	void *ret;
-
-	ret = slab_alloc(cachep, NULL, flags, size, _RET_IP_);
-
-	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmalloc(_RET_IP_, ret,
-		      size, cachep->size, flags);
-	return ret;
-}
-EXPORT_SYMBOL(kmem_cache_alloc_trace);
-#endif
-
 #ifdef CONFIG_TRACING
 void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 				  gfp_t flags,
diff --git a/mm/slub.c b/mm/slub.c
index 2a2be2a8a5d0..892988990da7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3216,18 +3216,6 @@ static __always_inline void *slab_alloc(struct kmem_cache *s, struct list_lru *l
 	return slab_alloc_node(s, lru, gfpflags, NUMA_NO_NODE, addr, orig_size);
 }
 
-
-#ifdef CONFIG_TRACING
-void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
-{
-	void *ret = slab_alloc(s, NULL, gfpflags, _RET_IP_, size);
-	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
-	ret = kasan_kmalloc(s, ret, size, gfpflags);
-	return ret;
-}
-EXPORT_SYMBOL(kmem_cache_alloc_trace);
-#endif
-
 void *__kmem_cache_alloc_node(struct kmem_cache *s, struct list_lru *lru, gfp_t gfpflags,
 			      int node, unsigned long caller __maybe_unused)
 {

From patchwork Thu Apr 14 08:57:17 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12813165
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 13/23] mm/slab: kmalloc: pass requests larger than order-1 page to page allocator
Date: Thu, 14 Apr 2022 17:57:17 +0900
Message-Id: <20220414085727.643099-14-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

There is not much benefit to serving large objects from kmalloc()'s
caches. Pass large requests to the page allocator, as SLUB already
does, for better maintenance of the common code.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
Changes from previous series (thanks to Vlastimil):
- Disable/enable irqs around free_large_kmalloc()
- Do not lose NUMA locality in __do_kmalloc
- Some style fixes (use slab->slab_cache instead of virt_to_cache)
- Remove unsupported sizes in __kmalloc_index

Changes from v1:
- Instead of defining a variable x, just cast to (void *) while
  calling free_large_kmalloc().

 include/linux/slab.h | 23 +++++------------------
 mm/slab.c            | 44 ++++++++++++++++++++++++++++--------------
 mm/slab.h            |  3 +++
 mm/slab_common.c     | 25 ++++++++++++++++++-------
 mm/slub.c            | 19 -------------------
 5 files changed, 56 insertions(+), 58 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index ea168f8a248d..c8c82087c3f9 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -231,27 +231,17 @@ void kmem_dump_obj(void *object);
 
 #ifdef CONFIG_SLAB
 /*
- * The largest kmalloc size supported by the SLAB allocators is
- * 32 megabyte (2^25) or the maximum allocatable page order if that is
- * less than 32 MB.
- *
- * WARNING: Its not easy to increase this value since the allocators have
- * to do various tricks to work around compiler limitations in order to
- * ensure proper constant folding.
+ * SLAB and SLUB directly allocates requests fitting in to an order-1 page
+ * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
  */
-#define KMALLOC_SHIFT_HIGH	((MAX_ORDER + PAGE_SHIFT - 1) <= 25 ? \
-				(MAX_ORDER + PAGE_SHIFT - 1) : 25)
-#define KMALLOC_SHIFT_MAX	KMALLOC_SHIFT_HIGH
+#define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)
+#define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
 #ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW	5
 #endif
 #endif
 
 #ifdef CONFIG_SLUB
-/*
- * SLUB directly allocates requests fitting in to an order-1 page
- * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
- */
 #define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)
 #define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
 #ifndef KMALLOC_SHIFT_LOW
@@ -403,10 +393,6 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
 	if (size <= 512 * 1024) return 19;
 	if (size <= 1024 * 1024) return 20;
 	if (size <=  2 * 1024 * 1024) return 21;
-	if (size <=  4 * 1024 * 1024) return 22;
-	if (size <=  8 * 1024 * 1024) return 23;
-	if (size <= 16 * 1024 * 1024) return 24;
-	if (size <= 32 * 1024 * 1024) return 25;
 
 	if (!IS_ENABLED(CONFIG_PROFILE_ALL_BRANCHES) && size_is_constant)
 		BUILD_BUG_ON_MSG(1, "unexpected size in kmalloc_index()");
@@ -416,6 +402,7 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
 	/* Will never be reached. Needed because the compiler may complain */
 	return -1;
 }
+static_assert(PAGE_SHIFT <= 20);
 #define kmalloc_index(s) __kmalloc_index(s, true)
 #endif /* !CONFIG_SLOB */
 
diff --git a/mm/slab.c b/mm/slab.c
index b0aaca017f42..1dfe0f9d5882 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3533,7 +3533,7 @@ __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
 	void *ret;
 
 	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return NULL;
+		return kmalloc_large_node(size, flags, node);
 	cachep = kmalloc_slab(size, flags);
 	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
 		return cachep;
@@ -3607,15 +3607,25 @@ void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
 {
 	struct kmem_cache *s;
 	size_t i;
+	struct folio *folio;
 
 	local_irq_disable();
 	for (i = 0; i < size; i++) {
 		void *objp = p[i];
 
-		if (!orig_s) /* called via kfree_bulk */
-			s = virt_to_cache(objp);
-		else
+		if (!orig_s) {
+			folio = virt_to_folio(objp);
+			/* called via kfree_bulk */
+			if (!folio_test_slab(folio)) {
+				local_irq_enable();
+				free_large_kmalloc(folio, objp);
+				local_irq_disable();
+				continue;
+			}
+			s = folio_slab(folio)->slab_cache;
+		} else
 			s = cache_from_obj(orig_s, objp);
+
 		if (!s)
 			continue;
@@ -3644,20 +3654,24 @@ void kfree(const void *objp)
 {
 	struct kmem_cache *c;
 	unsigned long flags;
+	struct folio *folio;
 
 	trace_kfree(_RET_IP_, objp);
 
 	if (unlikely(ZERO_OR_NULL_PTR(objp)))
 		return;
-	local_irq_save(flags);
-	kfree_debugcheck(objp);
-	c = virt_to_cache(objp);
-	if (!c) {
-		local_irq_restore(flags);
+
+	folio = virt_to_folio(objp);
+	if (!folio_test_slab(folio)) {
+		free_large_kmalloc(folio, (void *)objp);
 		return;
 	}
-	debug_check_no_locks_freed(objp, c->object_size);
 
+	c = folio_slab(folio)->slab_cache;
+
+	local_irq_save(flags);
+	kfree_debugcheck(objp);
+	debug_check_no_locks_freed(objp, c->object_size);
 	debug_check_no_obj_freed(objp, c->object_size);
 	__cache_free(c, (void *)objp, _RET_IP_);
 	local_irq_restore(flags);
@@ -4079,15 +4093,17 @@ void __check_heap_object(const void *ptr, unsigned long n,
 size_t __ksize(const void *objp)
 {
 	struct kmem_cache *c;
-	size_t size;
+	struct folio *folio;
 
 	BUG_ON(!objp);
 	if (unlikely(objp == ZERO_SIZE_PTR))
 		return 0;
 
-	c = virt_to_cache(objp);
-	size = c ? c->object_size : 0;
+	folio = virt_to_folio(objp);
+	if (!folio_test_slab(folio))
+		return folio_size(folio);
 
-	return size;
+	c = folio_slab(folio)->slab_cache;
+	return c->object_size;
 }
 EXPORT_SYMBOL(__ksize);
diff --git a/mm/slab.h b/mm/slab.h
index f7d018100994..b864c5bc4c25 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -681,6 +681,9 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
 	print_tracking(cachep, x);
 	return cachep;
 }
+
+void free_large_kmalloc(struct folio *folio, void *object);
+
 #endif /* CONFIG_SLOB */
 
 static inline size_t slab_ksize(const struct kmem_cache *s)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 30684efc89d7..960cc07c3a91 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -764,8 +764,8 @@ struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
 
 /*
  * kmalloc_info[] is to make slub_debug=,kmalloc-xx option work at boot time.
- * kmalloc_index() supports up to 2^25=32MB, so the final entry of the table is
- * kmalloc-32M.
+ * kmalloc_index() supports up to 2^21=2MB, so the final entry of the table is
+ * kmalloc-2M.
  */
 const struct kmalloc_info_struct kmalloc_info[] __initconst = {
 	INIT_KMALLOC_INFO(0, 0),
@@ -789,11 +789,7 @@ const struct kmalloc_info_struct kmalloc_info[] __initconst = {
 	INIT_KMALLOC_INFO(262144, 256k),
 	INIT_KMALLOC_INFO(524288, 512k),
 	INIT_KMALLOC_INFO(1048576, 1M),
-	INIT_KMALLOC_INFO(2097152, 2M),
-	INIT_KMALLOC_INFO(4194304, 4M),
-	INIT_KMALLOC_INFO(8388608, 8M),
-	INIT_KMALLOC_INFO(16777216, 16M),
-	INIT_KMALLOC_INFO(33554432, 32M)
+	INIT_KMALLOC_INFO(2097152, 2M)
 };
 
 /*
@@ -906,6 +902,21 @@ void __init create_kmalloc_caches(slab_flags_t flags)
 	/* Kmalloc array is now usable */
 	slab_state = UP;
 }
+
+void free_large_kmalloc(struct folio *folio, void *object)
+{
+	unsigned int order = folio_order(folio);
+
+	if (WARN_ON_ONCE(order == 0))
+		pr_warn_once("object pointer: 0x%p\n", object);
+
+	kmemleak_free(object);
+	kasan_kfree_large(object);
+
+	mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
+			      -(PAGE_SIZE << order));
+	__free_pages(folio_page(folio, 0), order);
+}
 #endif /* !CONFIG_SLOB */
 
 gfp_t kmalloc_fix_flags(gfp_t flags)
diff --git a/mm/slub.c b/mm/slub.c
index 892988990da7..1dc9e8eebb62 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1679,12 +1679,6 @@ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab,
  * Hooks for other subsystems that check memory allocations. In a typical
  * production configuration these hooks all should produce no code at all.
  */
-static __always_inline void kfree_hook(void *x)
-{
-	kmemleak_free(x);
-	kasan_kfree_large(x);
-}
-
 static __always_inline bool slab_free_hook(struct kmem_cache *s,
 					   void *x, bool init)
 {
@@ -3490,19 +3484,6 @@ struct detached_freelist {
 	struct kmem_cache *s;
 };
 
-static inline void free_large_kmalloc(struct folio *folio, void *object)
-{
-	unsigned int order = folio_order(folio);
-
-	if (WARN_ON_ONCE(order == 0))
-		pr_warn_once("object pointer: 0x%p\n", object);
-
-	kfree_hook(object);
-	mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
-			      -(PAGE_SIZE << order));
-	__free_pages(folio_page(folio, 0), order);
-}
-
 /*
  * This function progressively scans the array with free objects (with
  * a limited look ahead) and extract objects belonging to the same

From patchwork Thu Apr 14 08:57:18 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12813166
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 14/23] mm/slab_common: print cache name in tracepoints
Date: Thu, 14 Apr 2022 17:57:18 +0900
Message-Id: <20220414085727.643099-15-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Print the cache name in tracepoints. If there is no corresponding
cache (kmalloc in SLOB, or kmalloc_large_node), use the
KMALLOC_{,LARGE_}NAME macros.
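To make the resulting change in trace output concrete: the
kmem_alloc_node event class's format string changes from

    call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s node=%d

to

    name=%s call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s node=%d

so a rendered trace line would look roughly like the following (the
values and call site are hypothetical, for illustration only):

    kmem_cache_alloc_node: name=kmalloc-64 call_site=some_caller+0x1c/0x40 ptr=00000000deadbeef bytes_req=64 bytes_alloc=64 gfp_flags=GFP_KERNEL node=-1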
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/trace/events/kmem.h | 34 +++++++++++++++++++---------------
 mm/slab.c                   |  9 +++++----
 mm/slab.h                   |  4 ++++
 mm/slab_common.c            |  6 ++----
 mm/slob.c                   | 10 +++++-----
 mm/slub.c                   | 10 +++++-----
 6 files changed, 40 insertions(+), 33 deletions(-)

diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index ddc8c944f417..35e6887c6101 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -61,16 +61,18 @@ DEFINE_EVENT(kmem_alloc, kmem_cache_alloc,
 
 DECLARE_EVENT_CLASS(kmem_alloc_node,
 
-	TP_PROTO(unsigned long call_site,
+	TP_PROTO(const char *name,
+		 unsigned long call_site,
 		 const void *ptr,
 		 size_t bytes_req,
 		 size_t bytes_alloc,
 		 gfp_t gfp_flags,
 		 int node),
 
-	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node),
+	TP_ARGS(name, call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node),
 
 	TP_STRUCT__entry(
+		__string(	name,		name		)
 		__field(	unsigned long,	call_site	)
 		__field(	const void *,	ptr		)
 		__field(	size_t,		bytes_req	)
@@ -80,6 +82,7 @@ DECLARE_EVENT_CLASS(kmem_alloc_node,
 	),
 
 	TP_fast_assign(
+		__assign_str(name, name);
 		__entry->call_site	= call_site;
 		__entry->ptr		= ptr;
 		__entry->bytes_req	= bytes_req;
@@ -88,7 +91,8 @@ DECLARE_EVENT_CLASS(kmem_alloc_node,
 		__entry->node		= node;
 	),
 
-	TP_printk("call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s node=%d",
+	TP_printk("name=%s call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s node=%d",
+		__get_str(name),
 		(void *)__entry->call_site,
 		__entry->ptr,
 		__entry->bytes_req,
@@ -99,20 +103,20 @@ DECLARE_EVENT_CLASS(kmem_alloc_node,
 
 DEFINE_EVENT(kmem_alloc_node, kmalloc_node,
 
-	TP_PROTO(unsigned long call_site, const void *ptr,
-		 size_t bytes_req, size_t bytes_alloc,
+	TP_PROTO(const char *name, unsigned long call_site,
+		 const void *ptr, size_t bytes_req, size_t bytes_alloc,
 		 gfp_t gfp_flags, int node),
 
-	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node)
+	TP_ARGS(name, call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node)
 );
 
 DEFINE_EVENT(kmem_alloc_node, kmem_cache_alloc_node,
 
-	TP_PROTO(unsigned long call_site, const void *ptr,
-		 size_t bytes_req, size_t bytes_alloc,
+	TP_PROTO(const char *name, unsigned long call_site,
+		 const void *ptr, size_t bytes_req, size_t bytes_alloc,
 		 gfp_t gfp_flags, int node),
 
-	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node)
+	TP_ARGS(name, call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node)
 );
 
 TRACE_EVENT(kfree,
@@ -137,24 +141,24 @@ TRACE_EVENT(kfree,
 
 TRACE_EVENT(kmem_cache_free,
 
-	TP_PROTO(unsigned long call_site, const void *ptr, const char *name),
+	TP_PROTO(const char *name, unsigned long call_site, const void *ptr),
 
-	TP_ARGS(call_site, ptr, name),
+	TP_ARGS(name, call_site, ptr),
 
 	TP_STRUCT__entry(
+		__string(	name,	name	)
 		__field(	unsigned long,	call_site	)
 		__field(	const void *,	ptr		)
-		__string(	name,	name	)
 	),
 
 	TP_fast_assign(
+		__assign_str(name, name);
 		__entry->call_site = call_site;
 		__entry->ptr = ptr;
-		__assign_str(name, name);
 	),
 
-	TP_printk("call_site=%pS ptr=%p name=%s",
-		  (void *)__entry->call_site, __entry->ptr, __get_str(name))
+	TP_printk("name=%s call_site=%pS ptr=%p",
+		  __get_str(name), (void *)__entry->call_site, __entry->ptr)
 );
 
 TRACE_EVENT(mm_page_free,
diff --git a/mm/slab.c b/mm/slab.c
index 1dfe0f9d5882..3c47d0979706 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3448,8 +3448,9 @@ void *__kmem_cache_alloc_node(struct kmem_cache *cachep, struct list_lru *lru,
 	void *ret = slab_alloc_node(cachep, lru, flags, nodeid,
 				    cachep->object_size, caller);
 
-	trace_kmem_cache_alloc_node(caller, ret, cachep->object_size,
-				    cachep->size, flags, nodeid);
+	trace_kmem_cache_alloc_node(cachep->name, caller, ret,
+				    cachep->object_size, cachep->size,
+				    flags, nodeid);
 
 	return ret;
 }
@@ -3518,7 +3519,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 	ret = slab_alloc_node(cachep, NULL, flags, nodeid, size, _RET_IP_);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmalloc_node(_RET_IP_, ret,
+	trace_kmalloc_node(cachep->name, _RET_IP_, ret,
 			   size, cachep->size,
 			   flags, nodeid);
 	return ret;
@@ -3593,7 +3594,7 @@ void kmem_cache_free(struct kmem_cache *cachep, void *objp)
 	if (!cachep)
 		return;
 
-	trace_kmem_cache_free(_RET_IP_, objp, cachep->name);
+	trace_kmem_cache_free(cachep->name, _RET_IP_, objp);
 	local_irq_save(flags);
 	debug_check_no_locks_freed(objp, cachep->object_size);
 	if (!(cachep->flags & SLAB_DEBUG_OBJECTS))
diff --git a/mm/slab.h b/mm/slab.h
index b864c5bc4c25..45ddb19df319 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -275,6 +275,10 @@ void create_kmalloc_caches(slab_flags_t);
 struct kmem_cache *kmalloc_slab(size_t, gfp_t);
 #endif
 
+/* cache names for tracepoints where it has no corresponding cache */
+#define KMALLOC_LARGE_NAME "kmalloc_large_node"
+#define KMALLOC_NAME "kmalloc_node"
+
 gfp_t kmalloc_fix_flags(gfp_t flags);
 
 /* Functions provided by the slab allocators */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 960cc07c3a91..416f0a1f17a6 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -956,10 +956,8 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 	ptr = kasan_kmalloc_large(ptr, size, flags);
 	/* As ptr might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ptr, size, 1, flags);
-	trace_kmalloc_node(_RET_IP_, ptr,
-			   size, PAGE_SIZE << order,
-			   flags, node);
-
+	trace_kmalloc_node(KMALLOC_LARGE_NAME, _RET_IP_, ptr, size,
+			   PAGE_SIZE << order, flags, node);
 	return ptr;
 }
 EXPORT_SYMBOL(kmalloc_large_node);
diff --git a/mm/slob.c b/mm/slob.c
index 6c7c30845056..8abde6037d95 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -505,7 +505,7 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 		*m = size;
 		ret = (void *)m + minalign;
 
-		trace_kmalloc_node(caller, ret,
+		trace_kmalloc_node(KMALLOC_NAME, caller, ret,
 				   size, size + minalign, gfp, node);
 	} else {
 		unsigned int order = get_order(size);
@@ -514,7 +514,7 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 			gfp |= __GFP_COMP;
 		ret = slob_new_pages(gfp, order, node);
 
-		trace_kmalloc_node(caller, ret,
+		trace_kmalloc_node(KMALLOC_LARGE_NAME, caller, ret,
 				   size, PAGE_SIZE << order, gfp, node);
 	}
 
@@ -597,12 +597,12 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node,
 
 	if (c->size < PAGE_SIZE) {
 		b = slob_alloc(c->size, flags, c->align, node, 0);
-		trace_kmem_cache_alloc_node(caller, b, c->object_size,
+		trace_kmem_cache_alloc_node(c->name, caller, b, c->object_size,
 					    SLOB_UNITS(c->size) * SLOB_UNIT,
 					    flags, node);
 	} else {
 		b = slob_new_pages(flags, get_order(c->size), node);
-		trace_kmem_cache_alloc_node(caller, b, c->object_size,
+		trace_kmem_cache_alloc_node(c->name, caller, b, c->object_size,
 					    PAGE_SIZE << get_order(c->size),
 					    flags, node);
 	}
@@ -648,7 +648,7 @@ static void kmem_rcu_free(struct rcu_head *head)
 void kmem_cache_free(struct kmem_cache *c, void *b)
 {
 	kmemleak_free_recursive(b, c->flags);
-	trace_kmem_cache_free(_RET_IP_, b, c->name);
+	trace_kmem_cache_free(c->name, _RET_IP_, b);
 	if (unlikely(c->flags & SLAB_TYPESAFE_BY_RCU)) {
 		struct slob_rcu *slob_rcu;
slob_rcu = b + (c->size - sizeof(struct slob_rcu)); diff --git a/mm/slub.c b/mm/slub.c index 1dc9e8eebb62..de03fa1f5667 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3215,7 +3215,7 @@ void *__kmem_cache_alloc_node(struct kmem_cache *s, struct list_lru *lru, gfp_t { void *ret = slab_alloc_node(s, lru, gfpflags, node, caller, s->object_size); - trace_kmem_cache_alloc_node(caller, ret, s->object_size, + trace_kmem_cache_alloc_node(s->name, caller, ret, s->object_size, s->size, gfpflags, node); return ret; @@ -3229,7 +3229,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s, { void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, size); - trace_kmalloc_node(_RET_IP_, ret, + trace_kmalloc_node(s->name, _RET_IP_, ret, size, s->size, gfpflags, node); ret = kasan_kmalloc(s, ret, size, gfpflags); @@ -3471,7 +3471,7 @@ void kmem_cache_free(struct kmem_cache *s, void *x) s = cache_from_obj(s, x); if (!s) return; - trace_kmem_cache_free(_RET_IP_, x, s->name); + trace_kmem_cache_free(s->name, _RET_IP_, x); slab_free(s, virt_to_slab(x), x, NULL, 1, _RET_IP_); } EXPORT_SYMBOL(kmem_cache_free); @@ -4352,7 +4352,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node) ret = slab_alloc_node(s, NULL, flags, node, _RET_IP_, size); - trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node); + trace_kmalloc_node(s->name, _RET_IP_, ret, size, s->size, flags, node); ret = kasan_kmalloc(s, ret, size, flags); @@ -4811,7 +4811,7 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags, ret = slab_alloc_node(s, NULL, gfpflags, node, caller, size); /* Honor the call site pointer we received. */ - trace_kmalloc_node(caller, ret, size, s->size, gfpflags, node); + trace_kmalloc_node(s->name, caller, ret, size, s->size, gfpflags, node); return ret; }
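To make the new calling convention concrete: after this patch every allocation event leads with the cache name. A minimal sketch of an allocator-side call site, assuming only the event class declared above (the helper function and the sample output values are invented for illustration):

static void *report_alloc(struct kmem_cache *s, void *ret,
			  gfp_t flags, int node)
{
	/* the cache name is now the first tracepoint argument */
	trace_kmem_cache_alloc_node(s->name, _RET_IP_, ret,
				    s->object_size, s->size, flags, node);
	/*
	 * Rendered by the TP_printk format above as, e.g.:
	 *   name=dentry call_site=__d_alloc+0x2a/0x1c0 ptr=...
	 *   bytes_req=192 bytes_alloc=192 gfp_flags=GFP_KERNEL node=-1
	 */
	return ret;
}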
From patchwork Thu Apr 14 08:57:19 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12813167
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew Wilcox, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 15/23] mm/slab_common: use same tracepoint in kmalloc and normal caches
Date: Thu, 14 Apr 2022 17:57:19 +0900
Message-Id: <20220414085727.643099-16-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Now that tracepoints print cache names, we can distinguish kmalloc and normal cache allocations. Use the same tracepoint for kmalloc and normal caches. After this patch, there are only two tracepoints left in the slab allocators: kmem_cache_alloc_node and kmem_cache_free. Remove all unused tracepoints.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> --- include/trace/events/kmem.h | 79 ------------------------------------- mm/slab.c | 8 ++-- mm/slab_common.c | 9 ++--- mm/slob.c | 14 ++++--- mm/slub.c | 19 +++++---- 5 files changed, 27 insertions(+), 102 deletions(-) diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h index 35e6887c6101..ca67ba5fd76a 100644 --- a/include/trace/events/kmem.h +++ b/include/trace/events/kmem.h @@ -9,56 +9,6 @@ #include #include -DECLARE_EVENT_CLASS(kmem_alloc, - - TP_PROTO(unsigned long call_site, - const void *ptr, - size_t bytes_req, - size_t bytes_alloc, - gfp_t gfp_flags), - - TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags), - - TP_STRUCT__entry( - __field( unsigned long, call_site ) - __field( const void *, ptr ) - __field( size_t, bytes_req ) - __field( size_t, bytes_alloc ) - __field( gfp_t, gfp_flags ) - ), - - TP_fast_assign( - __entry->call_site = call_site; - __entry->ptr = ptr; - __entry->bytes_req = bytes_req; - __entry->bytes_alloc = bytes_alloc; - __entry->gfp_flags = gfp_flags; - ), - - TP_printk("call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s", - (void *)__entry->call_site, - __entry->ptr, - __entry->bytes_req, - __entry->bytes_alloc, - show_gfp_flags(__entry->gfp_flags)) -); - -DEFINE_EVENT(kmem_alloc, kmalloc, - - TP_PROTO(unsigned long call_site, const void *ptr, - size_t bytes_req, size_t bytes_alloc, gfp_t gfp_flags), - - TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags) -); - -DEFINE_EVENT(kmem_alloc, kmem_cache_alloc, - - TP_PROTO(unsigned long call_site, const void *ptr, - size_t bytes_req, size_t bytes_alloc, gfp_t gfp_flags), - - TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags) -); - DECLARE_EVENT_CLASS(kmem_alloc_node, TP_PROTO(const char *name, @@ -101,15 +51,6 @@ DECLARE_EVENT_CLASS(kmem_alloc_node, __entry->node) ); -DEFINE_EVENT(kmem_alloc_node, kmalloc_node, - - TP_PROTO(const char *name, unsigned long call_site, - const void *ptr, size_t bytes_req, size_t bytes_alloc, - gfp_t gfp_flags, int node), - - TP_ARGS(name, call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node) -); - DEFINE_EVENT(kmem_alloc_node, kmem_cache_alloc_node, TP_PROTO(const char *name, unsigned long call_site, @@ -119,26 +60,6 @@ DEFINE_EVENT(kmem_alloc_node, kmem_cache_alloc_node, TP_ARGS(name, call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node) ); -TRACE_EVENT(kfree, - - TP_PROTO(unsigned long call_site, const void *ptr), - - TP_ARGS(call_site, ptr), - - TP_STRUCT__entry( - __field( unsigned long, call_site ) - __field( const void *, ptr ) - ), - - TP_fast_assign( - __entry->call_site = call_site; - __entry->ptr = ptr; - ), - - TP_printk("call_site=%pS ptr=%p", - (void *)__entry->call_site, __entry->ptr) -); - TRACE_EVENT(kmem_cache_free, TP_PROTO(const char *name, unsigned long call_site, const void *ptr), diff --git a/mm/slab.c b/mm/slab.c index 3c47d0979706..b9959a6b5c48 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3519,9 +3519,9 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep, ret = slab_alloc_node(cachep, NULL, flags, nodeid, size, _RET_IP_); ret = kasan_kmalloc(cachep, ret, size, flags); - trace_kmalloc_node(cachep->name, _RET_IP_, ret, - size, cachep->size, - flags, nodeid); + trace_kmem_cache_alloc_node(cachep->name, _RET_IP_, ret, + size, cachep->size, + flags, nodeid); return ret; } EXPORT_SYMBOL(kmem_cache_alloc_node_trace); @@ -3657,7 +3657,6 @@ void kfree(const void *objp) unsigned long flags; struct folio *folio; - trace_kfree(_RET_IP_, objp); if 
(unlikely(ZERO_OR_NULL_PTR(objp))) return; @@ -3669,6 +3668,7 @@ void kfree(const void *objp) } c = folio_slab(folio)->slab_cache; + trace_kmem_cache_free(c->name, _RET_IP_, objp); local_irq_save(flags); kfree_debugcheck(objp); diff --git a/mm/slab_common.c b/mm/slab_common.c index 416f0a1f17a6..3d1569085c54 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -910,6 +910,7 @@ void free_large_kmalloc(struct folio *folio, void *object) if (WARN_ON_ONCE(order == 0)) pr_warn_once("object pointer: 0x%p\n", object); + trace_kmem_cache_free(KMALLOC_LARGE_NAME, _RET_IP_, object); kmemleak_free(object); kasan_kfree_large(object); @@ -956,8 +957,8 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node) ptr = kasan_kmalloc_large(ptr, size, flags); /* As ptr might get tagged, call kmemleak hook after KASAN. */ kmemleak_alloc(ptr, size, 1, flags); - trace_kmalloc_node(KMALLOC_LARGE_NAME, _RET_IP_, ptr, size, - PAGE_SIZE << order, flags, node); + trace_kmem_cache_alloc_node(KMALLOC_LARGE_NAME, _RET_IP_, ptr, size, + PAGE_SIZE << order, flags, node); return ptr; } EXPORT_SYMBOL(kmalloc_large_node); @@ -1290,11 +1291,7 @@ size_t ksize(const void *objp) EXPORT_SYMBOL(ksize); /* Tracepoints definitions. */ -EXPORT_TRACEPOINT_SYMBOL(kmalloc); -EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc); -EXPORT_TRACEPOINT_SYMBOL(kmalloc_node); EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc_node); -EXPORT_TRACEPOINT_SYMBOL(kfree); EXPORT_TRACEPOINT_SYMBOL(kmem_cache_free); int should_failslab(struct kmem_cache *s, gfp_t gfpflags) diff --git a/mm/slob.c b/mm/slob.c index 8abde6037d95..b1f291128e94 100644 --- a/mm/slob.c +++ b/mm/slob.c @@ -505,8 +505,8 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller) *m = size; ret = (void *)m + minalign; - trace_kmalloc_node(KMALLOC_NAME, caller, ret, - size, size + minalign, gfp, node); + trace_kmem_cache_alloc_node(KMALLOC_NAME, caller, ret, + size, size + minalign, gfp, node); } else { unsigned int order = get_order(size); @@ -514,8 +514,9 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller) gfp |= __GFP_COMP; ret = slob_new_pages(gfp, order, node); - trace_kmalloc_node(KMALLOC_LARGE_NAME, caller, ret, - size, PAGE_SIZE << order, gfp, node); + trace_kmem_cache_alloc_node(KMALLOC_LARGE_NAME, caller, + ret, size, PAGE_SIZE << order, + gfp, node); } kmemleak_alloc(ret, size, 1, gfp); @@ -533,8 +534,6 @@ void kfree(const void *block) { struct folio *sp; - trace_kfree(_RET_IP_, block); - if (unlikely(ZERO_OR_NULL_PTR(block))) return; kmemleak_free(block); @@ -543,10 +542,13 @@ void kfree(const void *block) if (folio_test_slab(sp)) { int align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN); unsigned int *m = (unsigned int *)(block - align); + + trace_kmem_cache_free(KMALLOC_NAME, _RET_IP_, block); slob_free(m, *m + align); } else { unsigned int order = folio_order(sp); + trace_kmem_cache_free(KMALLOC_LARGE_NAME, _RET_IP_, block); mod_node_page_state(folio_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order)); __free_pages(folio_page(sp, 0), order); diff --git a/mm/slub.c b/mm/slub.c index de03fa1f5667..d53e9e22d67e 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3229,8 +3229,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s, { void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, size); - trace_kmalloc_node(s->name, _RET_IP_, ret, - size, s->size, gfpflags, node); + trace_kmem_cache_alloc_node(s->name, _RET_IP_, ret, + size, s->size, gfpflags, node); ret = kasan_kmalloc(s, ret, size, gfpflags); return ret;
@@ -4352,7 +4352,8 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node) ret = slab_alloc_node(s, NULL, flags, node, _RET_IP_, size); - trace_kmalloc_node(s->name, _RET_IP_, ret, size, s->size, flags, node); + trace_kmem_cache_alloc_node(s->name, _RET_IP_, ret, size, + s->size, flags, node); ret = kasan_kmalloc(s, ret, size, flags); @@ -4431,8 +4432,7 @@ void kfree(const void *x) struct folio *folio; struct slab *slab; void *object = (void *)x; - - trace_kfree(_RET_IP_, x); + struct kmem_cache *s; if (unlikely(ZERO_OR_NULL_PTR(x))) return; @@ -4442,8 +4442,12 @@ free_large_kmalloc(folio, object); return; } + slab = folio_slab(folio); - slab_free(slab->slab_cache, slab, object, NULL, 1, _RET_IP_); + s = slab->slab_cache; + + trace_kmem_cache_free(s->name, _RET_IP_, x); + slab_free(s, slab, object, NULL, 1, _RET_IP_); } EXPORT_SYMBOL(kfree); @@ -4811,7 +4815,8 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags, ret = slab_alloc_node(s, NULL, gfpflags, node, caller, size); /* Honor the call site pointer we received. */ - trace_kmalloc_node(s->name, caller, ret, size, s->size, gfpflags, node); + trace_kmem_cache_alloc_node(s->name, caller, ret, size, + s->size, gfpflags, node); return ret; }
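With a single event class serving both paths, the name field becomes the only discriminator between kmalloc and dedicated-cache allocations. A rough sketch of what the two styles now emit, assuming a cache created elsewhere with kmem_cache_create() and SLUB's conventional kmalloc-<size> cache naming (the names in the comments are illustrative):

static struct kmem_cache *my_cache;	/* assumed: created elsewhere */

static void tracepoint_demo(void)
{
	void *obj = kmem_cache_alloc(my_cache, GFP_KERNEL);
	/* -> kmem_cache_alloc_node: name=my_cache ... */

	void *buf = kmalloc(64, GFP_KERNEL);
	/* -> kmem_cache_alloc_node: name=kmalloc-64 ... */

	kfree(buf);
	kmem_cache_free(my_cache, obj);
}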
From patchwork Thu Apr 14 08:57:20 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12813168
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew Wilcox, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 16/23] mm/slab_common: rename tracepoint
Date: Thu, 14 Apr 2022 17:57:20 +0900
Message-Id: <20220414085727.643099-17-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>

To reduce the overhead of printing the tracepoint name, rename the kmem_cache_alloc_node tracepoint to kmem_cache_alloc.
Suggested-by: Vlastimil Babka Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> --- include/trace/events/kmem.h | 4 ++-- mm/slab.c | 8 ++++---- mm/slab_common.c | 6 +++--- mm/slob.c | 22 +++++++++++----------- mm/slub.c | 16 ++++++++-------- 5 files changed, 28 insertions(+), 28 deletions(-) diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h index ca67ba5fd76a..58edb2e3e5a4 100644 --- a/include/trace/events/kmem.h +++ b/include/trace/events/kmem.h @@ -9,7 +9,7 @@ #include #include -DECLARE_EVENT_CLASS(kmem_alloc_node, +DECLARE_EVENT_CLASS(kmem_alloc, TP_PROTO(const char *name, unsigned long call_site, @@ -51,7 +51,7 @@ DECLARE_EVENT_CLASS(kmem_alloc_node, __entry->node) ); -DEFINE_EVENT(kmem_alloc_node, kmem_cache_alloc_node, +DEFINE_EVENT(kmem_alloc, kmem_cache_alloc, TP_PROTO(const char *name, unsigned long call_site, const void *ptr, size_t bytes_req, size_t bytes_alloc, diff --git a/mm/slab.c b/mm/slab.c index b9959a6b5c48..424168b96790 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3448,7 +3448,7 @@ void *__kmem_cache_alloc_node(struct kmem_cache *cachep, struct list_lru *lru, void *ret = slab_alloc_node(cachep, lru, flags, nodeid, cachep->object_size, caller); - trace_kmem_cache_alloc_node(cachep->name, caller, ret, + trace_kmem_cache_alloc(cachep->name, caller, ret, cachep->object_size, cachep->size, flags, nodeid); @@ -3519,9 +3519,9 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep, ret = slab_alloc_node(cachep, NULL, flags, nodeid, size, _RET_IP_); ret = kasan_kmalloc(cachep, ret, size, flags); - trace_kmem_cache_alloc_node(cachep->name, _RET_IP_, ret, - size, cachep->size, - flags, nodeid); + trace_kmem_cache_alloc(cachep->name, _RET_IP_, ret, + size, cachep->size, + flags, nodeid); return ret; } EXPORT_SYMBOL(kmem_cache_alloc_node_trace); diff --git a/mm/slab_common.c b/mm/slab_common.c index 3d1569085c54..3cd5d7a47ec7 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -957,8 +957,8 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node) ptr = kasan_kmalloc_large(ptr, size, flags); /* As ptr might get tagged, call kmemleak hook after KASAN. */ kmemleak_alloc(ptr, size, 1, flags); - trace_kmem_cache_alloc_node(KMALLOC_LARGE_NAME, _RET_IP_, ptr, size, - PAGE_SIZE << order, flags, node); + trace_kmem_cache_alloc(KMALLOC_LARGE_NAME, _RET_IP_, ptr, size, + PAGE_SIZE << order, flags, node); return ptr; } EXPORT_SYMBOL(kmalloc_large_node); @@ -1291,7 +1291,7 @@ size_t ksize(const void *objp) EXPORT_SYMBOL(ksize); /* Tracepoints definitions. 
*/ -EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc_node); +EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc); EXPORT_TRACEPOINT_SYMBOL(kmem_cache_free); int should_failslab(struct kmem_cache *s, gfp_t gfpflags) diff --git a/mm/slob.c b/mm/slob.c index b1f291128e94..1bb4c577b908 100644 --- a/mm/slob.c +++ b/mm/slob.c @@ -505,8 +505,8 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller) *m = size; ret = (void *)m + minalign; - trace_kmem_cache_alloc_node(KMALLOC_NAME, caller, ret, - size, size + minalign, gfp, node); + trace_kmem_cache_alloc(KMALLOC_NAME, caller, ret, + size, size + minalign, gfp, node); } else { unsigned int order = get_order(size); @@ -514,9 +514,9 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller) gfp |= __GFP_COMP; ret = slob_new_pages(gfp, order, node); - trace_kmem_cache_alloc_node(KMALLOC_LARGE_NAME, caller, - ret, size, PAGE_SIZE << order, - gfp, node); + trace_kmem_cache_alloc(KMALLOC_LARGE_NAME, caller, + ret, size, PAGE_SIZE << order, + gfp, node); } kmemleak_alloc(ret, size, 1, gfp); @@ -599,14 +599,14 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node, if (c->size < PAGE_SIZE) { b = slob_alloc(c->size, flags, c->align, node, 0); - trace_kmem_cache_alloc_node(c->name, caller, b, c->object_size, - SLOB_UNITS(c->size) * SLOB_UNIT, - flags, node); + trace_kmem_cache_alloc(c->name, caller, b, c->object_size, + SLOB_UNITS(c->size) * SLOB_UNIT, + flags, node); } else { b = slob_new_pages(flags, get_order(c->size), node); - trace_kmem_cache_alloc_node(c->name, caller, b, c->object_size, - PAGE_SIZE << get_order(c->size), - flags, node); + trace_kmem_cache_alloc(c->name, caller, b, c->object_size, + PAGE_SIZE << get_order(c->size), + flags, node); } if (b && c->ctor) { diff --git a/mm/slub.c b/mm/slub.c index d53e9e22d67e..a088d4fa1062 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3215,8 +3215,8 @@ void *__kmem_cache_alloc_node(struct kmem_cache *s, struct list_lru *lru, gfp_t { void *ret = slab_alloc_node(s, lru, gfpflags, node, caller, s->object_size); - trace_kmem_cache_alloc_node(s->name, caller, ret, s->object_size, - s->size, gfpflags, node); + trace_kmem_cache_alloc(s->name, caller, ret, s->object_size, + s->size, gfpflags, node); return ret; } @@ -3229,8 +3229,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s, { void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, size); - trace_kmem_cache_alloc_node(s->name, _RET_IP_, ret, - size, s->size, gfpflags, node); + trace_kmem_cache_alloc(s->name, _RET_IP_, ret, + size, s->size, gfpflags, node); ret = kasan_kmalloc(s, ret, size, gfpflags); return ret; @@ -4352,8 +4352,8 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node) ret = slab_alloc_node(s, NULL, flags, node, _RET_IP_, size); - trace_kmem_cache_alloc_node(s->name, _RET_IP_, ret, size, - s->size, flags, node); + trace_kmem_cache_alloc(s->name, _RET_IP_, ret, size, + s->size, flags, node); ret = kasan_kmalloc(s, ret, size, flags); @@ -4815,8 +4815,8 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags, ret = slab_alloc_node(s, NULL, gfpflags, node, caller, size); /* Honor the call site pointer we received. 
*/ - trace_kmem_cache_alloc_node(s->name, caller, ret, size, - s->size, gfpflags, node); + trace_kmem_cache_alloc(s->name, caller, ret, size, + s->size, gfpflags, node); return ret; }
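The rename is mechanical for in-kernel call sites, but it also moves the event's location under tracefs, which matters to userspace consumers. A brief sketch, assuming the usual events/<system>/<event> layout for the kmem trace system (the path is conventional, not taken from this patch):

/*
 * The event is now registered as kmem:kmem_cache_alloc, i.e. roughly:
 *   /sys/kernel/tracing/events/kmem/kmem_cache_alloc/enable
 * In-kernel callers only swap the symbol name:
 */
static void report(struct kmem_cache *s, void *ret, gfp_t flags, int node)
{
	trace_kmem_cache_alloc(s->name, _RET_IP_, ret,
			       s->object_size, s->size, flags, node);
}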
From patchwork Thu Apr 14 08:57:21 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12813169
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew Wilcox, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 17/23] mm/slab_common: implement __kmem_cache_free()
Date: Thu, 14 Apr 2022 17:57:21 +0900
Message-Id: <20220414085727.643099-18-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>

To generalize kfree in a later patch, implement __kmem_cache_free(), which takes the caller address, and make kmem_cache_free() a wrapper around it. Now that kmem_cache_free() is an inline function, we should use _THIS_IP_ instead of _RET_IP_ for consistency.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> --- include/linux/slab.h | 16 +++++++++++++++- mm/slab.c | 17 +++++------------ mm/slob.c | 13 +++++++------ mm/slub.c | 9 +++++---- 4 files changed, 32 insertions(+), 23 deletions(-) diff --git a/include/linux/slab.h b/include/linux/slab.h index c8c82087c3f9..0630c37ee630 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -462,7 +462,21 @@ void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru, gfp_t gfp return __kmem_cache_alloc_node(s, lru, gfpflags, NUMA_NO_NODE, _THIS_IP_); } -void kmem_cache_free(struct kmem_cache *s, void *objp); +void __kmem_cache_free(struct kmem_cache *s, void *objp, unsigned long caller __maybe_unused); + +/** + * kmem_cache_free - Deallocate an object + * @s: The cache the allocation was from. + * @objp: The previously allocated object. + * + * Free an object which was previously allocated from this + * cache. + */ +static __always_inline void kmem_cache_free(struct kmem_cache *s, void *objp) +{ + __kmem_cache_free(s, objp, _THIS_IP_); +} + /* * Bulk allocation and freeing operations. These are accelerated in an diff --git a/mm/slab.c b/mm/slab.c index 424168b96790..d35873da5572 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3579,30 +3579,23 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab) } #endif -/** - * kmem_cache_free - Deallocate an object - * @cachep: The cache the allocation was from. - * @objp: The previously allocated object. - * - * Free an object which was previously allocated from this - * cache.
- */ -void kmem_cache_free(struct kmem_cache *cachep, void *objp) +void __kmem_cache_free(struct kmem_cache *cachep, void *objp, + unsigned long caller __maybe_unused) { unsigned long flags; cachep = cache_from_obj(cachep, objp); if (!cachep) return; - trace_kmem_cache_free(cachep->name, _RET_IP_, objp); + trace_kmem_cache_free(cachep->name, caller, objp); local_irq_save(flags); debug_check_no_locks_freed(objp, cachep->object_size); if (!(cachep->flags & SLAB_DEBUG_OBJECTS)) debug_check_no_obj_freed(objp, cachep->object_size); - __cache_free(cachep, objp, _RET_IP_); + __cache_free(cachep, objp, caller); local_irq_restore(flags); } -EXPORT_SYMBOL(kmem_cache_free); +EXPORT_SYMBOL(__kmem_cache_free); void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p) { diff --git a/mm/slob.c b/mm/slob.c index 1bb4c577b908..e893d182d471 100644 --- a/mm/slob.c +++ b/mm/slob.c @@ -631,7 +631,7 @@ void *__kmem_cache_alloc_node(struct kmem_cache *cachep, struct list_lru *lru __ } EXPORT_SYMBOL(__kmem_cache_alloc_node); -static void __kmem_cache_free(void *b, int size) +static void ____kmem_cache_free(void *b, int size) { if (size < PAGE_SIZE) slob_free(b, size); @@ -644,23 +644,24 @@ static void kmem_rcu_free(struct rcu_head *head) struct slob_rcu *slob_rcu = (struct slob_rcu *)head; void *b = (void *)slob_rcu - (slob_rcu->size - sizeof(struct slob_rcu)); - __kmem_cache_free(b, slob_rcu->size); + ____kmem_cache_free(b, slob_rcu->size); } -void kmem_cache_free(struct kmem_cache *c, void *b) +void __kmem_cache_free(struct kmem_cache *c, void *b, + unsigned long caller __maybe_unused) { kmemleak_free_recursive(b, c->flags); - trace_kmem_cache_free(c->name, _RET_IP_, b); + trace_kmem_cache_free(c->name, caller, b); if (unlikely(c->flags & SLAB_TYPESAFE_BY_RCU)) { struct slob_rcu *slob_rcu; slob_rcu = b + (c->size - sizeof(struct slob_rcu)); slob_rcu->size = c->size; call_rcu(&slob_rcu->head, kmem_rcu_free); } else { - __kmem_cache_free(b, c->size); + ____kmem_cache_free(b, c->size); } } -EXPORT_SYMBOL(kmem_cache_free); +EXPORT_SYMBOL(__kmem_cache_free); void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p) { diff --git a/mm/slub.c b/mm/slub.c index a088d4fa1062..a72a2d08e793 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3466,15 +3466,16 @@ void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr) } #endif -void kmem_cache_free(struct kmem_cache *s, void *x) +void __kmem_cache_free(struct kmem_cache *s, void *x, + unsigned long caller __maybe_unused) { s = cache_from_obj(s, x); if (!s) return; - trace_kmem_cache_free(s->name, _RET_IP_, x); - slab_free(s, virt_to_slab(x), x, NULL, 1, _RET_IP_); + trace_kmem_cache_free(s->name, caller, x); + slab_free(s, virt_to_slab(x), x, NULL, 1, caller); } -EXPORT_SYMBOL(kmem_cache_free); +EXPORT_SYMBOL(__kmem_cache_free); struct detached_freelist { struct slab *slab;
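The subtle point of this patch is the switch from _RET_IP_ to _THIS_IP_. Once kmem_cache_free() is an inline wrapper, its body expands into the caller, so _RET_IP_ would name the caller's caller; _THIS_IP_ yields an address inside the expansion site, i.e. the function actually doing the freeing. Condensed to its essentials (signatures follow the hunks above, bodies elided):

/* out-of-line worker: takes the call-site address explicitly */
void __kmem_cache_free(struct kmem_cache *s, void *objp, unsigned long caller);

/*
 * inline wrapper: _THIS_IP_ is evaluated inside the expanded body,
 * so the recorded call site is the function that invoked the free
 */
static __always_inline void kmem_cache_free(struct kmem_cache *s, void *objp)
{
	__kmem_cache_free(s, objp, _THIS_IP_);
}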
From patchwork Thu Apr 14 08:57:22 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12813170
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew Wilcox, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 18/23] mm/sl[au]b: generalize kmalloc subsystem
Date: Thu, 14 Apr 2022 17:57:22 +0900
Message-Id: <20220414085727.643099-19-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Now everything in the kmalloc subsystem can be generalized. Let's do it! Generalize __kmalloc_node_track_caller(), kfree(), __ksize() and __kmalloc_node(), and move them to slab_common.c.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> --- mm/slab.c | 94 ----------------------------------------------- mm/slab_common.c | 95 ++++++++++++++++++++++++++++++++++++++++++++++++ mm/slub.c | 88 -------------------------------------------- 3 files changed, 95 insertions(+), 182 deletions(-) diff --git a/mm/slab.c b/mm/slab.c index d35873da5572..fc00aca62ae3 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3527,36 +3527,6 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep, EXPORT_SYMBOL(kmem_cache_alloc_node_trace); #endif -static __always_inline void * -__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller) -{ - struct kmem_cache *cachep; - void *ret; - - if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) - return kmalloc_large_node(size, flags, node); - cachep = kmalloc_slab(size, flags); - if (unlikely(ZERO_OR_NULL_PTR(cachep))) - return cachep; - ret = kmem_cache_alloc_node_trace(cachep, flags, node, size); - ret = kasan_kmalloc(cachep, ret, size, flags); - - return ret; -} - -void *__kmalloc_node(size_t size, gfp_t flags, int node) -{ - return __do_kmalloc_node(size, flags, node, _RET_IP_); -} -EXPORT_SYMBOL(__kmalloc_node); - -void *__kmalloc_node_track_caller(size_t size, gfp_t flags, - int node, unsigned long caller) -{ - return __do_kmalloc_node(size, flags, node, caller); -} -EXPORT_SYMBOL(__kmalloc_node_track_caller); - #ifdef CONFIG_PRINTK void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab) { @@ -3635,43 +3605,6 @@ void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p) } EXPORT_SYMBOL(kmem_cache_free_bulk); -/** - * kfree - free previously allocated memory - * @objp: pointer returned by kmalloc. - * - * If @objp is NULL, no operation is performed. - * - * Don't free memory not originally allocated by kmalloc() - * or you will run into trouble.
- */ -void kfree(const void *objp) -{ - struct kmem_cache *c; - unsigned long flags; - struct folio *folio; - - - if (unlikely(ZERO_OR_NULL_PTR(objp))) - return; - - folio = virt_to_folio(objp); - if (!folio_test_slab(folio)) { - free_large_kmalloc(folio, (void *)objp); - return; - } - - c = folio_slab(folio)->slab_cache; - trace_kmem_cache_free(c->name, _RET_IP_, objp); - - local_irq_save(flags); - kfree_debugcheck(objp); - debug_check_no_locks_freed(objp, c->object_size); - debug_check_no_obj_freed(objp, c->object_size); - __cache_free(c, (void *)objp, _RET_IP_); - local_irq_restore(flags); -} -EXPORT_SYMBOL(kfree); - /* * This initializes kmem_cache_node or resizes various caches for all nodes. */ @@ -4074,30 +4007,3 @@ void __check_heap_object(const void *ptr, unsigned long n, usercopy_abort("SLAB object", cachep->name, to_user, offset, n); } #endif /* CONFIG_HARDENED_USERCOPY */ - -/** - * __ksize -- Uninstrumented ksize. - * @objp: pointer to the object - * - * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same - * safety checks as ksize() with KASAN instrumentation enabled. - * - * Return: size of the actual memory used by @objp in bytes - */ -size_t __ksize(const void *objp) -{ - struct kmem_cache *c; - struct folio *folio; - - BUG_ON(!objp); - if (unlikely(objp == ZERO_SIZE_PTR)) - return 0; - - folio = virt_to_folio(objp); - if (!folio_test_slab(folio)) - return folio_size(folio); - - c = folio_slab(folio)->slab_cache; - return c->object_size; -} -EXPORT_SYMBOL(__ksize); diff --git a/mm/slab_common.c b/mm/slab_common.c index 3cd5d7a47ec7..daf626e25e12 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -918,6 +918,101 @@ void free_large_kmalloc(struct folio *folio, void *object) -(PAGE_SIZE << order)); __free_pages(folio_page(folio, 0), order); } + +void *__kmalloc_node(size_t size, gfp_t flags, int node) +{ + struct kmem_cache *s; + void *ret; + + if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) + return kmalloc_large_node(size, flags, node); + + s = kmalloc_slab(size, flags); + + if (unlikely(ZERO_OR_NULL_PTR(s))) + return s; + + ret = __kmem_cache_alloc_node(s, NULL, flags, node, _RET_IP_); + ret = kasan_kmalloc(s, ret, size, flags); + + return ret; +} +EXPORT_SYMBOL(__kmalloc_node); + +void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags, + int node, unsigned long caller) +{ + struct kmem_cache *s; + void *ret; + + if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) + return kmalloc_large_node(size, gfpflags, node); + + s = kmalloc_slab(size, gfpflags); + + if (unlikely(ZERO_OR_NULL_PTR(s))) + return s; + + ret = __kmem_cache_alloc_node(s, NULL, gfpflags, node, caller); + + return ret; +} +EXPORT_SYMBOL(__kmalloc_node_track_caller); + +/** + * kfree - free previously allocated memory + * @objp: pointer returned by kmalloc. + * + * If @objp is NULL, no operation is performed. + * + * Don't free memory not originally allocated by kmalloc() + * or you will run into trouble. + */ +void kfree(const void *object) +{ + struct folio *folio; + struct slab *slab; + struct kmem_cache *s; + + if (unlikely(ZERO_OR_NULL_PTR(object))) + return; + + folio = virt_to_folio(object); + if (unlikely(!folio_test_slab(folio))) { + free_large_kmalloc(folio, (void *)object); + return; + } + + slab = folio_slab(folio); + s = slab->slab_cache; + __kmem_cache_free(s, object, _RET_IP_); +} +EXPORT_SYMBOL(kfree); + +/** + * __ksize -- Uninstrumented ksize. 
+ * @objp: pointer to the object + * + * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same + * safety checks as ksize() with KASAN instrumentation enabled. + * + * Return: size of the actual memory used by @objp in bytes + */ +size_t __ksize(const void *object) +{ + struct folio *folio; + + if (unlikely(object == ZERO_SIZE_PTR)) + return 0; + + folio = virt_to_folio(object); + + if (unlikely(!folio_test_slab(folio))) + return folio_size(folio); + + return slab_ksize(folio_slab(folio)->slab_cache); +} +EXPORT_SYMBOL(__ksize); #endif /* !CONFIG_SLOB */ gfp_t kmalloc_fix_flags(gfp_t flags) diff --git a/mm/slub.c b/mm/slub.c index a72a2d08e793..bc9c96ce0521 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -4338,30 +4338,6 @@ static int __init setup_slub_min_objects(char *str) __setup("slub_min_objects=", setup_slub_min_objects); -void *__kmalloc_node(size_t size, gfp_t flags, int node) -{ - struct kmem_cache *s; - void *ret; - - if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) - return kmalloc_large_node(size, flags, node); - - s = kmalloc_slab(size, flags); - - if (unlikely(ZERO_OR_NULL_PTR(s))) - return s; - - ret = slab_alloc_node(s, NULL, flags, node, _RET_IP_, size); - - trace_kmem_cache_alloc(s->name, _RET_IP_, ret, size, - s->size, flags, node); - - ret = kasan_kmalloc(s, ret, size, flags); - - return ret; -} -EXPORT_SYMBOL(__kmalloc_node); - #ifdef CONFIG_HARDENED_USERCOPY /* * Rejects incorrectly sized objects and objects that are to be copied @@ -4412,46 +4388,6 @@ void __check_heap_object(const void *ptr, unsigned long n, } #endif /* CONFIG_HARDENED_USERCOPY */ -size_t __ksize(const void *object) -{ - struct folio *folio; - - if (unlikely(object == ZERO_SIZE_PTR)) - return 0; - - folio = virt_to_folio(object); - - if (unlikely(!folio_test_slab(folio))) - return folio_size(folio); - - return slab_ksize(folio_slab(folio)->slab_cache); -} -EXPORT_SYMBOL(__ksize); - -void kfree(const void *x) -{ - struct folio *folio; - struct slab *slab; - void *object = (void *)x; - struct kmem_cache *s; - - if (unlikely(ZERO_OR_NULL_PTR(x))) - return; - - folio = virt_to_folio(x); - if (unlikely(!folio_test_slab(folio))) { - free_large_kmalloc(folio, object); - return; - } - - slab = folio_slab(folio); - s = slab->slab_cache; - - trace_kmem_cache_free(s->name, _RET_IP_, x); - slab_free(s, slab, object, NULL, 1, _RET_IP_); -} -EXPORT_SYMBOL(kfree); - #define SHRINK_PROMOTE_MAX 32 /* @@ -4799,30 +4735,6 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags) return 0; } -void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags, - int node, unsigned long caller) -{ - struct kmem_cache *s; - void *ret; - - if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) - return kmalloc_large_node(size, gfpflags, node); - - s = kmalloc_slab(size, gfpflags); - - if (unlikely(ZERO_OR_NULL_PTR(s))) - return s; - - ret = slab_alloc_node(s, NULL, gfpflags, node, caller, size); - - /* Honor the call site pointer we received. 
*/ - trace_kmem_cache_alloc(s->name, caller, ret, size, - s->size, gfpflags, node); - - return ret; -} -EXPORT_SYMBOL(__kmalloc_node_track_caller); - #ifdef CONFIG_SYSFS static int count_inuse(struct slab *slab) {
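The generalized kfree() that lands in slab_common.c reduces to a single dispatch decision: is the pointer backed by a slab, or by raw pages from kmalloc_large_node()? Restating the control flow compactly from the hunk above (comments added; the casts mirror the const-qualified parameter):

void kfree(const void *object)
{
	struct folio *folio;

	if (unlikely(ZERO_OR_NULL_PTR(object)))
		return;

	folio = virt_to_folio(object);
	if (unlikely(!folio_test_slab(folio))) {
		/* large kmalloc: raw pages, no kmem_cache involved */
		free_large_kmalloc(folio, (void *)object);
		return;
	}

	/* ordinary slab object: hand it back to its cache */
	__kmem_cache_free(folio_slab(folio)->slab_cache, (void *)object,
			  _RET_IP_);
}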
From patchwork Thu Apr 14 08:57:24 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12813171
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew Wilcox, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 20/23] mm/slab_common: factor out __do_kmalloc_node()
Date: Thu, 14 Apr 2022 17:57:24 +0900
Message-Id: <20220414085727.643099-21-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>

Factor out common code into __do_kmalloc_node().

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> --- mm/slab_common.c | 27 ++++++++++----------------- 1 file changed, 10 insertions(+), 17 deletions(-) diff --git a/mm/slab_common.c b/mm/slab_common.c index 6abe7f61c197..af563e64e8aa 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -919,7 +919,9 @@ void free_large_kmalloc(struct folio *folio, void *object) __free_pages(folio_page(folio, 0), order); } -void *__kmalloc_node(size_t size, gfp_t flags, int node) +static __always_inline +void *__do_kmalloc_node(size_t size, gfp_t flags, int node, + unsigned long caller __maybe_unused) { struct kmem_cache *s; void *ret; @@ -932,31 +934,22 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node) if (unlikely(ZERO_OR_NULL_PTR(s))) return s; - ret = __kmem_cache_alloc_node(s, NULL, flags, node, _RET_IP_); + ret = __kmem_cache_alloc_node(s, NULL, flags, node, caller); ret = kasan_kmalloc(s, ret, size, flags); return ret; } + +void *__kmalloc_node(size_t size, gfp_t flags, int node) +{ + return __do_kmalloc_node(size, flags, node, _RET_IP_); +} EXPORT_SYMBOL(__kmalloc_node); void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags, int node, unsigned long caller) { - struct kmem_cache *s; - void *ret; - - if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) - return kmalloc_large_node(size, gfpflags, node); - - s = kmalloc_slab(size, gfpflags); - - if (unlikely(ZERO_OR_NULL_PTR(s))) - return s; - - ret = __kmem_cache_alloc_node(s, NULL, gfpflags, node, caller); - ret = kasan_kmalloc(s, ret, size, gfpflags); - - return ret; + return __do_kmalloc_node(size, gfpflags, node, caller); } EXPORT_SYMBOL(__kmalloc_node_track_caller);
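This is the standard factoring pattern for caller-tracking APIs: one __always_inline worker takes the call-site address as an explicit parameter, and each exported entry point is a thin shim supplying the right value. In miniature (the worker body is the one in the hunk above; only the shims are repeated here):

void *__kmalloc_node(size_t size, gfp_t flags, int node)
{
	/* plain entry point: the interesting caller is our return address */
	return __do_kmalloc_node(size, flags, node, _RET_IP_);
}

void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
				  int node, unsigned long caller)
{
	/* tracking entry point: the call site was captured further up */
	return __do_kmalloc_node(size, gfpflags, node, caller);
}

Because the worker is __always_inline, both shims compile to straight-line code, and _RET_IP_ in the first shim still refers to the external caller rather than to an intermediate frame.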
From patchwork Thu Apr 14 08:57:25 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12813172
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew Wilcox, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 21/23] mm/sl[au]b: remove kmem_cache_alloc_node_trace()
Date: Thu, 14 Apr 2022 17:57:25 +0900
Message-Id: <20220414085727.643099-22-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

kmem_cache_alloc_node_trace() was introduced by commit 4a92379bdfb4
("slub tracing: move trace calls out of always inlined functions to
reduce kernel code size") to avoid inlining tracepoints for inlined
kmalloc function calls.

Now that kmalloc and normal caches use the same tracepoint,
kmem_cache_alloc_node_trace() can be replaced with
__kmem_cache_alloc_node() plus kasan_kmalloc().
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/linux/slab.h | 26 ++++++++------------------
 mm/slab.c            | 19 -------------------
 mm/slub.c            | 16 ----------------
 3 files changed, 8 insertions(+), 53 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 0630c37ee630..c1aed9d97cf2 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -497,21 +497,6 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 	kmem_cache_free_bulk(NULL, size, p);
 }
 
-#ifdef CONFIG_TRACING
-extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
-					 int node, size_t size) __assume_slab_alignment
-					 __alloc_size(4);
-#else /* CONFIG_TRACING */
-static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
-							 int node, size_t size)
-{
-	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
-
-	ret = kasan_kmalloc(s, ret, size, gfpflags);
-	return ret;
-}
-#endif /* CONFIG_TRACING */
-
 extern void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 					__assume_page_alignment __alloc_size(1);
 
@@ -523,6 +508,9 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
 #ifndef CONFIG_SLOB
 static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
 {
+	struct kmem_cache *s;
+	void *objp;
+
 	if (__builtin_constant_p(size)) {
 		unsigned int index;
 
@@ -534,9 +522,11 @@ static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
 		if (!index)
 			return ZERO_SIZE_PTR;
 
-		return kmem_cache_alloc_node_trace(
-				kmalloc_caches[kmalloc_type(flags)][index],
-				flags, node, size);
+		s = kmalloc_caches[kmalloc_type(flags)][index];
+
+		objp = __kmem_cache_alloc_node(s, NULL, flags, node, _RET_IP_);
+		objp = kasan_kmalloc(s, objp, size, flags);
+		return objp;
 	}
 	return __kmalloc_node(size, flags, node);
 }

diff --git a/mm/slab.c b/mm/slab.c
index fc00aca62ae3..24010e72f603 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3508,25 +3508,6 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_bulk);
 
-#ifdef CONFIG_TRACING
-void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
-				  gfp_t flags,
-				  int nodeid,
-				  size_t size)
-{
-	void *ret;
-
-	ret = slab_alloc_node(cachep, NULL, flags, nodeid, size, _RET_IP_);
-
-	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmem_cache_alloc(cachep->name, _RET_IP_, ret,
-			       size, cachep->size,
-			       flags, nodeid);
-	return ret;
-}
-EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
-#endif
-
 #ifdef CONFIG_PRINTK
 void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 {

diff --git a/mm/slub.c b/mm/slub.c
index bc9c96ce0521..1899c7e1de10 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3222,22 +3222,6 @@ void *__kmem_cache_alloc_node(struct kmem_cache *s, struct list_lru *lru, gfp_t
 }
 EXPORT_SYMBOL(__kmem_cache_alloc_node);
 
-#ifdef CONFIG_TRACING
-void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
-				  gfp_t gfpflags,
-				  int node, size_t size)
-{
-	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, size);
-
-	trace_kmem_cache_alloc(s->name, _RET_IP_, ret,
-			       size, s->size, gfpflags, node);
-
-	ret = kasan_kmalloc(s, ret, size, gfpflags);
-	return ret;
-}
-EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
-#endif
-
 /*
  * Slow path handling. This may still be called frequently since objects
  * have a longer lifetime than the cpu slabs in most processing loads.
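
The kmalloc_node() fast path above hinges on __builtin_constant_p():
because the function is __always_inline, the compiler evaluates the
predicate per call site and discards the untaken branch, so
constant-size kmalloc calls compile straight to a fixed-cache
allocation. A small user-space illustration of that dispatch pattern;
bucket_alloc/generic_alloc/my_alloc are made-up names, and the constant
branch only folds with optimization enabled (e.g. -O2):

	#include <stdio.h>
	#include <stdlib.h>

	static void *bucket_alloc(size_t size)
	{
		printf("compile-time-constant size %zu: fixed-bucket path\n", size);
		return malloc(size);
	}

	static void *generic_alloc(size_t size)
	{
		printf("runtime size %zu: generic path\n", size);
		return malloc(size);
	}

	/* Mirrors kmalloc_node(): forced inlining lets the compiler evaluate
	 * __builtin_constant_p(size) separately at every call site. */
	static inline __attribute__((__always_inline__))
	void *my_alloc(size_t size)
	{
		if (__builtin_constant_p(size))
			return bucket_alloc(size);
		return generic_alloc(size);
	}

	int main(void)
	{
		size_t n = (size_t)rand() % 64 + 1;
		void *a = my_alloc(32); /* literal: bucket path chosen at build time */
		void *b = my_alloc(n);  /* runtime value: generic path */
		free(a);
		free(b);
		return 0;
	}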
From patchwork Thu Apr 14 08:57:26 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12813173
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew Wilcox, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 22/23] mm/sl[auo]b: move definition of __ksize() to mm/slab.h
Date: Thu, 14 Apr 2022 17:57:26 +0900
Message-Id: <20220414085727.643099-23-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

__ksize() is only called by KASAN. Remove the export and move its
declaration to the private header mm/slab.h, since we do not want to
grow its set of callers.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h |  1 -
 mm/slab.h            |  2 ++
 mm/slab_common.c     | 11 +----------
 mm/slob.c            |  1 -
 4 files changed, 3 insertions(+), 12 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index c1aed9d97cf2..e30c0675d6b2 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -187,7 +187,6 @@ int kmem_cache_shrink(struct kmem_cache *s);
 void * __must_check krealloc(const void *objp, size_t new_size, gfp_t flags) __alloc_size(2);
 void kfree(const void *objp);
 void kfree_sensitive(const void *objp);
-size_t __ksize(const void *objp);
 size_t ksize(const void *objp);
 #ifdef CONFIG_PRINTK
 bool kmem_valid_obj(void *object);

diff --git a/mm/slab.h b/mm/slab.h
index 45ddb19df319..5a500894317b 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -690,6 +690,8 @@ void free_large_kmalloc(struct folio *folio, void *object);
 
 #endif /* CONFIG_SLOB */
 
+size_t __ksize(const void *objp);
+
 static inline size_t slab_ksize(const struct kmem_cache *s)
 {
 #ifndef CONFIG_SLUB

diff --git a/mm/slab_common.c b/mm/slab_common.c
index af563e64e8aa..8facade42bdd 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -984,15 +984,7 @@ void kfree(const void *x)
 }
 EXPORT_SYMBOL(kfree);
 
-/**
- * __ksize -- Uninstrumented ksize.
- * @objp: pointer to the object
- *
- * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
- * safety checks as ksize() with KASAN instrumentation enabled.
- *
- * Return: size of the actual memory used by @objp in bytes
- */
+/* Uninstrumented ksize. Only called by KASAN. */
 size_t __ksize(const void *object)
 {
 	struct folio *folio;
@@ -1007,7 +999,6 @@ size_t __ksize(const void *object)
 
 	return slab_ksize(folio_slab(folio)->slab_cache);
 }
-EXPORT_SYMBOL(__ksize);
 
 #endif /* !CONFIG_SLOB */
 
 gfp_t kmalloc_fix_flags(gfp_t flags)

diff --git a/mm/slob.c b/mm/slob.c
index e893d182d471..adf794d58eb5 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -576,7 +576,6 @@ size_t __ksize(const void *block)
 	m = (unsigned int *)(block - align);
 	return SLOB_UNITS(*m) * SLOB_UNIT;
 }
-EXPORT_SYMBOL(__ksize);
 
 int __kmem_cache_create(struct kmem_cache *c, slab_flags_t flags)
 {
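
For context on why __ksize() can stay internal: the public ksize() is
the instrumented wrapper that performs the KASAN checks before
delegating to __ksize(). Roughly along these lines, as a simplified
sketch of the kernel logic of this era rather than a verbatim copy
(kasan_check_byte() and kasan_unpoison_range() are real KASAN hooks;
exact details vary by version):

	size_t ksize(const void *objp)
	{
		size_t size;

		/* Reject NULL and pointers KASAN already considers invalid. */
		if (WARN_ON_ONCE(!objp) || WARN_ON_ONCE(!kasan_check_byte(objp)))
			return 0;

		size = __ksize(objp);
		/* Callers may legitimately use the whole allocated area. */
		kasan_unpoison_range(objp, size);
		return size;
	}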
From patchwork Thu Apr 14 08:57:27 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12813174

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew Wilcox, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 23/23] mm/sl[au]b: check if large object is valid in __ksize()
Date: Thu, 14 Apr 2022 17:57:27 +0900
Message-Id: <20220414085727.643099-24-42.hyeyoo@gmail.com>
In-Reply-To: <20220414085727.643099-1-42.hyeyoo@gmail.com>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>

__ksize() returns the size of an object allocated from the slab
allocator. When an invalid object is passed to __ksize(), returning
zero prevents further memory corruption and lets the caller detect the
error.

If the address of a large object is not the beginning of its folio, or
the folio is too small to hold a large kmalloc object, the object must
be invalid. Return zero in those cases.

Suggested-by: Vlastimil Babka
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slab_common.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 8facade42bdd..a14f9990b159 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -994,8 +994,12 @@ size_t __ksize(const void *object)
 
 	folio = virt_to_folio(object);
 
-	if (unlikely(!folio_test_slab(folio)))
+	if (unlikely(!folio_test_slab(folio))) {
+		if (object != folio_address(folio) ||
+		    folio_size(folio) <= KMALLOC_MAX_CACHE_SIZE)
+			return 0;
 		return folio_size(folio);
+	}
 
 	return slab_ksize(folio_slab(folio)->slab_cache);
 }
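
To see what the new check buys callers, here is a hedged user-space
model of the large-object path. The struct folio stand-in and the
KMALLOC_MAX_CACHE_SIZE value are simulated for illustration (the real
constant is configuration-dependent); only the validity logic mirrors
the patch:

	#include <stdio.h>
	#include <stddef.h>

	#define PAGE_SIZE              4096UL
	#define KMALLOC_MAX_CACHE_SIZE 8192UL /* illustrative value */

	struct folio {
		void *addr;  /* start of the compound page */
		size_t size; /* total size of the folio */
	};

	/* Model of the patched large-object path in __ksize(). */
	static size_t model_ksize_large(const void *object, const struct folio *folio)
	{
		/*
		 * A valid large kmalloc object starts at the folio boundary and
		 * lives in a folio strictly bigger than the largest slab cache;
		 * anything else signals a bogus pointer, so report size 0.
		 */
		if (object != folio->addr || folio->size <= KMALLOC_MAX_CACHE_SIZE)
			return 0;
		return folio->size;
	}

	int main(void)
	{
		static char buf[4 * PAGE_SIZE];
		struct folio f = { buf, sizeof(buf) };

		printf("valid object:  %zu\n", model_ksize_large(buf, &f));     /* 16384 */
		printf("offset object: %zu\n", model_ksize_large(buf + 8, &f)); /* 0 */
		return 0;
	}

A caller that receives 0 can then warn and bail out instead of trusting
a size derived from a corrupted or foreign pointer.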