From patchwork Fri Mar 4 06:34:23 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Vlastimil Babka, Marco Elver, Matthew Wilcox,
 Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [PATCH v2 1/5] mm/slab: kmalloc: pass requests larger than order-1
 page to page allocator
Date: Fri, 4 Mar 2022 06:34:23 +0000
Message-Id: <20220304063427.372145-2-42.hyeyoo@gmail.com>
In-Reply-To: <20220304063427.372145-1-42.hyeyoo@gmail.com>
References: <20220304063427.372145-1-42.hyeyoo@gmail.com>

There is not much benefit to serving large objects from kmalloc().
Let's pass large requests to the page allocator, as SLUB does, for
better maintenance of common code.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/linux/slab.h | 35 ++++++++++++++++-------------------
 mm/slab.c            | 31 +++++++++++++++++++++++++++----
 mm/slab.h            | 19 +++++++++++++++++++
 mm/slub.c            | 19 -------------------
 4 files changed, 62 insertions(+), 42 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 37bde99b74af..e7b3330db4f3 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -224,29 +224,19 @@ void kmem_dump_obj(void *object);
  * Kmalloc array related definitions
  */

-#ifdef CONFIG_SLAB
 /*
- * The largest kmalloc size supported by the SLAB allocators is
- * 32 megabyte (2^25) or the maximum allocatable page order if that is
- * less than 32 MB.
- *
- * WARNING: Its not easy to increase this value since the allocators have
- * to do various tricks to work around compiler limitations in order to
- * ensure proper constant folding.
+ * SLAB and SLUB directly allocates requests fitting in to an order-1 page
+ * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
  */
-#define KMALLOC_SHIFT_HIGH	((MAX_ORDER + PAGE_SHIFT - 1) <= 25 ? \
-				(MAX_ORDER + PAGE_SHIFT - 1) : 25)
-#define KMALLOC_SHIFT_MAX	KMALLOC_SHIFT_HIGH
+#ifdef CONFIG_SLAB
+#define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)
+#define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
 #ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW	5
 #endif
 #endif

 #ifdef CONFIG_SLUB
-/*
- * SLUB directly allocates requests fitting in to an order-1 page
- * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
- */
 #define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)
 #define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
 #ifndef KMALLOC_SHIFT_LOW
@@ -564,15 +554,15 @@ static __always_inline __alloc_size(1) void *kmalloc_large(size_t size, gfp_t fl
  * Try really hard to succeed the allocation but fail
  * eventually.
  */
+#ifndef CONFIG_SLOB
 static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
 {
 	if (__builtin_constant_p(size)) {
-#ifndef CONFIG_SLOB
 		unsigned int index;
-#endif
+
 		if (size > KMALLOC_MAX_CACHE_SIZE)
 			return kmalloc_large(size, flags);
-#ifndef CONFIG_SLOB
+
 		index = kmalloc_index(size);

 		if (!index)
@@ -581,10 +571,17 @@ static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
 		return kmem_cache_alloc_trace(
 				kmalloc_caches[kmalloc_type(flags)][index],
 				flags, size);
-#endif
 	}
 	return __kmalloc(size, flags);
 }
+#else
+static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
+{
+	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
+		return kmalloc_large(size, flags);
+	return __kmalloc(size, flags);
+}
+#endif

 static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
 {
diff --git a/mm/slab.c b/mm/slab.c
index ddf5737c63d9..570af6dc3478 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3624,7 +3624,8 @@ __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
 	void *ret;

 	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return NULL;
+		return kmalloc_large(size, flags);
+
 	cachep = kmalloc_slab(size, flags);
 	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
 		return cachep;
@@ -3685,7 +3686,8 @@ static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
 	void *ret;

 	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return NULL;
+		return kmalloc_large(size, flags);
+
 	cachep = kmalloc_slab(size, flags);
 	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
 		return cachep;
@@ -3739,14 +3741,21 @@ void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
 {
 	struct kmem_cache *s;
 	size_t i;
+	struct folio *folio;

 	local_irq_disable();
 	for (i = 0; i < size; i++) {
 		void *objp = p[i];

-		if (!orig_s) /* called via kfree_bulk */
+		if (!orig_s) {
+			/* called via kfree_bulk */
+			folio = virt_to_folio(objp);
+			if (unlikely(!folio_test_slab(folio))) {
+				free_large_kmalloc(folio, objp);
+				continue;
+			}
 			s = virt_to_cache(objp);
-		else
+		} else
 			s = cache_from_obj(orig_s, objp);
 		if (!s)
 			continue;
@@ -3776,11 +3785,20 @@ void kfree(const void *objp)
 {
 	struct kmem_cache *c;
 	unsigned long flags;
+	struct folio *folio;
+	void *object = (void *) objp;

 	trace_kfree(_RET_IP_, objp);

 	if (unlikely(ZERO_OR_NULL_PTR(objp)))
 		return;
+
+	folio = virt_to_folio(objp);
+	if (unlikely(!folio_test_slab(folio))) {
+		free_large_kmalloc(folio, object);
+		return;
+	}
+
 	local_irq_save(flags);
 	kfree_debugcheck(objp);
 	c = virt_to_cache(objp);
@@ -4211,12 +4229,17 @@ void __check_heap_object(const void *ptr, unsigned long n,
 size_t __ksize(const void *objp)
 {
 	struct kmem_cache *c;
+	struct folio *folio;
 	size_t size;

 	BUG_ON(!objp);
 	if (unlikely(objp == ZERO_SIZE_PTR))
 		return 0;

+	folio = virt_to_folio(objp);
+	if (!folio_test_slab(folio))
+		return folio_size(folio);
+
 	c = virt_to_cache(objp);
 	size = c ? c->object_size : 0;
diff --git a/mm/slab.h b/mm/slab.h
index c7f2abc2b154..31e98beb47a3 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -664,6 +664,25 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
 	print_tracking(cachep, x);
 	return cachep;
 }
+
+static __always_inline void kfree_hook(void *x)
+{
+	kmemleak_free(x);
+	kasan_kfree_large(x);
+}
+
+static inline void free_large_kmalloc(struct folio *folio, void *object)
+{
+	unsigned int order = folio_order(folio);
+
+	if (WARN_ON_ONCE(order == 0))
+		pr_warn_once("object pointer: 0x%p\n", object);
+
+	kfree_hook(object);
+	mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
+			      -(PAGE_SIZE << order));
+	__free_pages(folio_page(folio, 0), order);
+}
 #endif /* CONFIG_SLOB */

 static inline size_t slab_ksize(const struct kmem_cache *s)
diff --git a/mm/slub.c b/mm/slub.c
index 261474092e43..04fd084f4709 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1686,12 +1686,6 @@ static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 	return ptr;
 }

-static __always_inline void kfree_hook(void *x)
-{
-	kmemleak_free(x);
-	kasan_kfree_large(x);
-}
-
 static __always_inline bool slab_free_hook(struct kmem_cache *s,
 						void *x, bool init)
 {
@@ -3535,19 +3529,6 @@ struct detached_freelist {
 	struct kmem_cache *s;
 };

-static inline void free_large_kmalloc(struct folio *folio, void *object)
-{
-	unsigned int order = folio_order(folio);
-
-	if (WARN_ON_ONCE(order == 0))
-		pr_warn_once("object pointer: 0x%p\n", object);
-
-	kfree_hook(object);
-	mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
-			      -(PAGE_SIZE << order));
-	__free_pages(folio_page(folio, 0), order);
-}
-
 /*
  * This function progressively scans the array with free objects (with
  * a limited look ahead) and extract objects belonging to the same
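To illustrate the boundary this patch establishes, below is a minimal
userspace sketch of the new kmalloc() dispatch. It assumes PAGE_SHIFT = 12
(4 KiB pages); malloc() stands in for both the kmalloc caches and the page
allocator, so only the size check itself mirrors the kernel code.

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SHIFT		12			/* assumed: 4 KiB pages */
#define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)	/* order-1 page */
#define KMALLOC_MAX_CACHE_SIZE	(1UL << KMALLOC_SHIFT_HIGH)	/* 8192 */

/* stand-in for the kernel's kmalloc_large(): order >= 2 requests */
static void *kmalloc_large(size_t size)
{
	printf("%5zu bytes -> page allocator\n", size);
	return malloc(size);
}

static void *kmalloc(size_t size)
{
	if (size > KMALLOC_MAX_CACHE_SIZE)
		return kmalloc_large(size);
	printf("%5zu bytes -> kmalloc cache\n", size);
	return malloc(size);
}

int main(void)
{
	free(kmalloc(4096));	/* fits an order-0 page: kmalloc cache */
	free(kmalloc(8192));	/* exactly order-1: still a kmalloc cache */
	free(kmalloc(16384));	/* above order-1: page allocator */
	return 0;
}

With the patch applied, SLAB and SLUB both behave this way, which is what
lets the later patches in the series share the fallback and free paths.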
From patchwork Fri Mar 4 06:34:24 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Vlastimil Babka, Marco Elver, Matthew Wilcox,
 Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [PATCH v2 2/5] mm/sl[au]b: unify __ksize()
Date: Fri, 4 Mar 2022 06:34:24 +0000
Message-Id: <20220304063427.372145-3-42.hyeyoo@gmail.com>
In-Reply-To: <20220304063427.372145-1-42.hyeyoo@gmail.com>
References: <20220304063427.372145-1-42.hyeyoo@gmail.com>

Now that SLAB passes large requests to the page allocator like SLUB,
unify __ksize(). Only SLOB needs to implement its own version of
__ksize(), because it stores the size in the object header for kmalloc
objects.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 mm/slab.c        | 30 ------------------------------
 mm/slab_common.c | 27 +++++++++++++++++++++++++++
 mm/slub.c        | 16 ----------------
 3 files changed, 27 insertions(+), 46 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 570af6dc3478..3ddf2181d8e4 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -4216,33 +4216,3 @@ void __check_heap_object(const void *ptr, unsigned long n,
 	usercopy_abort("SLAB object", cachep->name, to_user, offset, n);
 }
 #endif /* CONFIG_HARDENED_USERCOPY */
-
-/**
- * __ksize -- Uninstrumented ksize.
- * @objp: pointer to the object
- *
- * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
- * safety checks as ksize() with KASAN instrumentation enabled.
- *
- * Return: size of the actual memory used by @objp in bytes
- */
-size_t __ksize(const void *objp)
-{
-	struct kmem_cache *c;
-	struct folio *folio;
-	size_t size;
-
-	BUG_ON(!objp);
-	if (unlikely(objp == ZERO_SIZE_PTR))
-		return 0;
-
-	folio = virt_to_folio(objp);
-	if (!folio_test_slab(folio))
-		return folio_size(folio);
-
-	c = virt_to_cache(objp);
-	size = c ? c->object_size : 0;
-
-	return size;
-}
-EXPORT_SYMBOL(__ksize);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 23f2ab0713b7..1d2f92e871d2 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1245,6 +1245,33 @@ void kfree_sensitive(const void *p)
 }
 EXPORT_SYMBOL(kfree_sensitive);

+#ifndef CONFIG_SLOB
+/**
+ * __ksize -- Uninstrumented ksize.
+ * @objp: pointer to the object
+ *
+ * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
+ * safety checks as ksize() with KASAN instrumentation enabled.
+ *
+ * Return: size of the actual memory used by @objp in bytes
+ */
+size_t __ksize(const void *object)
+{
+	struct folio *folio;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return 0;
+
+	folio = virt_to_folio(object);
+
+	if (unlikely(!folio_test_slab(folio)))
+		return folio_size(folio);
+
+	return slab_ksize(folio_slab(folio)->slab_cache);
+}
+EXPORT_SYMBOL(__ksize);
+#endif
+
 /**
  * ksize - get the actual amount of memory allocated for a given object
  * @objp: Pointer to the object
diff --git a/mm/slub.c b/mm/slub.c
index 04fd084f4709..6f0ebadd8f30 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4507,22 +4507,6 @@ void __check_heap_object(const void *ptr, unsigned long n,
 }
 #endif /* CONFIG_HARDENED_USERCOPY */

-size_t __ksize(const void *object)
-{
-	struct folio *folio;
-
-	if (unlikely(object == ZERO_SIZE_PTR))
-		return 0;
-
-	folio = virt_to_folio(object);
-
-	if (unlikely(!folio_test_slab(folio)))
-		return folio_size(folio);
-
-	return slab_ksize(folio_slab(folio)->slab_cache);
-}
-EXPORT_SYMBOL(__ksize);
-
 void kfree(const void *x)
 {
 	struct folio *folio;
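The unified __ksize() reduces to a two-way dispatch on folio_test_slab().
A userspace model of that dispatch follows; struct folio and the sizes used
here are illustrative stand-ins, not the kernel's real types and values.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct folio {
	bool is_slab;		/* models folio_test_slab() */
	size_t folio_size;	/* models folio_size(): PAGE_SIZE << order */
	size_t object_size;	/* models slab_ksize(slab_cache) */
};

static size_t __ksize(const struct folio *folio)
{
	if (!folio->is_slab)
		return folio->folio_size;	/* large kmalloc: whole folio */
	return folio->object_size;		/* slab object: cache's size */
}

int main(void)
{
	struct folio slab  = { .is_slab = true,  .object_size = 192 };
	struct folio large = { .is_slab = false, .folio_size = 16384 };

	printf("slab object:   %zu\n", __ksize(&slab));		/* 192 */
	printf("large kmalloc: %zu\n", __ksize(&large));	/* 16384 */
	return 0;
}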
From patchwork Fri Mar 4 06:34:25 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Vlastimil Babka, Marco Elver, Matthew Wilcox,
 Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [PATCH v2 3/5] mm/sl[auo]b: move definition of __ksize() to mm/slab.h
Date: Fri, 4 Mar 2022 06:34:25 +0000
Message-Id: <20220304063427.372145-4-42.hyeyoo@gmail.com>
In-Reply-To: <20220304063427.372145-1-42.hyeyoo@gmail.com>
References: <20220304063427.372145-1-42.hyeyoo@gmail.com>

__ksize() is only called by KASAN. Remove the export symbol and move
the definition to mm/slab.h, as we don't want to grow its callers.

[ willy@infradead.org: Move definition to mm/slab.h and reduce comments ]

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 1 -
 mm/slab.h            | 2 ++
 mm/slab_common.c     | 9 +--------
 mm/slob.c            | 1 -
 4 files changed, 3 insertions(+), 10 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index e7b3330db4f3..d2b896553315 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -182,7 +182,6 @@ int kmem_cache_shrink(struct kmem_cache *s);
 void * __must_check krealloc(const void *objp, size_t new_size, gfp_t flags) __alloc_size(2);
 void kfree(const void *objp);
 void kfree_sensitive(const void *objp);
-size_t __ksize(const void *objp);
 size_t ksize(const void *objp);
 #ifdef CONFIG_PRINTK
 bool kmem_valid_obj(void *object);
diff --git a/mm/slab.h b/mm/slab.h
index 31e98beb47a3..79b319d58504 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -685,6 +685,8 @@ static inline void free_large_kmalloc(struct folio *folio, void *object)
 }
 #endif /* CONFIG_SLOB */

+size_t __ksize(const void *objp);
+
 static inline size_t slab_ksize(const struct kmem_cache *s)
 {
 #ifndef CONFIG_SLUB
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 1d2f92e871d2..b126fc7247b9 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1247,13 +1247,7 @@ EXPORT_SYMBOL(kfree_sensitive);

 #ifndef CONFIG_SLOB
 /**
- * __ksize -- Uninstrumented ksize.
- * @objp: pointer to the object
- *
- * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
- * safety checks as ksize() with KASAN instrumentation enabled.
- *
- * Return: size of the actual memory used by @objp in bytes
+ * __ksize -- Uninstrumented ksize. Only called by KASAN.
  */
 size_t __ksize(const void *object)
 {
@@ -1269,7 +1263,6 @@ size_t __ksize(const void *object)

 	return slab_ksize(folio_slab(folio)->slab_cache);
 }
-EXPORT_SYMBOL(__ksize);
 #endif

 /**
diff --git a/mm/slob.c b/mm/slob.c
index 60c5842215f1..d8af6c54f133 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -588,7 +588,6 @@ size_t __ksize(const void *block)
 	m = (unsigned int *)(block - align);
 	return SLOB_UNITS(*m) * SLOB_UNIT;
 }
-EXPORT_SYMBOL(__ksize);

 int __kmem_cache_create(struct kmem_cache *c, slab_flags_t flags)
 {

From patchwork Fri Mar 4 06:34:26 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Vlastimil Babka, Marco Elver, Matthew Wilcox,
 Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [PATCH v2 4/5] mm/slub: limit number of node partial slabs only in
 cache creation
Date: Fri, 4 Mar 2022 06:34:26 +0000
Message-Id: <20220304063427.372145-5-42.hyeyoo@gmail.com>
In-Reply-To: <20220304063427.372145-1-42.hyeyoo@gmail.com>
References: <20220304063427.372145-1-42.hyeyoo@gmail.com>

SLUB sets the minimum number of partial slabs per node (min_partial)
using set_min_partial(). SLUB keeps at least min_partial slabs around
even if they're empty, to avoid excessive use of the page allocator.
set_min_partial() clamps the value of min_partial between MIN_PARTIAL
and MAX_PARTIAL. As set_min_partial() can also be called from
min_partial_store(), apply the clamp only in kmem_cache_open() so that
min_partial can later be changed to whatever value a user wants.

[ rientjes@google.com: Fold set_min_partial() into its callers ]

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 mm/slub.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 6f0ebadd8f30..f9ae983a3dc6 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3981,15 +3981,6 @@ static int init_kmem_cache_nodes(struct kmem_cache *s)
 	return 1;
 }

-static void set_min_partial(struct kmem_cache *s, unsigned long min)
-{
-	if (min < MIN_PARTIAL)
-		min = MIN_PARTIAL;
-	else if (min > MAX_PARTIAL)
-		min = MAX_PARTIAL;
-	s->min_partial = min;
-}
-
 static void set_cpu_partial(struct kmem_cache *s)
 {
 #ifdef CONFIG_SLUB_CPU_PARTIAL
@@ -4196,7 +4187,8 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
 	 * The larger the object size is, the more slabs we want on the partial
 	 * list to avoid pounding the page allocator excessively.
 	 */
-	set_min_partial(s, ilog2(s->size) / 2);
+	s->min_partial = min_t(unsigned long, MAX_PARTIAL, ilog2(s->size) / 2);
+	s->min_partial = max_t(unsigned long, MIN_PARTIAL, s->min_partial);

 	set_cpu_partial(s);

@@ -5361,7 +5353,7 @@ static ssize_t min_partial_store(struct kmem_cache *s, const char *buf,
 	if (err)
 		return err;

-	set_min_partial(s, min);
+	s->min_partial = min;

 	return length;
 }
 SLAB_ATTR(min_partial);
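The open-coded min_t()/max_t() pair in kmem_cache_open() is the same clamp
the removed set_min_partial() performed. A standalone sketch, assuming the
MIN_PARTIAL/MAX_PARTIAL values from mm/slub.c of this period (5 and 10) and
an open-coded ilog2():

#include <stdio.h>

#define MIN_PARTIAL 5	/* assumed: values as in mm/slub.c at the time */
#define MAX_PARTIAL 10

/* ilog2(size) / 2, clamped to [MIN_PARTIAL, MAX_PARTIAL] */
static unsigned long initial_min_partial(unsigned long size)
{
	unsigned long min = 0;

	while (size >>= 1)	/* open-coded ilog2() */
		min++;
	min /= 2;

	if (min < MIN_PARTIAL)
		min = MIN_PARTIAL;
	else if (min > MAX_PARTIAL)
		min = MAX_PARTIAL;
	return min;
}

int main(void)
{
	printf("%lu\n", initial_min_partial(64));	 /* 6/2 = 3  -> 5 */
	printf("%lu\n", initial_min_partial(1UL << 22)); /* 22/2 = 11 -> 10 */
	return 0;
}

After this patch, only cache creation applies the clamp; a write to the
min_partial sysfs attribute stores the user's value unmodified.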
From patchwork Fri Mar 4 06:34:27 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Vlastimil Babka, Marco Elver, Matthew Wilcox,
 Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [PATCH v2 5/5] mm/slub: refactor deactivate_slab()
Date: Fri, 4 Mar 2022 06:34:27 +0000
Message-Id: <20220304063427.372145-6-42.hyeyoo@gmail.com>
In-Reply-To: <20220304063427.372145-1-42.hyeyoo@gmail.com>
References: <20220304063427.372145-1-42.hyeyoo@gmail.com>

Simplify deactivate_slab() by unlocking n->list_lock and retrying
cmpxchg_double() when cmpxchg_double() fails, and performing
add_{partial,full} only when it succeeds. Releasing and retaking
n->list_lock here is not harmful, as SLUB avoids deactivating slabs
as much as possible.

[ vbabka@suse.cz: perform add_{partial,full} when cmpxchg_double() succeeds. ]

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slub.c | 81 ++++++++++++++++++++++---------------------------------
 1 file changed, 32 insertions(+), 49 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index f9ae983a3dc6..c1a693ec5874 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2344,8 +2344,8 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 {
 	enum slab_modes { M_NONE, M_PARTIAL, M_FULL, M_FREE };
 	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
-	int lock = 0, free_delta = 0;
-	enum slab_modes l = M_NONE, m = M_NONE;
+	int free_delta = 0;
+	enum slab_modes mode = M_NONE;
 	void *nextfree, *freelist_iter, *freelist_tail;
 	int tail = DEACTIVATE_TO_HEAD;
 	unsigned long flags = 0;
@@ -2387,14 +2387,10 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 	 * Ensure that the slab is unfrozen while the list presence
 	 * reflects the actual number of objects during unfreeze.
 	 *
-	 * We setup the list membership and then perform a cmpxchg
-	 * with the count. If there is a mismatch then the slab
-	 * is not unfrozen but the slab is on the wrong list.
-	 *
-	 * Then we restart the process which may have to remove
-	 * the slab from the list that we just put it on again
-	 * because the number of objects in the slab may have
-	 * changed.
+	 * We first perform cmpxchg holding lock and insert to list
+	 * when it succeed. If there is mismatch then slub is not
+	 * unfrozen and number of objects in the slab may have changed.
+	 * Then release lock and retry cmpxchg again.
 	 */
 redo:

@@ -2414,57 +2410,44 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 	new.frozen = 0;

 	if (!new.inuse && n->nr_partial >= s->min_partial)
-		m = M_FREE;
+		mode = M_FREE;
 	else if (new.freelist) {
-		m = M_PARTIAL;
-		if (!lock) {
-			lock = 1;
-			/*
-			 * Taking the spinlock removes the possibility that
-			 * acquire_slab() will see a slab that is frozen
-			 */
-			spin_lock_irqsave(&n->list_lock, flags);
-		}
-	} else {
-		m = M_FULL;
-		if (kmem_cache_debug_flags(s, SLAB_STORE_USER) && !lock) {
-			lock = 1;
-			/*
-			 * This also ensures that the scanning of full
-			 * slabs from diagnostic functions will not see
-			 * any frozen slabs.
-			 */
-			spin_lock_irqsave(&n->list_lock, flags);
-		}
+		mode = M_PARTIAL;
+		/*
+		 * Taking the spinlock removes the possibility that
+		 * acquire_slab() will see a slab that is frozen
+		 */
+		spin_lock_irqsave(&n->list_lock, flags);
+	} else if (kmem_cache_debug_flags(s, SLAB_STORE_USER)) {
+		mode = M_FULL;
+		/*
+		 * This also ensures that the scanning of full
+		 * slabs from diagnostic functions will not see
+		 * any frozen slabs.
+		 */
+		spin_lock_irqsave(&n->list_lock, flags);
 	}

-	if (l != m) {
-		if (l == M_PARTIAL)
-			remove_partial(n, slab);
-		else if (l == M_FULL)
-			remove_full(s, n, slab);
-
-		if (m == M_PARTIAL)
-			add_partial(n, slab, tail);
-		else if (m == M_FULL)
-			add_full(s, n, slab);
-	}
-
-	l = m;
 	if (!cmpxchg_double_slab(s, slab, old.freelist, old.counters,
 				new.freelist, new.counters,
-				"unfreezing slab"))
+				"unfreezing slab")) {
+		if (mode == M_PARTIAL || mode == M_FULL)
+			spin_unlock_irqrestore(&n->list_lock, flags);
 		goto redo;
+	}

-	if (lock)
-		spin_unlock_irqrestore(&n->list_lock, flags);
-
-	if (m == M_PARTIAL)
+	if (mode == M_PARTIAL) {
+		add_partial(n, slab, tail);
+		spin_unlock_irqrestore(&n->list_lock, flags);
 		stat(s, tail);
-	else if (m == M_FULL)
+	} else if (mode == M_FULL) {
+		add_full(s, n, slab);
+		spin_unlock_irqrestore(&n->list_lock, flags);
 		stat(s, DEACTIVATE_FULL);
-	else if (m == M_FREE) {
+	} else if (mode == M_FREE) {
 		stat(s, DEACTIVATE_EMPTY);
 		discard_slab(s, slab);
 		stat(s, FREE_SLAB);
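The refactored flow is: take n->list_lock if the slab will go on a list,
try the cmpxchg, and touch the list only after it succeeds; on failure drop
the lock and retry from redo. A userspace sketch of that pattern, with a
pthread mutex and a C11 compare-exchange standing in for the kernel's
spinlock and cmpxchg_double_slab(); it runs single-threaded, so the
compare-exchange succeeds on the first try, whereas in the kernel it can
fail because of concurrent frees.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static _Atomic unsigned long slab_counters;	/* models slab->counters */

/* stand-in for add_partial(): called only after the cmpxchg succeeded */
static void add_partial(void)
{
	puts("slab added to partial list");
}

static void deactivate_slab(void)
{
	unsigned long old, new;

redo:
	old = atomic_load(&slab_counters);
	new = old & ~1UL;			/* models new.frozen = 0 */

	pthread_mutex_lock(&list_lock);		/* the M_PARTIAL case */
	if (!atomic_compare_exchange_strong(&slab_counters, &old, new)) {
		/* counters changed under us: drop the lock and retry */
		pthread_mutex_unlock(&list_lock);
		goto redo;
	}
	/* cmpxchg succeeded: safe to put the slab on the list now */
	add_partial();
	pthread_mutex_unlock(&list_lock);
}

int main(void)
{
	atomic_store(&slab_counters, 0x3UL);	/* inuse count + frozen bit */
	deactivate_slab();
	return 0;
}

Compared to the old code, no list manipulation ever has to be undone after
a failed cmpxchg, which is what removes the l/m bookkeeping.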