From patchwork Fri Mar 4 06:34:26 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12768534
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Vlastimil Babka, Marco Elver, Matthew Wilcox,
 Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [PATCH v2 4/5] mm/slub: limit number of node partial slabs only in cache creation
Date: Fri, 4 Mar 2022 06:34:26 +0000
Message-Id: <20220304063427.372145-5-42.hyeyoo@gmail.com>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20220304063427.372145-1-42.hyeyoo@gmail.com>
References: <20220304063427.372145-1-42.hyeyoo@gmail.com>
MIME-Version: 1.0

SLUB sets the minimum number of partial slabs kept per node (min_partial)
using set_min_partial(). SLUB holds at least min_partial slabs on a node's
partial list, even if they are empty, to avoid excessive use of the page
allocator.

set_min_partial() clamps the value of min_partial between MIN_PARTIAL and
MAX_PARTIAL.

As set_min_partial() can also be called from min_partial_store(), apply the
clamping only in kmem_cache_open() so that min_partial can later be changed
to the value a user wants.

[ rientjes@google.com: Fold set_min_partial() into its callers ]

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 mm/slub.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 6f0ebadd8f30..f9ae983a3dc6 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3981,15 +3981,6 @@ static int init_kmem_cache_nodes(struct kmem_cache *s)
 	return 1;
 }
 
-static void set_min_partial(struct kmem_cache *s, unsigned long min)
-{
-	if (min < MIN_PARTIAL)
-		min = MIN_PARTIAL;
-	else if (min > MAX_PARTIAL)
-		min = MAX_PARTIAL;
-	s->min_partial = min;
-}
-
 static void set_cpu_partial(struct kmem_cache *s)
 {
 #ifdef CONFIG_SLUB_CPU_PARTIAL
@@ -4196,7 +4187,8 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
 	 * The larger the object size is, the more slabs we want on the partial
 	 * list to avoid pounding the page allocator excessively.
 	 */
-	set_min_partial(s, ilog2(s->size) / 2);
+	s->min_partial = min_t(unsigned long, MAX_PARTIAL, ilog2(s->size) / 2);
+	s->min_partial = max_t(unsigned long, MIN_PARTIAL, s->min_partial);
 
 	set_cpu_partial(s);
 
@@ -5361,7 +5353,7 @@ static ssize_t min_partial_store(struct kmem_cache *s, const char *buf,
 	if (err)
 		return err;
 
-	set_min_partial(s, min);
+	s->min_partial = min;
 	return length;
 }
 SLAB_ATTR(min_partial);
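
Not part of the patch: below is a minimal userspace sketch of the clamping
that kmem_cache_open() now open-codes with the min_t()/max_t() pair above.
The MIN_PARTIAL = 5 and MAX_PARTIAL = 10 values and the ilog2_ul() helper
are assumptions for illustration only; the kernel uses its own ilog2(),
min_t() and max_t().

/*
 * Userspace sketch only -- not kernel code.  MIN_PARTIAL and MAX_PARTIAL
 * are assumed to match their definitions in mm/slub.c.
 */
#include <stdio.h>

#define MIN_PARTIAL 5
#define MAX_PARTIAL 10

/* integer log2 of a non-zero value, standing in for the kernel's ilog2() */
static unsigned long ilog2_ul(unsigned long x)
{
	unsigned long log = 0;

	while (x >>= 1)
		log++;
	return log;
}

/* same effect as the min_t()/max_t() pair added to kmem_cache_open() */
static unsigned long clamped_min_partial(unsigned long size)
{
	unsigned long min = ilog2_ul(size) / 2;

	if (min > MAX_PARTIAL)
		min = MAX_PARTIAL;
	if (min < MIN_PARTIAL)
		min = MIN_PARTIAL;
	return min;
}

int main(void)
{
	unsigned long sizes[] = { 32, 256, 4096, 1UL << 22 };
	unsigned int i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("object size %8lu -> min_partial %lu\n",
		       sizes[i], clamped_min_partial(sizes[i]));
	return 0;
}

With these assumed limits, small objects end up pinned at MIN_PARTIAL and
very large objects are capped at MAX_PARTIAL, which is the behaviour the
removed set_min_partial() enforced at cache creation; after this patch,
min_partial_store() no longer applies that clamp.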