From patchwork Tue Oct 1 19:08:06 2024
From: Christoph Lameter via B4 Relay
Date: Tue, 01 Oct 2024 12:08:06 -0700
Subject: [PATCH v3] SLUB: Add support for per object memory policies
Message-Id: <20241001-strict_numa-v3-1-ee31405056ee@gentwo.org>
To: Vlastimil Babka, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
 Yang Shi, Christoph Lameter
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, Huang Shijie,
 "Christoph Lameter (Ampere)"
Reply-To: cl@gentwo.org
From: Christoph Lameter <cl@gentwo.org>

The old SLAB allocator used to support memory policies on a
per-allocation basis. In SLUB, memory policies are applied on a per
page frame / folio basis. Doing so avoids having to check memory
policies in critical code paths for kmalloc and friends.

This works generally well on Intel/AMD/PowerPC because the
interconnect technology is mature and can minimize latencies through
intelligent caching, even if a small object is not placed optimally.

However, on ARM we see an emergence of new NUMA interconnect
technology based more on embedded devices. Caching of remote content
can currently be ineffective using the standard building blocks / mesh
available on those platforms. Such architectures benefit if each slab
object is individually placed according to memory policies and other
restrictions.

This patch adds another kernel parameter, slab_strict_numa. If it is
set, a static branch is activated that causes the hot paths of the
allocator to evaluate the current memory allocation policy. Each
object is then properly placed, at the price of extra processing, and
SLUB no longer defers to the page allocator to apply memory policies
at the folio level.
This patch improves performance of memcached running on an Ampere
Altra 2P system (ARM Neoverse N1 processor) by 3.6% due to accurate
placement of small kernel objects.

Tested-by: Huang Shijie
Signed-off-by: Christoph Lameter (Ampere)
---
Changes in v3:
- Make the static key a static in slub.c
- Use pr_warn / pr_info instead of printk
- Link to v2: https://lore.kernel.org/r/20240906-strict_numa-v2-1-f104e6de6d1e@gentwo.org

Changes in v2:
- Fix various issues
- Testing
---
 mm/slub.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)
---
base-commit: e32cde8d2bd7d251a8f9b434143977ddf13dcec6
change-id: 20240819-strict_numa-fc59b33123a2

Best regards,

diff --git a/mm/slub.c b/mm/slub.c
index 21f71cb6cc06..7ae94f79740d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -218,6 +218,10 @@ DEFINE_STATIC_KEY_FALSE(slub_debug_enabled);
 #endif
 #endif		/* CONFIG_SLUB_DEBUG */
 
+#ifdef CONFIG_NUMA
+static DEFINE_STATIC_KEY_FALSE(strict_numa);
+#endif
+
 /* Structure holding parameters for get_partial() call chain */
 struct partial_context {
 	gfp_t flags;
@@ -3957,6 +3961,28 @@ static __always_inline void *__slab_alloc_node(struct kmem_cache *s,
 	object = c->freelist;
 	slab = c->slab;
 
+#ifdef CONFIG_NUMA
+	if (static_branch_unlikely(&strict_numa) &&
+			node == NUMA_NO_NODE) {
+
+		struct mempolicy *mpol = current->mempolicy;
+
+		if (mpol) {
+			/*
+			 * Special BIND rule support. If existing slab
+			 * is in permitted set then do not redirect
+			 * to a particular node.
+			 * Otherwise we apply the memory policy to get
+			 * the node we need to allocate on.
+			 */
+			if (mpol->mode != MPOL_BIND || !slab ||
+				!node_isset(slab_nid(slab), mpol->nodes))
+
+				node = mempolicy_slab_node();
+		}
+	}
+#endif
+
 	if (!USE_LOCKLESS_FAST_PATH() ||
 	    unlikely(!object || !slab || !node_match(slab, node))) {
 		object = __slab_alloc(s, gfpflags, node, addr, c, orig_size);
@@ -5601,6 +5627,22 @@ static int __init setup_slub_min_objects(char *str)
 __setup("slab_min_objects=", setup_slub_min_objects);
 __setup_param("slub_min_objects=", slub_min_objects, setup_slub_min_objects, 0);
 
+#ifdef CONFIG_NUMA
+static int __init setup_slab_strict_numa(char *str)
+{
+	if (nr_node_ids > 1) {
+		static_branch_enable(&strict_numa);
+		pr_info("SLUB: Strict NUMA enabled.\n");
+	} else
+		pr_warn("slab_strict_numa parameter set on non NUMA system.\n");
+
+	return 1;
+}
+
+__setup("slab_strict_numa", setup_slab_strict_numa);
+#endif
+
+
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
  * Rejects incorrectly sized objects and objects that are to be copied