From patchwork Wed Oct 16 12:24:18 2024
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13838290
From: Mike Rapoport <rppt@kernel.org>
To: Andrew Morton, Luis Chamberlain
Cc: Andreas Larsson, Andy Lutomirski, Ard Biesheuvel, Arnd Bergmann,
 Borislav Petkov, Brian Cain, Catalin Marinas, Christoph Hellwig,
 Christophe Leroy, Dave Hansen, Dinh Nguyen, Geert Uytterhoeven, Guo Ren,
 Helge Deller, Huacai Chen, Ingo Molnar, Johannes Berg,
 John Paul Adrian Glaubitz, Kent Overstreet, "Liam R. Howlett",
 Mark Rutland, Masami Hiramatsu, Matt Turner, Max Filippov,
 Michael Ellerman, Michal Simek, Mike Rapoport, Oleg Nesterov,
 Palmer Dabbelt, Peter Zijlstra, Richard Weinberger, Russell King,
 Song Liu, Stafford Horne, Steven Rostedt, Suren Baghdasaryan,
 Thomas Bogendoerfer, Thomas Gleixner, Uladzislau Rezki, Vineet Gupta,
 Will Deacon, bpf@vger.kernel.org, linux-alpha@vger.kernel.org,
 linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org, linux-mm@kvack.org,
 linux-modules@vger.kernel.org, linux-openrisc@vger.kernel.org,
 linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org,
 linux-sh@vger.kernel.org, linux-snps-arc@lists.infradead.org,
 linux-trace-kernel@vger.kernel.org, linux-um@lists.infradead.org,
 linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev,
 sparclinux@vger.kernel.org, x86@kernel.org, Christoph Hellwig
Subject: [PATCH v6 2/8] mm: vmalloc: don't account for number of nodes for
 HUGE_VMAP allocations
Date: Wed, 16 Oct 2024 15:24:18 +0300
Message-ID: <20241016122424.1655560-3-rppt@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241016122424.1655560-1-rppt@kernel.org>
References: <20241016122424.1655560-1-rppt@kernel.org>
MIME-Version: 1.0
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>

vmalloc allocations with VM_ALLOW_HUGE_VMAP that do not explicitly specify
a node ID will use huge pages only if size_per_node is larger than a huge
page. Still, the actual allocated memory is not distributed between nodes,
and there is no advantage in such an approach.
On the contrary, BPF allocates SZ_2M * num_possible_nodes() for each new
bpf_prog_pack, while it could do with a single huge page per pack.

Don't account for the number of nodes for VM_ALLOW_HUGE_VMAP with
NUMA_NO_NODE and use huge pages whenever the requested allocation size is
larger than a huge page.

Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Reviewed-by: Christoph Hellwig
Reviewed-by: Uladzislau Rezki (Sony)
Reviewed-by: Luis Chamberlain
---
 mm/vmalloc.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 634162271c00..86b2344d7461 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3763,8 +3763,6 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
 	}
 
 	if (vmap_allow_huge && (vm_flags & VM_ALLOW_HUGE_VMAP)) {
-		unsigned long size_per_node;
-
 		/*
 		 * Try huge pages. Only try for PAGE_KERNEL allocations,
 		 * others like modules don't yet expect huge pages in
@@ -3772,13 +3770,10 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
 		 * supporting them.
 		 */
 
-		size_per_node = size;
-		if (node == NUMA_NO_NODE)
-			size_per_node /= num_online_nodes();
-		if (arch_vmap_pmd_supported(prot) && size_per_node >= PMD_SIZE)
+		if (arch_vmap_pmd_supported(prot) && size >= PMD_SIZE)
 			shift = PMD_SHIFT;
 		else
-			shift = arch_vmap_pte_supported_shift(size_per_node);
+			shift = arch_vmap_pte_supported_shift(size);
 
 		align = max(real_align, 1UL << shift);
 		size = ALIGN(real_size, 1UL << shift);