From patchwork Wed Oct 9 18:08:10 2024
X-Patchwork-Submitter: Mike Rapoport <rppt@kernel.org>
X-Patchwork-Id: 13828973
From: Mike Rapoport <rppt@kernel.org>
To: Andrew Morton
Cc: Andreas Larsson, Andy Lutomirski, Ard Biesheuvel, Arnd Bergmann,
	Borislav Petkov, Brian Cain, Catalin Marinas, Christoph Hellwig,
	Christophe Leroy, Dave Hansen, Dinh Nguyen, Geert Uytterhoeven,
	Guo Ren, Helge Deller, Huacai Chen, Ingo Molnar, Johannes Berg,
	John Paul Adrian Glaubitz, Kent Overstreet, "Liam R. Howlett",
	Luis Chamberlain, Mark Rutland, Masami Hiramatsu, Matt Turner,
	Max Filippov, Michael Ellerman, Michal Simek, Mike Rapoport,
	Oleg Nesterov, Palmer Dabbelt, Peter Zijlstra, Richard Weinberger,
	Russell King, Song Liu, Stafford Horne, Steven Rostedt,
	Thomas Bogendoerfer, Thomas Gleixner, Uladzislau Rezki,
	Vineet Gupta, Will Deacon, bpf@vger.kernel.org,
	linux-alpha@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
	linux-mm@kvack.org, linux-modules@vger.kernel.org,
	linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org,
	linux-riscv@lists.infradead.org, linux-sh@vger.kernel.org,
	linux-snps-arc@lists.infradead.org, linux-trace-kernel@vger.kernel.org,
	linux-um@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
	loongarch@lists.linux.dev, sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH v5 2/8] mm: vmalloc: don't account for number of nodes for
 HUGE_VMAP allocations
Date: Wed, 9 Oct 2024 21:08:10 +0300
Message-ID: <20241009180816.83591-3-rppt@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241009180816.83591-1-rppt@kernel.org>
References: <20241009180816.83591-1-rppt@kernel.org>
MIME-Version: 1.0
From: "Mike Rapoport (Microsoft)"

vmalloc allocations with VM_ALLOW_HUGE_VMAP that do not explicitly
specify a node ID will use huge pages only if size_per_node is larger
than a huge page. Still, the actual allocated memory is not distributed
between nodes, so there is no advantage in such an approach.
On the contrary, BPF allocates SZ_2M * num_possible_nodes() for each
new bpf_prog_pack, while it could do with a single huge page per pack.

Don't account for number of nodes for VM_ALLOW_HUGE_VMAP with
NUMA_NO_NODE and use huge pages whenever the requested allocation size
is larger than a huge page.

Signed-off-by: Mike Rapoport (Microsoft)
Reviewed-by: Christoph Hellwig
---
 mm/vmalloc.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 634162271c00..86b2344d7461 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3763,8 +3763,6 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
 	}
 
 	if (vmap_allow_huge && (vm_flags & VM_ALLOW_HUGE_VMAP)) {
-		unsigned long size_per_node;
-
 		/*
 		 * Try huge pages. Only try for PAGE_KERNEL allocations,
 		 * others like modules don't yet expect huge pages in
@@ -3772,13 +3770,10 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
 		 * supporting them.
 		 */
-		size_per_node = size;
-		if (node == NUMA_NO_NODE)
-			size_per_node /= num_online_nodes();
-		if (arch_vmap_pmd_supported(prot) && size_per_node >= PMD_SIZE)
+		if (arch_vmap_pmd_supported(prot) && size >= PMD_SIZE)
 			shift = PMD_SHIFT;
 		else
-			shift = arch_vmap_pte_supported_shift(size_per_node);
+			shift = arch_vmap_pte_supported_shift(size);
 		align = max(real_align, 1UL << shift);
 		size = ALIGN(real_size, 1UL << shift);