From patchwork Wed Aug 7 06:41:09 2024
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13755784
X-Patchwork-Delegate: iweiny@gmail.com
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Alexander Gordeev, Andreas Larsson, Andrew Morton, Arnd Bergmann,
    Borislav Petkov, Catalin Marinas, Christophe Leroy, Dan Williams,
    Dave Hansen, David Hildenbrand, "David S. Miller", Davidlohr Bueso,
    Greg Kroah-Hartman, Heiko Carstens, Huacai Chen, Ingo Molnar,
    Jiaxun Yang, John Paul Adrian Glaubitz, Jonathan Cameron,
    Jonathan Corbet, Michael Ellerman, Mike Rapoport, Palmer Dabbelt,
    "Rafael J. Wysocki", Rob Herring, Samuel Holland,
    Thomas Bogendoerfer, Thomas Gleixner, Vasily Gorbik, Will Deacon,
    Zi Yan, devicetree@vger.kernel.org, linux-acpi@vger.kernel.org,
    linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mips@vger.kernel.org, linux-mm@kvack.org,
    linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
    linux-sh@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    loongarch@lists.linux.dev, nvdimm@lists.linux.dev,
    sparclinux@vger.kernel.org, x86@kernel.org, Jonathan Cameron
Subject: [PATCH v4 25/26] mm: make range-to-target_node lookup facility a part of numa_memblks
Date: Wed, 7 Aug 2024 09:41:09 +0300
Message-ID: <20240807064110.1003856-26-rppt@kernel.org>
In-Reply-To: <20240807064110.1003856-1-rppt@kernel.org>
References: <20240807064110.1003856-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

The x86 implementation of range-to-target_node lookup (i.e.
phys_to_target_node() and memory_add_physaddr_to_nid()) relies on
numa_memblks.

Since numa_memblks are now part of the generic code, move these
functions from x86 to mm/numa_memblks.c and select
CONFIG_NUMA_KEEP_MEMINFO when CONFIG_NUMA_MEMBLKS=y for dax and cxl.

Signed-off-by: Mike Rapoport (Microsoft)
Reviewed-by: Jonathan Cameron
Tested-by: Zi Yan # for x86_64 and arm64
Tested-by: Jonathan Cameron [arm64 + CXL via QEMU]
Reviewed-by: Dan Williams
Acked-by: David Hildenbrand
---
 arch/x86/include/asm/sparsemem.h |  9 --------
 arch/x86/mm/numa.c               | 38 --------------------------------
 drivers/cxl/Kconfig              |  2 +-
 drivers/dax/Kconfig              |  2 +-
 include/linux/numa_memblks.h     |  7 ++++++
 mm/numa.c                        |  1 +
 mm/numa_memblks.c                | 38 ++++++++++++++++++++++++++++++++
 7 files changed, 48 insertions(+), 49 deletions(-)

diff --git a/arch/x86/include/asm/sparsemem.h b/arch/x86/include/asm/sparsemem.h
index 64df897c0ee3..3918c7a434f5 100644
--- a/arch/x86/include/asm/sparsemem.h
+++ b/arch/x86/include/asm/sparsemem.h
@@ -31,13 +31,4 @@
 
 #endif /* CONFIG_SPARSEMEM */
 
-#ifndef __ASSEMBLY__
-#ifdef CONFIG_NUMA_KEEP_MEMINFO
-extern int phys_to_target_node(phys_addr_t start);
-#define phys_to_target_node phys_to_target_node
-extern int memory_add_physaddr_to_nid(u64 start);
-#define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
-#endif
-#endif /* __ASSEMBLY__ */
-
 #endif /* _ASM_X86_SPARSEMEM_H */
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index d23287611449..64e5cdb2460a 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -453,41 +453,3 @@ u64 __init numa_emu_dma_end(void)
 	return PFN_PHYS(MAX_DMA32_PFN);
 }
 #endif /* CONFIG_NUMA_EMU */
-
-#ifdef CONFIG_NUMA_KEEP_MEMINFO
-static int meminfo_to_nid(struct numa_meminfo *mi, u64 start)
-{
-	int i;
-
-	for (i = 0; i < mi->nr_blks; i++)
-		if (mi->blk[i].start <= start && mi->blk[i].end > start)
-			return mi->blk[i].nid;
-	return NUMA_NO_NODE;
-}
-
-int phys_to_target_node(phys_addr_t start)
-{
-	int nid = meminfo_to_nid(&numa_meminfo, start);
-
-	/*
-	 * Prefer online nodes, but if reserved memory might be
-	 * hot-added continue the search with reserved ranges.
-	 */
-	if (nid != NUMA_NO_NODE)
-		return nid;
-
-	return meminfo_to_nid(&numa_reserved_meminfo, start);
-}
-EXPORT_SYMBOL_GPL(phys_to_target_node);
-
-int memory_add_physaddr_to_nid(u64 start)
-{
-	int nid = meminfo_to_nid(&numa_meminfo, start);
-
-	if (nid == NUMA_NO_NODE)
-		nid = numa_meminfo.blk[0].nid;
-	return nid;
-}
-EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
-
-#endif
diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
index 99b5c25be079..29c192f20082 100644
--- a/drivers/cxl/Kconfig
+++ b/drivers/cxl/Kconfig
@@ -6,7 +6,7 @@ menuconfig CXL_BUS
 	select FW_UPLOAD
 	select PCI_DOE
 	select FIRMWARE_TABLE
-	select NUMA_KEEP_MEMINFO if (NUMA && X86)
+	select NUMA_KEEP_MEMINFO if NUMA_MEMBLKS
 	help
 	  CXL is a bus that is electrically compatible with PCI Express, but
 	  layers three protocols on that signalling (CXL.io, CXL.cache, and
diff --git a/drivers/dax/Kconfig b/drivers/dax/Kconfig
index a88744244149..d656e4c0eb84 100644
--- a/drivers/dax/Kconfig
+++ b/drivers/dax/Kconfig
@@ -30,7 +30,7 @@ config DEV_DAX_PMEM
 config DEV_DAX_HMEM
 	tristate "HMEM DAX: direct access to 'specific purpose' memory"
 	depends on EFI_SOFT_RESERVE
-	select NUMA_KEEP_MEMINFO if (NUMA && X86)
+	select NUMA_KEEP_MEMINFO if NUMA_MEMBLKS
 	default DEV_DAX
 	help
 	  EFI 2.8 platforms, and others, may advertise 'specific purpose'
diff --git a/include/linux/numa_memblks.h b/include/linux/numa_memblks.h
index 5c6e12ad0b7a..17d4bcc34091 100644
--- a/include/linux/numa_memblks.h
+++ b/include/linux/numa_memblks.h
@@ -46,6 +46,13 @@ static inline int numa_emu_cmdline(char *str)
 }
 #endif /* CONFIG_NUMA_EMU */
 
+#ifdef CONFIG_NUMA_KEEP_MEMINFO
+extern int phys_to_target_node(phys_addr_t start);
+#define phys_to_target_node phys_to_target_node
+extern int memory_add_physaddr_to_nid(u64 start);
+#define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
+#endif /* CONFIG_NUMA_KEEP_MEMINFO */
+
 #endif /* CONFIG_NUMA_MEMBLKS */
 
 #endif /* __NUMA_MEMBLKS_H */
diff --git a/mm/numa.c b/mm/numa.c
index 1f1582dcdf4a..e2eec07707d1 100644
--- a/mm/numa.c
+++ b/mm/numa.c
@@ -3,6 +3,7 @@
 #include <linux/memblock.h>
 #include <linux/printk.h>
 #include <linux/numa.h>
+#include <linux/numa_memblks.h>
 
 struct pglist_data *node_data[MAX_NUMNODES];
 EXPORT_SYMBOL(node_data);
diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c
index c4037faa438b..a28507cf1e7f 100644
--- a/mm/numa_memblks.c
+++ b/mm/numa_memblks.c
@@ -531,3 +531,41 @@ int __init numa_fill_memblks(u64 start, u64 end)
 	}
 	return 0;
 }
+
+#ifdef CONFIG_NUMA_KEEP_MEMINFO
+static int meminfo_to_nid(struct numa_meminfo *mi, u64 start)
+{
+	int i;
+
+	for (i = 0; i < mi->nr_blks; i++)
+		if (mi->blk[i].start <= start && mi->blk[i].end > start)
+			return mi->blk[i].nid;
+	return NUMA_NO_NODE;
+}
+
+int phys_to_target_node(phys_addr_t start)
+{
+	int nid = meminfo_to_nid(&numa_meminfo, start);
+
+	/*
+	 * Prefer online nodes, but if reserved memory might be
+	 * hot-added continue the search with reserved ranges.
+	 */
+	if (nid != NUMA_NO_NODE)
+		return nid;
+
+	return meminfo_to_nid(&numa_reserved_meminfo, start);
+}
+EXPORT_SYMBOL_GPL(phys_to_target_node);
+
+int memory_add_physaddr_to_nid(u64 start)
+{
+	int nid = meminfo_to_nid(&numa_meminfo, start);
+
+	if (nid == NUMA_NO_NODE)
+		nid = numa_meminfo.blk[0].nid;
+	return nid;
+}
+EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
+
+#endif /* CONFIG_NUMA_KEEP_MEMINFO */
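
A usage sketch, not part of the patch itself: with NUMA_KEEP_MEMINFO now
selectable wherever NUMA_MEMBLKS is enabled, a consumer can resolve a
physical range to its target node and fall back to the hot-add heuristic,
roughly in the spirit of the dax/cxl callers this change is aimed at. The
helper name example_target_nid() is made up for illustration.

/*
 * Illustrative sketch only; example_target_nid() is a hypothetical helper.
 * phys_to_target_node() and memory_add_physaddr_to_nid() are the functions
 * this patch moves into mm/numa_memblks.c (declared, per the patch, in
 * include/linux/numa_memblks.h under CONFIG_NUMA_KEEP_MEMINFO).
 */
#include <linux/types.h>
#include <linux/numa.h>
#include <linux/numa_memblks.h>

static int example_target_nid(phys_addr_t start)
{
	/* Look the range up in numa_meminfo (and the reserved ranges). */
	int nid = phys_to_target_node(start);

	/* Fall back to the memory hot-add heuristic if nothing matched. */
	if (nid == NUMA_NO_NODE)
		nid = memory_add_physaddr_to_nid(start);

	return nid;
}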