From patchwork Tue Jul 23 06:41:55 2024
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13739556
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Alexander Gordeev, Andreas Larsson, Andrew Morton, Arnd Bergmann,
    Borislav Petkov, Catalin Marinas, Christophe Leroy, Dan Williams,
    Dave Hansen, David Hildenbrand, "David S. Miller", Davidlohr Bueso,
    Greg Kroah-Hartman, Heiko Carstens, Huacai Chen, Ingo Molnar,
    Jiaxun Yang, John Paul Adrian Glaubitz, Jonathan Cameron,
    Jonathan Corbet, Michael Ellerman, Mike Rapoport, Palmer Dabbelt,
    "Rafael J.
    Wysocki", Rob Herring, Samuel Holland, Thomas Bogendoerfer,
    Thomas Gleixner, Vasily Gorbik, Will Deacon, Zi Yan,
    devicetree@vger.kernel.org, linux-acpi@vger.kernel.org,
    linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mips@vger.kernel.org, linux-mm@kvack.org,
    linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
    linux-sh@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    loongarch@lists.linux.dev, nvdimm@lists.linux.dev,
    sparclinux@vger.kernel.org, x86@kernel.org, Jonathan Cameron
Subject: [PATCH v2 24/25] mm: make range-to-target_node lookup facility a part of numa_memblks
Date: Tue, 23 Jul 2024 09:41:55 +0300
Message-ID: <20240723064156.4009477-25-rppt@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240723064156.4009477-1-rppt@kernel.org>
References: <20240723064156.4009477-1-rppt@kernel.org>
X-Mailing-List: linux-cxl@vger.kernel.org

From: "Mike Rapoport (Microsoft)"

The x86 implementation of range-to-target_node lookup (i.e.
phys_to_target_node() and memory_add_physaddr_to_nid()) relies on
numa_memblks.

Since numa_memblks are now part of the generic code, move these
functions from x86 to mm/numa_memblks.c and select
CONFIG_NUMA_KEEP_MEMINFO when CONFIG_NUMA_MEMBLKS=y for dax and cxl.

Signed-off-by: Mike Rapoport (Microsoft)
Reviewed-by: Jonathan Cameron
---
 arch/x86/include/asm/sparsemem.h |  9 --------
 arch/x86/mm/numa.c               | 38 --------------------------------
 drivers/cxl/Kconfig              |  2 +-
 drivers/dax/Kconfig              |  2 +-
 include/linux/numa_memblks.h     |  7 ++++++
 mm/numa.c                        |  1 +
 mm/numa_memblks.c                | 38 ++++++++++++++++++++++++++++++++
 7 files changed, 48 insertions(+), 49 deletions(-)

diff --git a/arch/x86/include/asm/sparsemem.h b/arch/x86/include/asm/sparsemem.h
index 64df897c0ee3..3918c7a434f5 100644
--- a/arch/x86/include/asm/sparsemem.h
+++ b/arch/x86/include/asm/sparsemem.h
@@ -31,13 +31,4 @@
 
 #endif /* CONFIG_SPARSEMEM */
 
-#ifndef __ASSEMBLY__
-#ifdef CONFIG_NUMA_KEEP_MEMINFO
-extern int phys_to_target_node(phys_addr_t start);
-#define phys_to_target_node phys_to_target_node
-extern int memory_add_physaddr_to_nid(u64 start);
-#define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
-#endif
-#endif /* __ASSEMBLY__ */
-
 #endif /* _ASM_X86_SPARSEMEM_H */
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 16bc703c9272..8e790528805e 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -449,41 +449,3 @@ u64 __init numa_emu_dma_end(void)
 	return PFN_PHYS(MAX_DMA32_PFN);
 }
 #endif /* CONFIG_NUMA_EMU */
-
-#ifdef CONFIG_NUMA_KEEP_MEMINFO
-static int meminfo_to_nid(struct numa_meminfo *mi, u64 start)
-{
-	int i;
-
-	for (i = 0; i < mi->nr_blks; i++)
-		if (mi->blk[i].start <= start && mi->blk[i].end > start)
-			return mi->blk[i].nid;
-	return NUMA_NO_NODE;
-}
-
-int phys_to_target_node(phys_addr_t start)
-{
-	int nid = meminfo_to_nid(&numa_meminfo, start);
-
-	/*
-	 * Prefer online nodes, but if reserved memory might be
-	 * hot-added continue the search with reserved ranges.
-	 */
-	if (nid != NUMA_NO_NODE)
-		return nid;
-
-	return meminfo_to_nid(&numa_reserved_meminfo, start);
-}
-EXPORT_SYMBOL_GPL(phys_to_target_node);
-
-int memory_add_physaddr_to_nid(u64 start)
-{
-	int nid = meminfo_to_nid(&numa_meminfo, start);
-
-	if (nid == NUMA_NO_NODE)
-		nid = numa_meminfo.blk[0].nid;
-	return nid;
-}
-EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
-
-#endif
diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
index 99b5c25be079..29c192f20082 100644
--- a/drivers/cxl/Kconfig
+++ b/drivers/cxl/Kconfig
@@ -6,7 +6,7 @@ menuconfig CXL_BUS
 	select FW_UPLOAD
 	select PCI_DOE
 	select FIRMWARE_TABLE
-	select NUMA_KEEP_MEMINFO if (NUMA && X86)
+	select NUMA_KEEP_MEMINFO if NUMA_MEMBLKS
 	help
 	  CXL is a bus that is electrically compatible with PCI Express, but
 	  layers three protocols on that signalling (CXL.io, CXL.cache, and
diff --git a/drivers/dax/Kconfig b/drivers/dax/Kconfig
index a88744244149..d656e4c0eb84 100644
--- a/drivers/dax/Kconfig
+++ b/drivers/dax/Kconfig
@@ -30,7 +30,7 @@ config DEV_DAX_PMEM
 config DEV_DAX_HMEM
 	tristate "HMEM DAX: direct access to 'specific purpose' memory"
 	depends on EFI_SOFT_RESERVE
-	select NUMA_KEEP_MEMINFO if (NUMA && X86)
+	select NUMA_KEEP_MEMINFO if NUMA_MEMBLKS
 	default DEV_DAX
 	help
 	  EFI 2.8 platforms, and others, may advertise 'specific purpose'
diff --git a/include/linux/numa_memblks.h b/include/linux/numa_memblks.h
index 5c6e12ad0b7a..17d4bcc34091 100644
--- a/include/linux/numa_memblks.h
+++ b/include/linux/numa_memblks.h
@@ -46,6 +46,13 @@ static inline int numa_emu_cmdline(char *str)
 }
 #endif /* CONFIG_NUMA_EMU */
 
+#ifdef CONFIG_NUMA_KEEP_MEMINFO
+extern int phys_to_target_node(phys_addr_t start);
+#define phys_to_target_node phys_to_target_node
+extern int memory_add_physaddr_to_nid(u64 start);
+#define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
+#endif /* CONFIG_NUMA_KEEP_MEMINFO */
+
 #endif /* CONFIG_NUMA_MEMBLKS */
 
 #endif /* __NUMA_MEMBLKS_H */
diff --git a/mm/numa.c b/mm/numa.c
index 67a0d7734a98..da27eb151dc5 100644
--- a/mm/numa.c
+++ b/mm/numa.c
@@ -3,6 +3,7 @@
 #include <linux/memblock.h>
 #include <linux/printk.h>
 #include <linux/numa.h>
+#include <linux/numa_memblks.h>
 
 struct pglist_data *node_data[MAX_NUMNODES];
 EXPORT_SYMBOL(node_data);
diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c
index e4358ad92233..8609c6eb3998 100644
--- a/mm/numa_memblks.c
+++ b/mm/numa_memblks.c
@@ -528,3 +528,41 @@ int __init numa_fill_memblks(u64 start, u64 end)
 	}
 	return 0;
 }
+
+#ifdef CONFIG_NUMA_KEEP_MEMINFO
+static int meminfo_to_nid(struct numa_meminfo *mi, u64 start)
+{
+	int i;
+
+	for (i = 0; i < mi->nr_blks; i++)
+		if (mi->blk[i].start <= start && mi->blk[i].end > start)
+			return mi->blk[i].nid;
+	return NUMA_NO_NODE;
+}
+
+int phys_to_target_node(phys_addr_t start)
+{
+	int nid = meminfo_to_nid(&numa_meminfo, start);
+
+	/*
+	 * Prefer online nodes, but if reserved memory might be
+	 * hot-added continue the search with reserved ranges.
+	 */
+	if (nid != NUMA_NO_NODE)
+		return nid;
+
+	return meminfo_to_nid(&numa_reserved_meminfo, start);
+}
+EXPORT_SYMBOL_GPL(phys_to_target_node);
+
+int memory_add_physaddr_to_nid(u64 start)
+{
+	int nid = meminfo_to_nid(&numa_meminfo, start);
+
+	if (nid == NUMA_NO_NODE)
+		nid = numa_meminfo.blk[0].nid;
+	return nid;
+}
+EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
+
+#endif /* CONFIG_NUMA_KEEP_MEMINFO */
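
For readers who want to see the moved lookup in isolation, below is a minimal,
self-contained userspace sketch of the same range-to-node search that
phys_to_target_node() performs (first search the "online" ranges, then fall
back to the reserved ones). struct toy_memblk, struct toy_meminfo and the
sample table are hypothetical stand-ins for struct numa_memblk/numa_meminfo;
they are not part of this patch and only mirror the fields the search uses.

/*
 * Toy sketch of the range-to-target_node lookup moved into
 * mm/numa_memblks.c by this patch.  Build with: cc -std=c99 toy.c
 */
#include <stdint.h>
#include <stdio.h>

#define NUMA_NO_NODE	(-1)

struct toy_memblk {
	uint64_t start;		/* inclusive */
	uint64_t end;		/* exclusive */
	int nid;
};

struct toy_meminfo {
	int nr_blks;
	struct toy_memblk blk[4];
};

/* Same shape as meminfo_to_nid(): first block whose [start, end) covers addr wins. */
static int toy_meminfo_to_nid(const struct toy_meminfo *mi, uint64_t addr)
{
	for (int i = 0; i < mi->nr_blks; i++)
		if (mi->blk[i].start <= addr && mi->blk[i].end > addr)
			return mi->blk[i].nid;
	return NUMA_NO_NODE;
}

int main(void)
{
	/* Two "online" ranges and one reserved (hot-pluggable) range. */
	struct toy_meminfo online = {
		.nr_blks = 2,
		.blk = {
			{ .start = 0x000000000ULL, .end = 0x080000000ULL, .nid = 0 },
			{ .start = 0x080000000ULL, .end = 0x100000000ULL, .nid = 1 },
		},
	};
	struct toy_meminfo reserved = {
		.nr_blks = 1,
		.blk = {
			{ .start = 0x100000000ULL, .end = 0x180000000ULL, .nid = 2 },
		},
	};
	uint64_t addr = 0x140000000ULL;

	/* Mirrors phys_to_target_node(): prefer online ranges, then reserved. */
	int nid = toy_meminfo_to_nid(&online, addr);
	if (nid == NUMA_NO_NODE)
		nid = toy_meminfo_to_nid(&reserved, addr);

	printf("0x%llx -> node %d\n", (unsigned long long)addr, nid); /* prints node 2 */
	return 0;
}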