From patchwork Wed May 29 17:12:29 2024
From: Jonathan Cameron
To: Dan Williams, Sudeep Holla
CC: Andrew Morton, David Hildenbrand, Will Deacon, Jia He, Mike Rapoport, Yuquan Wang, Oscar Salvador, Lorenzo Pieralisi, James Morse
Subject: [RFC PATCH 1/8] arm64: numa: Introduce a memory_add_physaddr_to_nid()
Date: Wed, 29 May 2024 18:12:29 +0100
Message-ID: <20240529171236.32002-2-Jonathan.Cameron@huawei.com>
In-Reply-To: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>

From: Dan Williams

Based heavily on Dan Williams' earlier attempt to introduce this
infrastructure for all architectures, so I've kept his authorship. [1]

arm64 stores its NUMA data in memblock. Add a memblock-generic way to
interrogate that data for memory_add_physaddr_to_nid().
Cc: Mike Rapoport
Cc: Jia He
Cc: Will Deacon
Cc: David Hildenbrand
Cc: Andrew Morton
Signed-off-by: Dan Williams
Link: https://lore.kernel.org/r/159457120334.754248.12908401960465408733.stgit@dwillia2-desk3.amr.corp.intel.com [1]
Signed-off-by: Jonathan Cameron
---
 arch/arm64/include/asm/sparsemem.h |  4 ++++
 arch/arm64/mm/init.c               | 29 +++++++++++++++++++++++++++++
 2 files changed, 33 insertions(+)

diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
index 8a8acc220371..8dd1b6a718fa 100644
--- a/arch/arm64/include/asm/sparsemem.h
+++ b/arch/arm64/include/asm/sparsemem.h
@@ -26,4 +26,8 @@
 #define SECTION_SIZE_BITS 27
 #endif /* CONFIG_ARM64_64K_PAGES */
 
+#ifndef __ASSEMBLY__
+extern int memory_add_physaddr_to_nid(u64 addr);
+#define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
+#endif /* __ASSEMBLY__ */
 #endif
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 9b5ab6818f7f..f310cbd349ba 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -48,6 +48,35 @@
 #include
 #include
 
+#ifdef CONFIG_NUMA
+
+static int __memory_add_physaddr_to_nid(u64 addr)
+{
+	unsigned long start_pfn, end_pfn, pfn = PHYS_PFN(addr);
+	int nid;
+
+	for_each_online_node(nid) {
+		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
+		if (pfn >= start_pfn && pfn <= end_pfn)
+			return nid;
+	}
+	return NUMA_NO_NODE;
+}
+
+int memory_add_physaddr_to_nid(u64 start)
+{
+	int nid = __memory_add_physaddr_to_nid(start);
+
+	/* Default to node0 as not all callers are prepared for this to fail */
+	if (nid == NUMA_NO_NODE)
+		return 0;
+
+	return nid;
+}
+EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
+
+#endif /* CONFIG_NUMA */
+
 /*
  * We need to be able to catch inadvertent references to memstart_addr
  * that occur (potentially in generic code) before arm64_memblock_init()
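[As a usage illustration, not part of the patch: a hotplug caller typically
resolves the target node from the physical address before adding memory,
which is the path the helper above serves. The function name below is
invented for the example.]

#include <linux/memory_hotplug.h>

/*
 * Hypothetical caller: with the patch above, arm64 answers the nid
 * query from memblock's NUMA data and falls back to node 0 rather
 * than failing, so add_memory() always receives a usable node.
 */
static int example_hotplug_range(u64 start, u64 size)
{
	int nid = memory_add_physaddr_to_nid(start);

	return add_memory(nid, start, size, MHP_NONE);
}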
From patchwork Wed May 29 17:12:30 2024
From: Jonathan Cameron
Subject: [RFC PATCH 2/8] arm64: memblock: Introduce a generic phys_addr_to_target_node()
Date: Wed, 29 May 2024 18:12:30 +0100
Message-ID: <20240529171236.32002-3-Jonathan.Cameron@huawei.com>
In-Reply-To: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>

From: Dan Williams

Similar to how the generic memory_add_physaddr_to_nid() interrogates
memblock data for NUMA information, introduce
get_reserved_pfn_range_for_nid() to enable the same operation for
reserved memory ranges. Example memory ranges that are reserved but
still have associated NUMA info are persistent memory and Soft
Reserved (EFI_MEMORY_SP) memory.

This is Dan's patch, but with the implementation of
phys_to_target_node() made arm64 specific.

Cc: Mike Rapoport
Cc: Jia He
Cc: Will Deacon
Cc: David Hildenbrand
Cc: Andrew Morton
Signed-off-by: Dan Williams
Link: https://lore.kernel.org/r/159457120893.754248.7783260004248722175.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Jonathan Cameron
---
 arch/arm64/include/asm/sparsemem.h |  4 ++++
 arch/arm64/mm/init.c               | 22 ++++++++++++++++++++++
 include/linux/memblock.h           |  8 ++++++++
 include/linux/mm.h                 | 14 ++++++++++++++
 mm/memblock.c                      | 22 +++++++++++++++++++---
 mm/mm_init.c                       | 29 ++++++++++++++++++++++++++++-
 6 files changed, 95 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
index 8dd1b6a718fa..5b483ad6d501 100644
--- a/arch/arm64/include/asm/sparsemem.h
+++ b/arch/arm64/include/asm/sparsemem.h
@@ -27,7 +27,11 @@
 #endif /* CONFIG_ARM64_64K_PAGES */
 
 #ifndef __ASSEMBLY__
+
 extern int memory_add_physaddr_to_nid(u64 addr);
 #define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
+extern int phys_to_target_node(phys_addr_t start);
+#define phys_to_target_node phys_to_target_node
+
 #endif /* __ASSEMBLY__ */
 #endif
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index f310cbd349ba..6a2f21b1bb58 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -75,6 +75,28 @@ int memory_add_physaddr_to_nid(u64 start)
 }
 EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
 
+int phys_to_target_node(phys_addr_t start)
+{
+	unsigned long start_pfn, end_pfn, pfn = PHYS_PFN(start);
+	int nid = __memory_add_physaddr_to_nid(start);
+
+	if (nid != NUMA_NO_NODE)
+		return nid;
+
+	/*
+	 * Search reserved memory ranges since the memory address does
+	 * not appear to be online
+	 */
+	for_each_node_state(nid, N_POSSIBLE) {
+		get_reserved_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
+		if (pfn >= start_pfn &&
+		    pfn <= end_pfn)
+			return nid;
+	}
+
+	return NUMA_NO_NODE;
+}
+EXPORT_SYMBOL(phys_to_target_node);
+
 #endif /* CONFIG_NUMA */
 
 /*
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index e2082240586d..c7d518a54359 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -281,6 +281,10 @@ int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
 void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
 			  unsigned long *out_end_pfn, int *out_nid);
 
+void __next_reserved_pfn_range(int *idx, int nid,
+			       unsigned long *out_start_pfn,
+			       unsigned long *out_end_pfn, int *out_nid);
+
 /**
  * for_each_mem_pfn_range - early memory pfn range iterator
  * @i: an integer used as loop variable
@@ -295,6 +299,10 @@ void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
 	for (i = -1, __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid); \
 	     i >= 0; __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid))
 
+#define for_each_reserved_pfn_range(i, nid, p_start, p_end, p_nid)	\
+	for (i = -1, __next_reserved_pfn_range(&i, nid, p_start, p_end, p_nid); \
+	     i >= 0; __next_reserved_pfn_range(&i, nid, p_start, p_end, p_nid))
+
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 void __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
 				  unsigned long *out_spfn,
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9849dfda44d4..0c829b2d44fa 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3245,9 +3245,23 @@ void free_area_init(unsigned long *max_zone_pfn);
 unsigned long node_map_pfn_alignment(void);
 extern unsigned long absent_pages_in_range(unsigned long start_pfn,
 						unsigned long end_pfn);
+
+/*
+ * Allow archs to opt-in to keeping get_pfn_range_for_nid() available
+ * after boot.
+ */
+#ifdef CONFIG_ARCH_KEEP_MEMBLOCK
+#define __init_or_memblock
+#else
+#define __init_or_memblock __init
+#endif
+
 extern void get_pfn_range_for_nid(unsigned int nid,
 			unsigned long *start_pfn, unsigned long *end_pfn);
+extern void get_reserved_pfn_range_for_nid(unsigned int nid,
+			unsigned long *start_pfn, unsigned long *end_pfn);
+
 #ifndef CONFIG_NUMA
 static inline int early_pfn_to_nid(unsigned long pfn)
 {
diff --git a/mm/memblock.c b/mm/memblock.c
index d09136e040d3..5498d5ea70b4 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1289,11 +1289,11 @@ void __init_memblock __next_mem_range_rev(u64 *idx, int nid,
 
 /*
  * Common iterator interface used to define for_each_mem_pfn_range().
  */
-void __init_memblock __next_mem_pfn_range(int *idx, int nid,
+static void __init_memblock __next_memblock_pfn_range(int *idx, int nid,
 				unsigned long *out_start_pfn,
-				unsigned long *out_end_pfn, int *out_nid)
+				unsigned long *out_end_pfn, int *out_nid,
+				struct memblock_type *type)
 {
-	struct memblock_type *type = &memblock.memory;
 	struct memblock_region *r;
 	int r_nid;
@@ -1319,6 +1319,22 @@ void __init_memblock __next_mem_pfn_range(int *idx, int nid,
 	*out_nid = r_nid;
 }
 
+void __init_memblock __next_mem_pfn_range(int *idx, int nid,
+				unsigned long *out_start_pfn,
+				unsigned long *out_end_pfn, int *out_nid)
+{
+	__next_memblock_pfn_range(idx, nid, out_start_pfn, out_end_pfn, out_nid,
+				  &memblock.memory);
+}
+
+void __init_memblock __next_reserved_pfn_range(int *idx, int nid,
+				unsigned long *out_start_pfn,
+				unsigned long *out_end_pfn, int *out_nid)
+{
+	__next_memblock_pfn_range(idx, nid, out_start_pfn, out_end_pfn, out_nid,
+				  &memblock.reserved);
+}
+
 /**
  * memblock_set_node - set node ID on memblock regions
  * @base: base of area to set node ID for
diff --git a/mm/mm_init.c b/mm/mm_init.c
index f72b852bd5b8..1f6e29e60673 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1644,7 +1644,7 @@ static inline void alloc_node_mem_map(struct pglist_data *pgdat) { }
  * provided by memblock_set_node(). If called for a node
  * with no available memory, the start and end PFNs will be 0.
  */
-void __init get_pfn_range_for_nid(unsigned int nid,
+void __init_or_memblock get_pfn_range_for_nid(unsigned int nid,
 			unsigned long *start_pfn, unsigned long *end_pfn)
 {
 	unsigned long this_start_pfn, this_end_pfn;
@@ -1662,6 +1662,33 @@ void __init get_pfn_range_for_nid(unsigned int nid,
 	*start_pfn = 0;
 }
 
+/**
+ * get_reserved_pfn_range_for_nid - Return the start and end page frames for a node
+ * @nid: The nid to return the range for. If MAX_NUMNODES, the min and max PFN are returned.
+ * @start_pfn: Passed by reference. On return, it will have the node start_pfn.
+ * @end_pfn: Passed by reference. On return, it will have the node end_pfn.
+ *
+ * Mostly identical to get_pfn_range_for_nid() except it operates on
+ * reserved ranges rather than online memory.
+ */
+void __init_or_memblock get_reserved_pfn_range_for_nid(unsigned int nid,
+			unsigned long *start_pfn, unsigned long *end_pfn)
+{
+	unsigned long this_start_pfn, this_end_pfn;
+	int i;
+
+	*start_pfn = -1UL;
+	*end_pfn = 0;
+
+	for_each_reserved_pfn_range(i, nid, &this_start_pfn, &this_end_pfn, NULL) {
+		*start_pfn = min(*start_pfn, this_start_pfn);
+		*end_pfn = max(*end_pfn, this_end_pfn);
+	}
+
+	if (*start_pfn == -1UL)
+		*start_pfn = 0;
+}
+
 static void __init free_area_init_node(int nid)
 {
 	pg_data_t *pgdat = NODE_DATA(nid);
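[For orientation, a hedged sketch of the consumer side, not taken from this
series: for an address inside a reserved-but-offline range, such as Soft
Reserved memory, the online-node search fails and the new reserved-range
walk supplies the nid. The function name is invented.]

#include <linux/memory_hotplug.h>
#include <linux/numa.h>

/*
 * Illustrative consumer: prefer the target-node lookup, which also
 * searches memblock.reserved, and only then fall back to the
 * online-memory lookup (which on arm64 now defaults to node 0).
 */
static int example_target_node(phys_addr_t start)
{
	int nid = phys_to_target_node(start);

	if (nid == NUMA_NO_NODE)
		nid = memory_add_physaddr_to_nid(start);

	return nid;
}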
From patchwork Wed May 29 17:12:31 2024
From: Jonathan Cameron
Subject: [RFC PATCH 3/8] mm: memblock: Add a means to add to memblock.reserved
Date: Wed, 29 May 2024 18:12:31 +0100
Message-ID: <20240529171236.32002-4-Jonathan.Cameron@huawei.com>
In-Reply-To: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>

For CXL CFMWS regions, we need to add memblocks that may not be in the
system memory map so that their nid can be queried later. Add a
function to make this easy to do.

Signed-off-by: Jonathan Cameron
---
 include/linux/memblock.h |  2 ++
 mm/memblock.c            | 11 +++++++++++
 2 files changed, 13 insertions(+)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index c7d518a54359..9ac1ed8c3293 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -113,6 +113,8 @@ static inline void memblock_discard(void) {}
 void memblock_allow_resize(void);
 int memblock_add_node(phys_addr_t base, phys_addr_t size, int nid,
 		      enum memblock_flags flags);
+int memblock_add_reserved_node(phys_addr_t base, phys_addr_t size, int nid,
+			       enum memblock_flags flags);
 int memblock_add(phys_addr_t base, phys_addr_t size);
 int memblock_remove(phys_addr_t base, phys_addr_t size);
 int memblock_phys_free(phys_addr_t base, phys_addr_t size);
diff --git a/mm/memblock.c b/mm/memblock.c
index 5498d5ea70b4..8d02f75ec186 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -714,6 +714,17 @@ int __init_memblock memblock_add_node(phys_addr_t base, phys_addr_t size,
 	return memblock_add_range(&memblock.memory, base, size, nid, flags);
 }
 
+int __init_memblock memblock_add_reserved_node(phys_addr_t base, phys_addr_t size,
+					       int nid, enum memblock_flags flags)
+{
+	phys_addr_t end = base + size - 1;
+
+	memblock_dbg("%s: [%pa-%pa] nid=%d flags=%x %pS\n", __func__,
+		     &base, &end, nid, flags, (void *)_RET_IP_);
+
+	return memblock_add_range(&memblock.reserved, base, size, nid, flags);
+}
+
 /**
  * memblock_add - add new memblock region
  * @base: base address of the new region
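[A minimal sketch of the intended call pattern. The base and size are
invented stand-ins for a CXL window absent from the EFI memory map; the nid
is unknown this early, so NUMA_NO_NODE is recorded and fixed up later in
the series.]

#include <linux/memblock.h>
#include <linux/numa.h>
#include <linux/sizes.h>

/*
 * Illustration only: record a window in memblock.reserved so a later
 * phys_to_target_node() walk can find a nid for it. MEMBLOCK_NONE
 * keeps the default region flags.
 */
static int __init example_record_window(void)
{
	phys_addr_t base = 0x100000000000ULL;	/* hypothetical window HPA */
	phys_addr_t size = SZ_1G;

	return memblock_add_reserved_node(base, size, NUMA_NO_NODE,
					  MEMBLOCK_NONE);
}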
From patchwork Wed May 29 17:12:32 2024
From: Jonathan Cameron
Subject: [RFC PATCH 4/8] arch_numa: Avoid onlining empty NUMA nodes
Date: Wed, 29 May 2024 18:12:32 +0100
Message-ID: <20240529171236.32002-5-Jonathan.Cameron@huawei.com>
In-Reply-To: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>

ACPI can declare NUMA nodes for memory that will arrive later. CXL
Fixed Memory Windows may also be assigned NUMA nodes that are initially
empty. Currently the generic arch_numa handling onlines these empty
nodes. This is inconsistent both with x86 and with itself: if we add
memory and then remove it again, the node goes away.

Signed-off-by: Jonathan Cameron
---
 drivers/base/arch_numa.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/base/arch_numa.c b/drivers/base/arch_numa.c
index 5b59d133b6af..0630efb696ab 100644
--- a/drivers/base/arch_numa.c
+++ b/drivers/base/arch_numa.c
@@ -363,6 +363,11 @@ static int __init numa_register_nodes(void)
 		unsigned long start_pfn, end_pfn;
 
 		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
+		if (start_pfn >= end_pfn &&
+		    !node_state(nid, N_CPU) &&
+		    !node_state(nid, N_GENERIC_INITIATOR))
+			continue;
+
 		setup_node_data(nid, start_pfn, end_pfn);
 		node_set_online(nid);
 	}
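[The new condition, restated as a standalone predicate for clarity; a
sketch only, the helper name is invented.]

#include <linux/nodemask.h>

/*
 * Restates the skip test above: a parsed node is registered at boot
 * only if it has memory, CPUs, or a Generic Initiator; an entirely
 * empty node is left for hotplug to online later.
 */
static bool example_node_worth_onlining(int nid, unsigned long start_pfn,
					unsigned long end_pfn)
{
	if (start_pfn < end_pfn)
		return true;	/* node has memory */

	return node_state(nid, N_CPU) ||
	       node_state(nid, N_GENERIC_INITIATOR);
}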
From patchwork Wed May 29 17:12:33 2024
From: Jonathan Cameron
Subject: [RFC PATCH 5/8] arch_numa: Make numa_add_memblk() set nid for memblock.reserved regions
Date: Wed, 29 May 2024 18:12:33 +0100
Message-ID: <20240529171236.32002-6-Jonathan.Cameron@huawei.com>
In-Reply-To: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>

Setting the reserved region entries to the appropriate Node ID means
that they can be used to establish the node to which we should add
hotplugged CXL memory within a CXL fixed memory window.

Signed-off-by: Jonathan Cameron
---
 drivers/base/arch_numa.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/base/arch_numa.c b/drivers/base/arch_numa.c
index 0630efb696ab..568dbabeb636 100644
--- a/drivers/base/arch_numa.c
+++ b/drivers/base/arch_numa.c
@@ -208,6 +208,13 @@ int __init numa_add_memblk(int nid, u64 start, u64 end)
 			start, (end - 1), nid);
 		return ret;
 	}
+	/* Also set reserved nodes nid */
+	ret = memblock_set_node(start, (end - start), &memblock.reserved, nid);
+	if (ret < 0) {
+		pr_err("memblock [0x%llx - 0x%llx] failed to add on node %d\n",
+		       start, (end - 1), nid);
+		return ret;
+	}
 
 	node_set(nid, numa_nodes_parsed);
 	return ret;
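[A sketch of the resulting invariant, illustrative rather than verbatim
kernel code: a firmware-described range now carries its nid in both
memblock tables, so reserved-range walks agree with the online-memory
view.]

#include <linux/memblock.h>

/*
 * Illustration: stamp the same nid on both memblock.memory and
 * memblock.reserved for a range, mirroring what numa_add_memblk()
 * does after this patch.
 */
static int __init example_stamp_nid(u64 start, u64 end, int nid)
{
	int ret;

	ret = memblock_set_node(start, end - start, &memblock.memory, nid);
	if (ret)
		return ret;

	return memblock_set_node(start, end - start, &memblock.reserved, nid);
}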
b=epecs0iOyDkILF0sUMBZgFfzA8gEftnoQNP3uTkKpaZD6B3FjAw8fUghOvzlyNczACpuvCP0Yxrkwl73QxV2QLyaK5TQOjJIUfa0sWJVZLBRPSTHhwOnpJMBffkhRpzUiRnvjhO2wZOgA4RVCGcftdmo90Bldx4rJWHrwGl8XfY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=185.176.79.56 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.18.186.31]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4VqGG82hwKz6K9Hp; Thu, 30 May 2024 01:14:40 +0800 (CST) Received: from lhrpeml500005.china.huawei.com (unknown [7.191.163.240]) by mail.maildlp.com (Postfix) with ESMTPS id 536541400CA; Thu, 30 May 2024 01:15:41 +0800 (CST) Received: from SecurePC-101-06.china.huawei.com (10.122.247.231) by lhrpeml500005.china.huawei.com (7.191.163.240) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.1.2507.39; Wed, 29 May 2024 18:15:40 +0100 From: Jonathan Cameron To: Dan Williams , , , Sudeep Holla CC: Andrew Morton , David Hildenbrand , Will Deacon , Jia He , Mike Rapoport , , , , Yuquan Wang , Oscar Salvador , Lorenzo Pieralisi , James Morse Subject: [RFC PATCH 6/8] arm64: mm: numa_fill_memblks() to add a memblock.reserved region if match. Date: Wed, 29 May 2024 18:12:34 +0100 Message-ID: <20240529171236.32002-7-Jonathan.Cameron@huawei.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240529171236.32002-1-Jonathan.Cameron@huawei.com> References: <20240529171236.32002-1-Jonathan.Cameron@huawei.com> Precedence: bulk X-Mailing-List: linux-cxl@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500001.china.huawei.com (7.191.163.213) To lhrpeml500005.china.huawei.com (7.191.163.240) CXL memory hotplug relies on additional NUMA nodes being created for any CXL Fixed Memory Window if there is no suitable one created by system firmware. To detect if system firmware has created one look for any normal memblock that overlaps with the Fixed Memory Window that has a NUMA node (nid) set. If one is found, add a region with the same nid to memblock.reserved so we can match it later when CXL memory is hotplugged. If not, add a region anyway because a suitable NUMA node will be set later. So for now use NUMA_NO_NODE. Signed-off-by: Jonathan Cameron --- arch/arm64/mm/init.c | 26 ++++++++++++++++++++++++++ 1 file changed, 26 insertions(+) diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 6a2f21b1bb58..27941f22db1c 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -50,6 +50,32 @@ #ifdef CONFIG_NUMA +/* + * Scan existing memblocks and if this region overlaps with a region with + * a nid set, add a reserved memblock. 
+ */
+int __init numa_fill_memblks(u64 start, u64 end)
+{
+	struct memblock_region *region;
+
+	for_each_mem_region(region) {
+		int nid = memblock_get_region_node(region);
+
+		if (nid == NUMA_NO_NODE)
+			continue;
+		if (!(end < region->base || start >= region->base + region->size)) {
+			memblock_add_reserved_node(start, end - start, nid,
+						   MEMBLOCK_RSRV_NOINIT);
+			return 0;
+		}
+	}
+
+	memblock_add_reserved_node(start, end - start, NUMA_NO_NODE,
+				   MEMBLOCK_RSRV_NOINIT);
+
+	return NUMA_NO_MEMBLK;
+}
+
 static int __memory_add_physaddr_to_nid(u64 addr)
 {
 	unsigned long start_pfn, end_pfn, pfn = PHYS_PFN(addr);
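[For context, a hedged sketch of the caller's contract; the fallback-nid
handling is a placeholder, not the real srat.c flow.]

#include <linux/memory_hotplug.h>
#include <linux/numa.h>
#include <asm/numa.h>

/*
 * Sketch: 0 from numa_fill_memblks() means an overlapping
 * firmware-described range was found and mirrored into
 * memblock.reserved, so the window's node can be resolved;
 * NUMA_NO_MEMBLK means the caller must pick a node itself.
 */
static int __init example_node_for_window(u64 start, u64 end,
					  int fallback_nid)
{
	if (numa_fill_memblks(start, end) == 0)
		return phys_to_target_node(start);

	return fallback_nid;
}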
From patchwork Wed May 29 17:12:35 2024
From: Jonathan Cameron
Subject: [RFC PATCH 7/8] acpi: srat: cxl: Skip zero length CXL fixed memory windows.
Date: Wed, 29 May 2024 18:12:35 +0100
Message-ID: <20240529171236.32002-8-Jonathan.Cameron@huawei.com>
In-Reply-To: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>

One reported platform uses this nonsensical entry to represent a
disabled CFMWS. The cxl_acpi driver already correctly errors out on
seeing this, but that leaves an additional confusing node in
/sys/devices/system/node/possible and wastes some space.

Signed-off-by: Jonathan Cameron
---
 drivers/acpi/numa/srat.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/acpi/numa/srat.c b/drivers/acpi/numa/srat.c
index e3f26e71637a..28c963d5c51f 100644
--- a/drivers/acpi/numa/srat.c
+++ b/drivers/acpi/numa/srat.c
@@ -329,6 +329,11 @@ static int __init acpi_parse_cfmws(union acpi_subtable_headers *header,
 	int node;
 
 	cfmws = (struct acpi_cedt_cfmws *)header;
+
+	/* At least one firmware reports disabled entries with size 0 */
+	if (cfmws->window_size == 0)
+		return 0;
+
 	start = cfmws->base_hpa;
 	end = cfmws->base_hpa + cfmws->window_size;
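[The guard above, restated standalone; a sketch, with an invented helper
name.]

#include <linux/acpi.h>

/*
 * A CFMWS whose window_size is 0 describes no HPA range at all, so
 * treat it as disabled and parse nothing from it rather than creating
 * an empty NUMA node.
 */
static bool example_cfmws_usable(const struct acpi_cedt_cfmws *cfmws)
{
	return cfmws->window_size != 0;
}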
From patchwork Wed May 29 17:12:36 2024
From: Jonathan Cameron
Subject: [RFC PATCH 8/8] HACK: mm: memory_hotplug: Drop memblock_phys_free() call in try_remove_memory()
Date: Wed, 29 May 2024 18:12:36 +0100
Message-ID: <20240529171236.32002-9-Jonathan.Cameron@huawei.com>
In-Reply-To: <20240529171236.32002-1-Jonathan.Cameron@huawei.com>

I'm not sure what this is balancing, but if it is necessary then the
reserved memblock approach can't be used to stash NUMA node
assignments, as after the first add/remove cycle the entry is dropped
and so is not available if memory is re-added at the same HPA.

This patch is here to hopefully spur comments on what this is there
for!

Signed-off-by: Jonathan Cameron
---
 mm/memory_hotplug.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 431b1f6753c0..3d8dd4749dfc 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -2284,7 +2284,7 @@ static int __ref try_remove_memory(u64 start, u64 size)
 	}
 
 	if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
-		memblock_phys_free(start, size);
+		// memblock_phys_free(start, size);
 		memblock_remove(start, size);
 	}
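[For reference, an approximate sketch of the add-side counterpart under
CONFIG_ARCH_KEEP_MEMBLOCK; a hedged reconstruction, not verbatim
add_memory_resource().]

#include <linux/memblock.h>

/*
 * Approximate add-side flow: hotplugged memory is added to
 * memblock.memory only, so memblock_remove() is its natural inverse.
 * memblock_phys_free() additionally drops any memblock.reserved entry
 * for the range, including the nid stash this series relies on across
 * add/remove cycles, which is what motivates the hack above.
 */
static int example_add_side(u64 start, u64 size, int nid)
{
	return memblock_add_node(start, size, nid, MEMBLOCK_NONE);
}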