From patchwork Mon May 29 02:19:05 2023
X-Patchwork-Submitter: Henry Wang
X-Patchwork-Id: 13257991
From: Henry Wang
To: xen-devel@lists.xenproject.org
Cc: Wei Chen, Stefano Stabellini, Julien Grall, Bertrand Marquis,
    Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich,
    Wei Liu, Henry Wang
Subject: [PATCH v5 01/17] xen/arm: use NR_MEM_BANKS to override default NR_NODE_MEMBLKS
Date: Mon, 29 May 2023 10:19:05 +0800
Message-Id: <20230529021921.2606623-2-Henry.Wang@arm.com>
In-Reply-To: <20230529021921.2606623-1-Henry.Wang@arm.com>
References: <20230529021921.2606623-1-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1

From: Wei Chen

A memory range described in the device tree cannot be split across
multiple nodes, and it is very likely that if you have more than 64
nodes, you will need a lot more than 2 regions per node. So the
default NR_NODE_MEMBLKS value (MAX_NUMNODES * 2) makes no sense on
Arm.

So, for Arm, just define NR_NODE_MEMBLKS as an alias to NR_MEM_BANKS.
In the future NR_MEM_BANKS will be user-configurable via Kconfig, but
for now leave NR_MEM_BANKS as 128 on Arm. This avoids having a
different way to define the value for NUMA vs non-NUMA builds.

Further discussions can be found here[1].
[1] https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02322.html

Signed-off-by: Wei Chen
Signed-off-by: Henry Wang
Acked-by: Jan Beulich
---
v4 -> v5:
1. No change.

v3 -> v4:
1. Add Acked-by tag from Jan.

v2 -> v3:
By checking the discussion in [1] and [2]:
[1] https://lists.xenproject.org/archives/html/xen-devel/2023-01/msg00595.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02322.html
1. No change.

v1 -> v2:
1. Add code comments to explain using NR_MEM_BANKS for Arm.
2. Refine commit messages.
---
 xen/arch/arm/include/asm/numa.h | 19 ++++++++++++++++++-
 xen/include/xen/numa.h          |  9 +++++++++
 2 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index e2bee2bd82..7d6ae36a19 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -3,9 +3,26 @@
 
 #include
 
+#include
+
 typedef u8 nodeid_t;
 
-#ifndef CONFIG_NUMA
+#ifdef CONFIG_NUMA
+
+/*
+ * It is very likely that if you have more than 64 nodes, you may
+ * need a lot more than 2 regions per node. So, for Arm, we would
+ * just define NR_NODE_MEMBLKS as an alias to NR_MEM_BANKS.
+ * And in the future NR_MEM_BANKS will be bumped for new platforms,
+ * but for now leave NR_MEM_BANKS as it is on Arm. This avoids having
+ * a different way to define the value for NUMA vs non-NUMA.
+ *
+ * Further discussions can be found here:
+ * https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02322.html
+ */
+#define NR_NODE_MEMBLKS NR_MEM_BANKS
+
+#else
 
 /* Fake one node for now. See also node_online_map. */
 #define cpu_to_node(cpu) 0

diff --git a/xen/include/xen/numa.h b/xen/include/xen/numa.h
index 29b8c2df89..b86d0851fc 100644
--- a/xen/include/xen/numa.h
+++ b/xen/include/xen/numa.h
@@ -13,7 +13,16 @@
 #define MAX_NUMNODES 1
 #endif
 
+/*
+ * Some architectures may have different considerations for the
+ * number of node memory blocks. They can define their own
+ * NR_NODE_MEMBLKS in asm/numa.h to reflect their architectural
+ * implementation. If the arch does not have a specific implementation,
+ * the following default NR_NODE_MEMBLKS will be used.
+ */
+#ifndef NR_NODE_MEMBLKS
 #define NR_NODE_MEMBLKS (MAX_NUMNODES * 2)
+#endif
 
 #define vcpu_to_node(v) (cpu_to_node((v)->processor))
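
(Editor's note: the sketch below is only a minimal standalone illustration of the
#ifndef override pattern the patch relies on, not the real Xen headers. The
single-file layout and the MAX_NUMNODES value of 1 are stand-ins chosen so the
default and the override produce visibly different results: an architecture
definition of NR_NODE_MEMBLKS made before the common default is evaluated takes
precedence over the (MAX_NUMNODES * 2) fallback.)

/*
 * Minimal standalone sketch of the NR_NODE_MEMBLKS override pattern.
 * Values and file layout are illustrative only.
 */
#include <stdio.h>

/* Stand-in for the Arm asm/numa.h side: the arch defines its own value. */
#define NR_MEM_BANKS    128
#define NR_NODE_MEMBLKS NR_MEM_BANKS

/*
 * Stand-in for the common xen/numa.h side: the default applies only when
 * no architecture-specific override exists.
 */
#define MAX_NUMNODES 1
#ifndef NR_NODE_MEMBLKS
#define NR_NODE_MEMBLKS (MAX_NUMNODES * 2)
#endif

int main(void)
{
    /* With the override in place this prints 128, not MAX_NUMNODES * 2. */
    printf("NR_NODE_MEMBLKS = %d\n", NR_NODE_MEMBLKS);
    return 0;
}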