From patchwork Fri Mar 18 15:25:40 2022
X-Patchwork-Submitter: Luca Fancellu
X-Patchwork-Id: 12785499
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com, Stefano Stabellini,
 Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap,
 Jan Beulich, Wei Liu
Subject: [PATCH v3 5/6] arm/dom0less: assign dom0less guests to cpupools
Date: Fri, 18 Mar 2022 15:25:40 +0000
Message-Id: <20220318152541.7460-6-luca.fancellu@arm.com>
In-Reply-To: <20220318152541.7460-1-luca.fancellu@arm.com>
References: <20220318152541.7460-1-luca.fancellu@arm.com>

Introduce the domain-cpupool property of a xen,domain device tree node:
it is the handle of a xen,cpupool node identifying a cpupool, created at
boot time, to which the guest will be assigned on creation.

Add a cpupool_id member to the xen_domctl_createdomain public interface
and, as a consequence, bump XEN_DOMCTL_INTERFACE_VERSION.

Add a public function to retrieve the pool id from a device tree cpupool
node.

Update the documentation about the property.

Signed-off-by: Luca Fancellu
Reviewed-by: Stefano Stabellini
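
For illustration, a boot-time cpupool and a domU assigned to it could be
described as in the fragment below (the node names and cpu phandles are
examples, not mandated by the binding):

    cpupool_a {
        compatible = "xen,cpupool";
        cpupool-cpus = <&cpu0 &cpu1>;
    };

    domU1 {
        compatible = "xen,domain";
        domain-cpupool = <&cpupool_a>;
        ...
    };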
---
Changes in v3:
- Use an explicitly sized integer for the struct xen_domctl_createdomain
  cpupool_id member. (Stefano)
- Changed code due to previous commit code changes
Changes in v2:
- Moved cpupool_id from arch specific to common part (Juergen)
- Implemented functions to retrieve the cpupool id from the cpupool
  dtb node.
---
 docs/misc/arm/device-tree/booting.txt |  5 +++++
 xen/arch/arm/domain_build.c           | 14 +++++++++++++-
 xen/common/boot_cpupools.c            | 24 ++++++++++++++++++++++++
 xen/common/domain.c                   |  2 +-
 xen/include/public/domctl.h           |  4 +++-
 xen/include/xen/sched.h               |  9 +++++++++
 6 files changed, 55 insertions(+), 3 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index a94125394e35..7b4a29a2c293 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -188,6 +188,11 @@ with the following properties:
     An empty property to request the memory of the domain to be
     direct-map (guest physical address == physical address).
 
+- domain-cpupool
+
+    Optional. Handle to a xen,cpupool device tree node that identifies the
+    cpupool where the guest will be started at boot.
+
 Under the "xen,domain" compatible node, one or more sub-nodes are present
 for the DomU kernel and ramdisk.
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 8be01678de05..9c67a483d4a4 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3172,7 +3172,8 @@ static int __init construct_domU(struct domain *d,
 void __init create_domUs(void)
 {
     struct dt_device_node *node;
-    const struct dt_device_node *chosen = dt_find_node_by_path("/chosen");
+    const struct dt_device_node *cpupool_node,
+                                *chosen = dt_find_node_by_path("/chosen");
 
     BUG_ON(chosen == NULL);
     dt_for_each_child_node(chosen, node)
@@ -3241,6 +3242,17 @@
                                          vpl011_virq - 32 + 1);
         }
 
+        /* Get the optional property domain-cpupool */
+        cpupool_node = dt_parse_phandle(node, "domain-cpupool", 0);
+        if ( cpupool_node )
+        {
+            int pool_id = btcpupools_get_domain_pool_id(cpupool_node);
+            if ( pool_id < 0 )
+                panic("Error getting cpupool id from domain-cpupool (%d)\n",
+                      pool_id);
+            d_cfg.cpupool_id = pool_id;
+        }
+
         /*
          * The variable max_init_domid is initialized with zero, so here it's
          * very important to use the pre-increment operator to call
diff --git a/xen/common/boot_cpupools.c b/xen/common/boot_cpupools.c
index f6f2fa8f2701..feba93a243fc 100644
--- a/xen/common/boot_cpupools.c
+++ b/xen/common/boot_cpupools.c
@@ -23,6 +23,8 @@ static unsigned int __initdata next_pool_id;
 
 #define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
 #define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
+#define BTCPUPOOLS_DT_WRONG_NODE      (-3)
+#define BTCPUPOOLS_DT_CORRUPTED_NODE  (-4)
 
 static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
 {
@@ -55,6 +57,28 @@ get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node)
     return cpu_num;
 }
 
+int __init btcpupools_get_domain_pool_id(const struct dt_device_node *node)
+{
+    const struct dt_device_node *phandle_node;
+    int cpu_num;
+
+    if ( !dt_device_is_compatible(node, "xen,cpupool") )
+        return BTCPUPOOLS_DT_WRONG_NODE;
+    /*
+     * Get the first cpu listed in the cpupool; from its reg property it is
+     * possible to retrieve the cpupool id.
+ */ + phandle_node = dt_parse_phandle(node, "cpupool-cpus", 0); + if ( !phandle_node ) + return BTCPUPOOLS_DT_CORRUPTED_NODE; + + cpu_num = get_logical_cpu_from_cpu_node(phandle_node); + if ( cpu_num < 0 ) + return cpu_num; + + return pool_cpu_map[cpu_num]; +} + static int __init check_and_get_sched_id(const char* scheduler_name) { int sched_id = sched_get_id_by_name(scheduler_name); diff --git a/xen/common/domain.c b/xen/common/domain.c index 351029f8b239..0827400f4f49 100644 --- a/xen/common/domain.c +++ b/xen/common/domain.c @@ -698,7 +698,7 @@ struct domain *domain_create(domid_t domid, if ( !d->pbuf ) goto fail; - if ( (err = sched_init_domain(d, 0)) != 0 ) + if ( (err = sched_init_domain(d, config->cpupool_id)) != 0 ) goto fail; if ( (err = late_hwdom_init(d)) != 0 ) diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h index b85e6170b0aa..2f4cf56f438d 100644 --- a/xen/include/public/domctl.h +++ b/xen/include/public/domctl.h @@ -38,7 +38,7 @@ #include "hvm/save.h" #include "memory.h" -#define XEN_DOMCTL_INTERFACE_VERSION 0x00000014 +#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015 /* * NB. xen_domctl.domain is an IN/OUT parameter for this operation. @@ -106,6 +106,8 @@ struct xen_domctl_createdomain { /* Per-vCPU buffer size in bytes. 0 to disable. */ uint32_t vmtrace_size; + uint32_t cpupool_id; + struct xen_arch_domainconfig arch; }; diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index 5d83465d3915..4e749a604f25 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -1182,6 +1182,7 @@ unsigned int btcpupools_get_cpupool_id(unsigned int cpu); #ifdef CONFIG_HAS_DEVICE_TREE void btcpupools_dtb_parse(void); +int btcpupools_get_domain_pool_id(const struct dt_device_node *node); #else static inline void btcpupools_dtb_parse(void) {} #endif @@ -1193,6 +1194,14 @@ static inline unsigned int btcpupools_get_cpupool_id(unsigned int cpu) { return 0; } +#ifdef CONFIG_HAS_DEVICE_TREE +static inline int +btcpupools_get_domain_pool_id(const struct dt_device_node *node) +{ + return 0; +} +#endif + #endif #endif /* __SCHED_H__ */