From patchwork Tue Feb 15 10:15:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Fancellu X-Patchwork-Id: 12746854 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3CD09C433F5 for ; Tue, 15 Feb 2022 10:16:16 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.272920.467913 (Exim 4.92) (envelope-from ) id 1nJusP-0002sS-Ub; Tue, 15 Feb 2022 10:16:05 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 272920.467913; Tue, 15 Feb 2022 10:16:05 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1nJusP-0002sL-RP; Tue, 15 Feb 2022 10:16:05 +0000 Received: by outflank-mailman (input) for mailman id 272920; Tue, 15 Feb 2022 10:16:04 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1nJusO-0002aj-JD for xen-devel@lists.xenproject.org; Tue, 15 Feb 2022 10:16:04 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id 47b5351e-8e48-11ec-b215-9bbe72dcb22c; Tue, 15 Feb 2022 11:16:03 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id ED08E13D5; Tue, 15 Feb 2022 02:16:02 -0800 (PST) Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com [10.1.195.16]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2D2E83F66F; Tue, 15 Feb 2022 02:16:02 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 47b5351e-8e48-11ec-b215-9bbe72dcb22c From: Luca Fancellu To: xen-devel@lists.xenproject.org Cc: wei.chen@arm.com, Wei Liu , Anthony PERARD , Juergen Gross Subject: [PATCH 1/5] tools/cpupools: Give a name to unnamed cpupools Date: Tue, 15 Feb 2022 10:15:47 +0000 Message-Id: <20220215101551.23101-2-luca.fancellu@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220215101551.23101-1-luca.fancellu@arm.com> References: <20220215101551.23101-1-luca.fancellu@arm.com> With the introduction of boot time cpupools, Xen can create many different cpupools at boot time other than cpupool with id 0. Since these newly created cpupools can't have an entry in Xenstore, create the entry using xen-init-dom0 helper with the usual convention: Pool-. Given the change, remove the check for poolid == 0 from libxl_cpupoolid_to_name(...). 
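For reference, an entry written with this convention can be read back with plain libxenstore calls; the minimal sketch below is not part of the patch (the pool id and buffer sizes are arbitrary example values) and simply mirrors what libxl_cpupoolid_to_name() does once the special-casing of pool 0 is gone:

#include <stdio.h>
#include <stdlib.h>
#include <xenstore.h>

int main(int argc, char **argv)
{
    unsigned int len, poolid = (argc > 1) ? atoi(argv[1]) : 0;
    struct xs_handle *xsh = xs_open(0);
    char path[64], *name;
    int rc = 1;

    if (!xsh)
        return 1;

    /* Same convention used by xen-init-dom0: /local/pool/<id>/name */
    snprintf(path, sizeof(path), "/local/pool/%u/name", poolid);
    name = xs_read(xsh, XBT_NULL, path, &len);
    if (name) {
        printf("cpupool %u is named %s\n", poolid, name);
        free(name);
        rc = 0;
    }

    xs_close(xsh);
    return rc;
}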
Signed-off-by: Luca Fancellu --- tools/helpers/xen-init-dom0.c | 26 +++++++++++++++++++++++++- tools/libs/light/libxl_utils.c | 3 +-- 2 files changed, 26 insertions(+), 3 deletions(-) diff --git a/tools/helpers/xen-init-dom0.c b/tools/helpers/xen-init-dom0.c index c99224a4b607..3539f56faeb0 100644 --- a/tools/helpers/xen-init-dom0.c +++ b/tools/helpers/xen-init-dom0.c @@ -43,7 +43,10 @@ int main(int argc, char **argv) int rc; struct xs_handle *xsh = NULL; xc_interface *xch = NULL; - char *domname_string = NULL, *domid_string = NULL; + char *domname_string = NULL, *domid_string = NULL, *pool_string = NULL; + char pool_path[strlen("/local/pool") + 12], pool_name[strlen("Pool-") + 5]; + xc_cpupoolinfo_t *xcinfo; + unsigned int pool_id = 0; libxl_uuid uuid; /* Accept 0 or 1 argument */ @@ -114,6 +117,27 @@ int main(int argc, char **argv) goto out; } + /* Create an entry in xenstore for each cpupool on the system */ + do { + xcinfo = xc_cpupool_getinfo(xch, pool_id); + if (xcinfo != NULL) { + if (xcinfo->cpupool_id != pool_id) + pool_id = xcinfo->cpupool_id; + snprintf(pool_path, sizeof(pool_path), "/local/pool/%d/name", + pool_id); + snprintf(pool_name, sizeof(pool_name), "Pool-%d", pool_id); + pool_id++; + if (!xs_write(xsh, XBT_NULL, pool_path, pool_name, + strlen(pool_name))) { + fprintf(stderr, "cannot set pool name\n"); + rc = 1; + } + xc_cpupool_infofree(xch, xcinfo); + if (rc) + goto out; + } + } while(xcinfo != NULL); + printf("Done setting up Dom0\n"); out: diff --git a/tools/libs/light/libxl_utils.c b/tools/libs/light/libxl_utils.c index b91c2cafa223..81780da3ff40 100644 --- a/tools/libs/light/libxl_utils.c +++ b/tools/libs/light/libxl_utils.c @@ -151,8 +151,7 @@ char *libxl_cpupoolid_to_name(libxl_ctx *ctx, uint32_t poolid) snprintf(path, sizeof(path), "/local/pool/%d/name", poolid); s = xs_read(ctx->xsh, XBT_NULL, path, &len); - if (!s && (poolid == 0)) - return strdup("Pool-0"); + return s; } From patchwork Tue Feb 15 10:15:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Fancellu X-Patchwork-Id: 12746856 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B865AC4332F for ; Tue, 15 Feb 2022 10:16:17 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.272922.467936 (Exim 4.92) (envelope-from ) id 1nJusS-0003Qs-GX; Tue, 15 Feb 2022 10:16:08 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 272922.467936; Tue, 15 Feb 2022 10:16:08 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1nJusS-0003Qj-Br; Tue, 15 Feb 2022 10:16:08 +0000 Received: by outflank-mailman (input) for mailman id 272922; Tue, 15 Feb 2022 10:16:06 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1nJusQ-0002aj-Jf for xen-devel@lists.xenproject.org; Tue, 15 Feb 2022 10:16:06 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id 4894bf6c-8e48-11ec-b215-9bbe72dcb22c; Tue, 15 Feb 
2022 11:16:05 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 772011424; Tue, 15 Feb 2022 02:16:04 -0800 (PST) Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com [10.1.195.16]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2B9473F66F; Tue, 15 Feb 2022 02:16:03 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 4894bf6c-8e48-11ec-b215-9bbe72dcb22c From: Luca Fancellu To: xen-devel@lists.xenproject.org Cc: wei.chen@arm.com, Juergen Gross , Dario Faggioli , George Dunlap , Andrew Cooper , Jan Beulich , Julien Grall , Stefano Stabellini , Wei Liu Subject: [PATCH 2/5] xen/sched: create public function for cpupools creation Date: Tue, 15 Feb 2022 10:15:48 +0000 Message-Id: <20220215101551.23101-3-luca.fancellu@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220215101551.23101-1-luca.fancellu@arm.com> References: <20220215101551.23101-1-luca.fancellu@arm.com> Create new public function to create cpupools, it checks for pool id uniqueness before creating the pool and can take a scheduler id or a negative value that means the default Xen scheduler will be used. Signed-off-by: Luca Fancellu Reviewed-by: Juergen Gross --- xen/common/sched/cpupool.c | 26 ++++++++++++++++++++++++++ xen/include/xen/sched.h | 17 +++++++++++++++++ 2 files changed, 43 insertions(+) diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c index 8c6e6eb9ccd5..4da12528d6b9 100644 --- a/xen/common/sched/cpupool.c +++ b/xen/common/sched/cpupool.c @@ -1218,6 +1218,32 @@ static void cpupool_hypfs_init(void) #endif /* CONFIG_HYPFS */ +struct cpupool *__init cpupool_create_pool(unsigned int pool_id, int sched_id) +{ + struct cpupool *pool; + + ASSERT(!spin_is_locked(&cpupool_lock)); + + spin_lock(&cpupool_lock); + /* Check if a cpupool with pool_id exists */ + pool = __cpupool_find_by_id(pool_id, true); + spin_unlock(&cpupool_lock); + + /* Pool exists, return an error */ + if ( pool ) + return NULL; + + if ( sched_id < 0 ) + sched_id = scheduler_get_default()->sched_id; + + pool = cpupool_create(pool_id, sched_id); + + BUG_ON(IS_ERR(pool)); + cpupool_put(pool); + + return pool; +} + static int __init cpupool_init(void) { unsigned int cpu; diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index 37f78cc4c4c9..a50df1bccdc0 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -1145,6 +1145,23 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c); int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op); unsigned int cpupool_get_id(const struct domain *d); const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool); + +/* + * cpupool_create_pool - Creates a cpupool + * @pool_id: id of the pool to be created + * @sched_id: id of the scheduler to be used for the pool + * + * Creates a cpupool with pool_id id, the id must be unique and the function + * will return an error if the pool id exists. + * The sched_id parameter identifies the scheduler to be used, if it is + * negative, the default scheduler of Xen will be used. 
+ * + * returns: + * pointer to the struct cpupool just created, on success + * NULL, on cpupool creation error + */ +struct cpupool *cpupool_create_pool(unsigned int pool_id, int sched_id); + extern void dump_runq(unsigned char key); void arch_do_physinfo(struct xen_sysctl_physinfo *pi); From patchwork Tue Feb 15 10:15:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Fancellu X-Patchwork-Id: 12746857 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8B922C433EF for ; Tue, 15 Feb 2022 10:16:19 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.272924.467947 (Exim 4.92) (envelope-from ) id 1nJusT-0003if-RO; Tue, 15 Feb 2022 10:16:09 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 272924.467947; Tue, 15 Feb 2022 10:16:09 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1nJusT-0003iR-Nc; Tue, 15 Feb 2022 10:16:09 +0000 Received: by outflank-mailman (input) for mailman id 272924; Tue, 15 Feb 2022 10:16:07 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1nJusR-0002aj-Js for xen-devel@lists.xenproject.org; Tue, 15 Feb 2022 10:16:07 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id 49757801-8e48-11ec-b215-9bbe72dcb22c; Tue, 15 Feb 2022 11:16:06 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0D7F11063; Tue, 15 Feb 2022 02:16:06 -0800 (PST) Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com [10.1.195.16]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id ACF783F66F; Tue, 15 Feb 2022 02:16:04 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 49757801-8e48-11ec-b215-9bbe72dcb22c From: Luca Fancellu To: xen-devel@lists.xenproject.org Cc: wei.chen@arm.com, George Dunlap , Dario Faggioli , Andrew Cooper , Jan Beulich , Julien Grall , Stefano Stabellini , Wei Liu Subject: [PATCH 3/5] xen/sched: retrieve scheduler id by name Date: Tue, 15 Feb 2022 10:15:49 +0000 Message-Id: <20220215101551.23101-4-luca.fancellu@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220215101551.23101-1-luca.fancellu@arm.com> References: <20220215101551.23101-1-luca.fancellu@arm.com> Add a public function to retrieve the scheduler id by the scheduler name. 
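Taken together with cpupool_create_pool() from the previous patch, the intended usage is roughly as follows. This is only a sketch, not code from the series: the scheduler name "credit2" and pool id 1 are arbitrary example values, and the real caller is introduced in patch 4.

static void __init example_create_pool(void)
{
    /* Returns -1 if "credit2" is not compiled in; cpupool_create_pool()
     * then falls back to the default Xen scheduler. */
    int sched_id = sched_get_id_by_name("credit2");
    struct cpupool *pool;

    /* Returns NULL if a pool with id 1 already exists. */
    pool = cpupool_create_pool(1, sched_id);
    if ( !pool )
        panic("Could not create cpupool with id 1!\n");
}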
Signed-off-by: Luca Fancellu --- xen/common/sched/core.c | 11 +++++++++++ xen/include/xen/sched.h | 11 +++++++++++ 2 files changed, 22 insertions(+) diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c index 8f4b1ca10d1c..9696d3c1d769 100644 --- a/xen/common/sched/core.c +++ b/xen/common/sched/core.c @@ -2947,6 +2947,17 @@ void scheduler_enable(void) scheduler_active = true; } +int __init sched_get_id_by_name(const char *sched_name) +{ + unsigned int i; + + for ( i = 0; i < NUM_SCHEDULERS; i++ ) + if ( schedulers[i] && !strcmp(schedulers[i]->opt_name, sched_name) ) + return schedulers[i]->sched_id; + + return -1; +} + /* Initialise the data structures. */ void __init scheduler_init(void) { diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index a50df1bccdc0..a67a9eb2fe9d 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -756,6 +756,17 @@ void sched_destroy_domain(struct domain *d); long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *); long sched_adjust_global(struct xen_sysctl_scheduler_op *); int sched_id(void); + +/* + * sched_get_id_by_name - retrieves a scheduler id given a scheduler name + * @sched_name: scheduler name as a string + * + * returns: + * positive value being the scheduler id, on success + * negative value if the scheduler name is not found. + */ +int sched_get_id_by_name(const char *sched_name); + void vcpu_wake(struct vcpu *v); long vcpu_yield(void); void vcpu_sleep_nosync(struct vcpu *v); From patchwork Tue Feb 15 10:15:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Fancellu X-Patchwork-Id: 12746858 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 45F65C433FE for ; Tue, 15 Feb 2022 10:16:20 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.272925.467958 (Exim 4.92) (envelope-from ) id 1nJusV-000404-5Q; Tue, 15 Feb 2022 10:16:11 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 272925.467958; Tue, 15 Feb 2022 10:16:11 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1nJusV-0003zm-0H; Tue, 15 Feb 2022 10:16:11 +0000 Received: by outflank-mailman (input) for mailman id 272925; Tue, 15 Feb 2022 10:16:10 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1nJusU-0003iX-4j for xen-devel@lists.xenproject.org; Tue, 15 Feb 2022 10:16:10 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 4ab89378-8e48-11ec-8eb8-a37418f5ba1a; Tue, 15 Feb 2022 11:16:08 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 13B5B13D5; Tue, 15 Feb 2022 02:16:08 -0800 (PST) Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com [10.1.195.16]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4872E3F66F; Tue, 15 Feb 2022 02:16:06 -0800 (PST) X-BeenThere: 
xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 4ab89378-8e48-11ec-8eb8-a37418f5ba1a From: Luca Fancellu To: xen-devel@lists.xenproject.org Cc: wei.chen@arm.com, Stefano Stabellini , Julien Grall , Volodymyr Babchuk , Bertrand Marquis , Andrew Cooper , George Dunlap , Jan Beulich , Wei Liu , Juergen Gross , Dario Faggioli Subject: [PATCH 4/5] xen/cpupool: Create different cpupools at boot time Date: Tue, 15 Feb 2022 10:15:50 +0000 Message-Id: <20220215101551.23101-5-luca.fancellu@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220215101551.23101-1-luca.fancellu@arm.com> References: <20220215101551.23101-1-luca.fancellu@arm.com> Introduce an architecture specific way to create different cpupools at boot time, this is particularly useful on ARM big.LITTLE system where there might be the need to have different cpupools for each type of core, but also systems using NUMA can have different cpu pools for each node. The feature on arm relies on a specification of the cpupools from the device tree to build pools and assign cpus to them. Documentation is created to explain the feature. Signed-off-by: Luca Fancellu --- docs/misc/arm/device-tree/cpupools.txt | 118 +++++++++++++++++++++++++ xen/arch/arm/Kconfig | 9 ++ xen/arch/arm/Makefile | 1 + xen/arch/arm/cpupool.c | 118 +++++++++++++++++++++++++ xen/common/sched/cpupool.c | 4 +- xen/include/xen/sched.h | 11 +++ 6 files changed, 260 insertions(+), 1 deletion(-) create mode 100644 docs/misc/arm/device-tree/cpupools.txt create mode 100644 xen/arch/arm/cpupool.c diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt new file mode 100644 index 000000000000..7298b6394332 --- /dev/null +++ b/docs/misc/arm/device-tree/cpupools.txt @@ -0,0 +1,118 @@ +Boot time cpupools +================== + +On arm, when BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is +possible to create cpupools during boot phase by specifying them in the device +tree. + +Cpupools specification nodes shall be direct childs of /chosen node. +Each cpupool node contains the following properties: + +- compatible (mandatory) + + Must always include the compatiblity string: "xen,cpupool". + +- cpupool-id (mandatory) + + Must be a positive integer number. + +- cpupool-cpus (mandatory) + + Must be a list of device tree phandle to nodes describing cpus (e.g. having + device_type = "cpu"), it can't be empty. + +- cpupool-sched (optional) + + Must be a string having the name of a Xen scheduler, it has no effect when + used in conjunction of a cpupool-id equal to zero, in that case the + default Xen scheduler is selected (sched=<...> boot argument). + + +Constraints +=========== + +The cpupool with id zero is implicitly created even if not specified, that pool +must have at least one cpu assigned, otherwise Xen will stop. + +Every cpu brought up by Xen will be assigned to the cpupool with id zero if it's +not assigned to any other cpupool. + +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will +stop. + + +Examples +======== + +A system having two types of core, the following device tree specification will +instruct Xen to have two cpupools: + +- The cpupool with id 0 will have 4 cpus assigned. +- The cpupool with id 1 will have 2 cpus assigned. 
+ +As can be seen from the example, cpupool_a has only two cpus assigned, but since +there are two cpus unassigned, they are automatically assigned to cpupool with +id zero. The following example can work only if hmp-unsafe=1 is passed to Xen +boot arguments, otherwise not all cores will be brought up by Xen and the +cpupool creation process will stop Xen. + + +a72_1: cpu@0 { + compatible = "arm,cortex-a72"; + reg = <0x0 0x0>; + device_type = "cpu"; + [...] +}; + +a72_2: cpu@1 { + compatible = "arm,cortex-a72"; + reg = <0x0 0x1>; + device_type = "cpu"; + [...] +}; + +a53_1: cpu@100 { + compatible = "arm,cortex-a53"; + reg = <0x0 0x100>; + device_type = "cpu"; + [...] +}; + +a53_2: cpu@101 { + compatible = "arm,cortex-a53"; + reg = <0x0 0x101>; + device_type = "cpu"; + [...] +}; + +cpu@102 { + compatible = "arm,cortex-a53"; + reg = <0x0 0x102>; + device_type = "cpu"; + [...] +}; + +cpu@103 { + compatible = "arm,cortex-a53"; + reg = <0x0 0x103>; + device_type = "cpu"; + [...] +}; + +chosen { + + cpupool_a { + compatible = "xen,cpupool"; + cpupool-id = <0>; + cpupool-cpus = <&a53_1 &a53_2>; + }; + cpupool_b { + compatible = "xen,cpupool"; + cpupool-id = <1>; + cpupool-cpus = <&a72_1 &a72_2>; + cpupool-sched = "credit2"; + }; + + [...] + +}; diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig index ecfa6822e4d3..64c2879513b7 100644 --- a/xen/arch/arm/Kconfig +++ b/xen/arch/arm/Kconfig @@ -33,6 +33,15 @@ config ACPI Advanced Configuration and Power Interface (ACPI) support for Xen is an alternative to device tree on ARM64. +config BOOT_TIME_CPUPOOLS + bool "Create cpupools at boot time" + depends on ARM + default n + help + + Creates cpupools during boot time and assigns cpus to them. Cpupools + options can be specified in the device tree. + config GICV3 bool "GICv3 driver" depends on ARM_64 && !NEW_VGIC diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile index d0dee10102b6..6165da4e77b4 100644 --- a/xen/arch/arm/Makefile +++ b/xen/arch/arm/Makefile @@ -13,6 +13,7 @@ obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o obj-y += bootfdt.init.o obj-y += cpuerrata.o obj-y += cpufeature.o +obj-$(CONFIG_BOOT_TIME_CPUPOOLS) += cpupool.o obj-y += decode.o obj-y += device.o obj-$(CONFIG_IOREQ_SERVER) += dm.o diff --git a/xen/arch/arm/cpupool.c b/xen/arch/arm/cpupool.c new file mode 100644 index 000000000000..a9d5b94635b9 --- /dev/null +++ b/xen/arch/arm/cpupool.c @@ -0,0 +1,118 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * xen/arch/arm/cpupool.c + * + * Code to create cpupools at boot time for arm architecture. + * + * Copyright (C) 2022 Arm Ltd. 
+ */ + +#include + +static struct cpupool *__initdata pool_cpu_map[NR_CPUS]; + +void __init arch_allocate_cpupools(const cpumask_t *cpu_online_map) +{ + const struct dt_device_node *chosen, *node; + unsigned int cpu_num, cpupool0_cpu_count = 0; + cpumask_t cpus_to_assign; + + chosen = dt_find_node_by_path("/chosen"); + if ( !chosen ) + return; + + cpumask_copy(&cpus_to_assign, cpu_online_map); + + dt_for_each_child_node(chosen, node) + { + const struct dt_device_node *cpu_node; + unsigned int pool_id; + int i = 0, sched_id = -1; + const char* scheduler_name; + struct cpupool *pool = cpupool0; + + if ( !dt_device_is_compatible(node, "xen,cpupool") ) + continue; + + if ( !dt_property_read_u32(node, "cpupool-id", &pool_id) ) + panic("Missing cpupool-id property!\n"); + + if ( !dt_property_read_string(node, "cpupool-sched", &scheduler_name) ) + { + sched_id = sched_get_id_by_name(scheduler_name); + if ( sched_id < 0 ) + panic("Scheduler %s does not exists!\n", scheduler_name); + } + + if ( pool_id ) + { + pool = cpupool_create_pool(pool_id, sched_id); + if ( !pool ) + panic("Error creating pool id %u!\n", pool_id); + } + + cpu_node = dt_parse_phandle(node, "cpupool-cpus", 0); + if ( !cpu_node ) + panic("Missing or empty cpupool-cpus property!\n"); + + while ( cpu_node ) + { + register_t cpu_reg; + const __be32 *prop; + + prop = dt_get_property(cpu_node, "reg", NULL); + if ( !prop ) + panic("cpupool-cpus pointed node has no reg property!\n"); + + cpu_reg = dt_read_number(prop, dt_n_addr_cells(cpu_node)); + + /* Check if the cpu is online and in the set to be assigned */ + for_each_cpu ( cpu_num, &cpus_to_assign ) + if ( cpu_logical_map(cpu_num) == cpu_reg ) + break; + + if ( cpu_num >= nr_cpu_ids ) + panic("Cpu found in %s is not online or it's assigned twice!\n", + dt_node_name(node)); + + pool_cpu_map[cpu_num] = pool; + cpumask_clear_cpu(cpu_num, &cpus_to_assign); + + printk(XENLOG_INFO "CPU with MPIDR %"PRIregister" in Pool-%u.\n", + cpu_reg, pool_id); + + /* Keep track of how many cpus are assigned to Pool-0 */ + if ( !pool_id ) + cpupool0_cpu_count++; + + cpu_node = dt_parse_phandle(node, "cpupool-cpus", ++i); + } + } + + /* Assign every non assigned cpu to Pool-0 */ + for_each_cpu ( cpu_num, &cpus_to_assign ) + { + pool_cpu_map[cpu_num] = cpupool0; + cpupool0_cpu_count++; + printk(XENLOG_INFO "CPU with MPIDR %"PRIregister" in Pool-0.\n", + cpu_logical_map(cpu_num)); + } + + if ( !cpupool0_cpu_count ) + panic("No cpu assigned to cpupool0!\n"); +} + +struct cpupool *__init arch_get_cpupool(unsigned int cpu) +{ + return pool_cpu_map[cpu]; +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c index 4da12528d6b9..6013d75e2edd 100644 --- a/xen/common/sched/cpupool.c +++ b/xen/common/sched/cpupool.c @@ -1257,12 +1257,14 @@ static int __init cpupool_init(void) cpupool_put(cpupool0); register_cpu_notifier(&cpu_nfb); + arch_allocate_cpupools(&cpu_online_map); + spin_lock(&cpupool_lock); cpumask_copy(&cpupool_free_cpus, &cpu_online_map); for_each_cpu ( cpu, &cpupool_free_cpus ) - cpupool_assign_cpu_locked(cpupool0, cpu); + cpupool_assign_cpu_locked(arch_get_cpupool(cpu), cpu); spin_unlock(&cpupool_lock); diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index a67a9eb2fe9d..dda7db2ba51f 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -1177,6 +1177,17 @@ extern void dump_runq(unsigned char key); void 
arch_do_physinfo(struct xen_sysctl_physinfo *pi); +#ifdef CONFIG_BOOT_TIME_CPUPOOLS +void arch_allocate_cpupools(const cpumask_t *cpu_online_map); +struct cpupool *arch_get_cpupool(unsigned int cpu); +#else +static inline void arch_allocate_cpupools(const cpumask_t *cpu_online_map) {} +static inline struct cpupool *arch_get_cpupool(unsigned int cpu) +{ + return cpupool0; +} +#endif + #endif /* __SCHED_H__ */ /* From patchwork Tue Feb 15 10:15:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Fancellu X-Patchwork-Id: 12746859 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 6085FC433F5 for ; Tue, 15 Feb 2022 10:16:22 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.272926.467967 (Exim 4.92) (envelope-from ) id 1nJusW-0004Jk-JK; Tue, 15 Feb 2022 10:16:12 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 272926.467967; Tue, 15 Feb 2022 10:16:12 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1nJusW-0004Ie-EY; Tue, 15 Feb 2022 10:16:12 +0000 Received: by outflank-mailman (input) for mailman id 272926; Tue, 15 Feb 2022 10:16:11 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1nJusV-0003iX-BH for xen-devel@lists.xenproject.org; Tue, 15 Feb 2022 10:16:11 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 4bb02719-8e48-11ec-8eb8-a37418f5ba1a; Tue, 15 Feb 2022 11:16:10 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AEDBD1063; Tue, 15 Feb 2022 02:16:09 -0800 (PST) Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com [10.1.195.16]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 44AAE3F66F; Tue, 15 Feb 2022 02:16:08 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 4bb02719-8e48-11ec-8eb8-a37418f5ba1a From: Luca Fancellu To: xen-devel@lists.xenproject.org Cc: wei.chen@arm.com, Stefano Stabellini , Julien Grall , Volodymyr Babchuk , Bertrand Marquis , Andrew Cooper , George Dunlap , Jan Beulich , Wei Liu , =?utf-8?q?Roger_Pau_Monn=C3=A9?= Subject: [PATCH 5/5] arm/dom0less: assign dom0less guests to cpupools Date: Tue, 15 Feb 2022 10:15:51 +0000 Message-Id: <20220215101551.23101-6-luca.fancellu@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220215101551.23101-1-luca.fancellu@arm.com> References: <20220215101551.23101-1-luca.fancellu@arm.com> Introduce domain-cpupool property of a xen,domain device tree node, that specifies the cpupool device tree handle of a xen,cpupool node that identifies a cpupool created at boot time where the guest will be assigned on creation. 
Add member to the xen_arch_domainconfig public interface so the XEN_DOMCTL_INTERFACE_VERSION version is bumped. Update documentation about the property. Signed-off-by: Luca Fancellu --- docs/misc/arm/device-tree/booting.txt | 5 +++++ xen/arch/arm/domain.c | 6 ++++++ xen/arch/arm/domain_build.c | 9 ++++++++- xen/arch/x86/domain.c | 6 ++++++ xen/common/domain.c | 5 ++++- xen/include/public/arch-arm.h | 2 ++ xen/include/public/domctl.h | 2 +- xen/include/xen/domain.h | 3 +++ 8 files changed, 35 insertions(+), 3 deletions(-) diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt index 71895663a4de..0f1f210fa449 100644 --- a/docs/misc/arm/device-tree/booting.txt +++ b/docs/misc/arm/device-tree/booting.txt @@ -182,6 +182,11 @@ with the following properties: Both #address-cells and #size-cells need to be specified because both sub-nodes (described shortly) have reg properties. +- domain-cpupool + + Optional. Handle to a xen,cpupool device tree node that identifies the + cpupool where the guest will be started at boot. + Under the "xen,domain" compatible node, one or more sub-nodes are present for the DomU kernel and ramdisk. diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index 92a6c509e5c5..be350b28b588 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -788,6 +788,12 @@ fail: return rc; } +unsigned int +arch_get_domain_cpupool_id(const struct xen_domctl_createdomain *config) +{ + return config->arch.cpupool_id; +} + void arch_domain_destroy(struct domain *d) { /* IOMMU page table is shared with P2M, always call diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index 6931c022a2e8..4f239e756775 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -3015,7 +3015,8 @@ static int __init construct_domU(struct domain *d, void __init create_domUs(void) { struct dt_device_node *node; - const struct dt_device_node *chosen = dt_find_node_by_path("/chosen"); + const struct dt_device_node *cpupool_node, + *chosen = dt_find_node_by_path("/chosen"); BUG_ON(chosen == NULL); dt_for_each_child_node(chosen, node) @@ -3053,6 +3054,12 @@ void __init create_domUs(void) GUEST_VPL011_SPI - 32 + 1); } + /* Get the optional property domain-cpupool */ + cpupool_node = dt_parse_phandle(node, "domain-cpupool", 0); + if ( cpupool_node ) + dt_property_read_u32(cpupool_node, "cpupool-id", + &d_cfg.arch.cpupool_id); + /* * The variable max_init_domid is initialized with zero, so here it's * very important to use the pre-increment operator to call diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c index ef1812dc1402..3e3cf88c9c82 100644 --- a/xen/arch/x86/domain.c +++ b/xen/arch/x86/domain.c @@ -880,6 +880,12 @@ int arch_domain_create(struct domain *d, return rc; } +unsigned int +arch_get_domain_cpupool_id(const struct xen_domctl_createdomain *config) +{ + return 0; +} + void arch_domain_destroy(struct domain *d) { if ( is_hvm_domain(d) ) diff --git a/xen/common/domain.c b/xen/common/domain.c index 2048ebad86ff..d42ca8292025 100644 --- a/xen/common/domain.c +++ b/xen/common/domain.c @@ -665,6 +665,8 @@ struct domain *domain_create(domid_t domid, if ( !is_idle_domain(d) ) { + unsigned int domain_cpupool_id; + watchdog_domain_init(d); init_status |= INIT_watchdog; @@ -698,7 +700,8 @@ struct domain *domain_create(domid_t domid, if ( !d->pbuf ) goto fail; - if ( (err = sched_init_domain(d, 0)) != 0 ) + domain_cpupool_id = arch_get_domain_cpupool_id(config); + if ( (err = sched_init_domain(d, domain_cpupool_id)) != 0 ) 
goto fail; if ( (err = late_hwdom_init(d)) != 0 ) diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h index 94b31511ddea..2c5d1ea7f01a 100644 --- a/xen/include/public/arch-arm.h +++ b/xen/include/public/arch-arm.h @@ -321,6 +321,8 @@ struct xen_arch_domainconfig { uint16_t tee_type; /* IN */ uint32_t nr_spis; + /* IN */ + unsigned int cpupool_id; /* * OUT * Based on the property clock-frequency in the DT timer node. diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h index b85e6170b0aa..31ec083cb06e 100644 --- a/xen/include/public/domctl.h +++ b/xen/include/public/domctl.h @@ -38,7 +38,7 @@ #include "hvm/save.h" #include "memory.h" -#define XEN_DOMCTL_INTERFACE_VERSION 0x00000014 +#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015 /* * NB. xen_domctl.domain is an IN/OUT parameter for this operation. diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h index 160c8dbdab33..fb018871bc17 100644 --- a/xen/include/xen/domain.h +++ b/xen/include/xen/domain.h @@ -63,6 +63,9 @@ void unmap_vcpu_info(struct vcpu *v); int arch_domain_create(struct domain *d, struct xen_domctl_createdomain *config); +unsigned int +arch_get_domain_cpupool_id(const struct xen_domctl_createdomain *config); + void arch_domain_destroy(struct domain *d); void arch_domain_shutdown(struct domain *d);
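With the whole series applied, the pool id chosen in the device tree ends up in struct xen_arch_domainconfig and is consumed by domain_create() via arch_get_domain_cpupool_id() and sched_init_domain(). The fragment below is a hypothetical Arm-side toolstack sketch of that data flow; the series itself only sets the field from the dom0less path in create_domUs(), so treating it as a libxc-level knob (and the concrete flag and vCPU values) is an assumption for illustration only.

#include <xenctrl.h>

/* Hypothetical sketch: place a new Arm domain into boot-time Pool-1.
 * A real caller (libxl) fills in many more createdomain fields. */
int example_create_domain_in_pool(xc_interface *xch, uint32_t *domid)
{
    struct xen_domctl_createdomain cfg = {
        .flags     = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
        .max_vcpus = 1,
    };

    /* New field from this patch (Arm only); read by
     * arch_get_domain_cpupool_id() and passed to sched_init_domain(). */
    cfg.arch.cpupool_id = 1;

    return xc_domain_create(xch, domid, &cfg);
}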