From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com, Wei Liu, Anthony PERARD, Juergen Gross
Subject: [PATCH v7 1/7] tools/cpupools: Give a name to unnamed cpupools
Date: Mon, 11 Apr 2022 16:20:55 +0100
Message-Id: <20220411152101.17539-2-luca.fancellu@arm.com>

With the introduction of boot time cpupools, Xen can create many
different cpupools at boot time other than the cpupool with id 0.

Since these newly created cpupools can't have an entry in Xenstore,
create the entry using the xen-init-dom0 helper with the usual
convention: Pool-<poolid>.

Given the change, remove the check for poolid == 0 from
libxl_cpupoolid_to_name(...).
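As an illustration only (a minimal sketch, not part of the patch), a client
of these entries could read a pool's name back through the plain xenstore
API; the path and the "Pool-<poolid>" value follow the convention described
above, and the pool id 1 is hypothetical:

    /* Sketch: read back the name xen-init-dom0 wrote for pool id 1. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <xenstore.h>

    int main(void)
    {
        struct xs_handle *xsh = xs_open(0);
        unsigned int len;
        char *name;

        if (!xsh)
            return 1;

        /* xen-init-dom0 writes "Pool-1" at this path for pool id 1 */
        name = xs_read(xsh, XBT_NULL, "/local/pool/1/name", &len);
        if (name)
            printf("pool 1 is named %s\n", name);

        free(name);
        xs_close(xsh);
        return 0;
    }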
Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Anthony PERARD
---
Changes in v7:
- Add R-by from Anthony
Changes in v6:
- Reworked loop to have only one error path (Anthony)
Changes in v5:
- no changes
Changes in v4:
- no changes
Changes in v3:
- no changes, add R-by
Changes in v2:
- Remove unused variable, moved xc_cpupool_infofree ahead to simplify
  the code, use asprintf (Juergen)
---
 tools/helpers/xen-init-dom0.c  | 37 +++++++++++++++++++++++++++++++++-
 tools/libs/light/libxl_utils.c |  3 +--
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/tools/helpers/xen-init-dom0.c b/tools/helpers/xen-init-dom0.c
index c99224a4b607..37eff8868f25 100644
--- a/tools/helpers/xen-init-dom0.c
+++ b/tools/helpers/xen-init-dom0.c
@@ -43,7 +43,10 @@ int main(int argc, char **argv)
     int rc;
     struct xs_handle *xsh = NULL;
     xc_interface *xch = NULL;
-    char *domname_string = NULL, *domid_string = NULL;
+    char *domname_string = NULL, *domid_string = NULL,
+         *pool_path = NULL, *pool_name = NULL;
+    xc_cpupoolinfo_t *xcinfo;
+    unsigned int pool_id = 0;
     libxl_uuid uuid;
 
     /* Accept 0 or 1 argument */
@@ -114,9 +117,41 @@ int main(int argc, char **argv)
         goto out;
     }
 
+    /* Create an entry in xenstore for each cpupool on the system */
+    do {
+        xcinfo = xc_cpupool_getinfo(xch, pool_id);
+        if (xcinfo != NULL) {
+            if (xcinfo->cpupool_id != pool_id)
+                pool_id = xcinfo->cpupool_id;
+            xc_cpupool_infofree(xch, xcinfo);
+            if (asprintf(&pool_path, "/local/pool/%d/name", pool_id) <= 0) {
+                fprintf(stderr, "cannot allocate memory for pool path\n");
+                rc = 1;
+                goto out;
+            }
+            if (asprintf(&pool_name, "Pool-%d", pool_id) <= 0) {
+                fprintf(stderr, "cannot allocate memory for pool name\n");
+                rc = 1;
+                goto out;
+            }
+            pool_id++;
+            if (!xs_write(xsh, XBT_NULL, pool_path, pool_name,
+                          strlen(pool_name))) {
+                fprintf(stderr, "cannot set pool name\n");
+                rc = 1;
+                goto out;
+            }
+            free(pool_name);
+            free(pool_path);
+            pool_path = pool_name = NULL;
+        }
+    } while(xcinfo != NULL);
+
     printf("Done setting up Dom0\n");
 
 out:
+    free(pool_path);
+    free(pool_name);
     free(domid_string);
     free(domname_string);
     xs_close(xsh);

diff --git a/tools/libs/light/libxl_utils.c b/tools/libs/light/libxl_utils.c
index b91c2cafa223..81780da3ff40 100644
--- a/tools/libs/light/libxl_utils.c
+++ b/tools/libs/light/libxl_utils.c
@@ -151,8 +151,7 @@ char *libxl_cpupoolid_to_name(libxl_ctx *ctx, uint32_t poolid)
 
     snprintf(path, sizeof(path), "/local/pool/%d/name", poolid);
     s = xs_read(ctx->xsh, XBT_NULL, path, &len);
-    if (!s && (poolid == 0))
-        return strdup("Pool-0");
+
     return s;
 }
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com, Juergen Gross, Dario Faggioli, George Dunlap, Andrew Cooper, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH v7 2/7] xen/sched: create public function for cpupools creation
Date: Mon, 11 Apr 2022 16:20:56 +0100
Message-Id: <20220411152101.17539-3-luca.fancellu@arm.com>

Create a new public function to create cpupools; it takes as parameter
the scheduler id, or a negative value meaning that the default Xen
scheduler will be used.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Juergen Gross
---
Changes in v7:
- no changes
Changes in v6:
- add R-by
Changes in v5:
- no changes
Changes in v4:
- no changes
Changes in v3:
- Fixed comment (Andrew)
Changes in v2:
- cpupool_create_pool doesn't check anymore for pool id uniqueness
  before calling cpupool_create.
  Modified commit message accordingly.
---
 xen/common/sched/cpupool.c | 15 +++++++++++++++
 xen/include/xen/sched.h    | 16 ++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index a6da4970506a..89a891af7076 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -1219,6 +1219,21 @@ static void cpupool_hypfs_init(void)
 
 #endif /* CONFIG_HYPFS */
 
+struct cpupool *__init cpupool_create_pool(unsigned int pool_id, int sched_id)
+{
+    struct cpupool *pool;
+
+    if ( sched_id < 0 )
+        sched_id = scheduler_get_default()->sched_id;
+
+    pool = cpupool_create(pool_id, sched_id);
+
+    BUG_ON(IS_ERR(pool));
+    cpupool_put(pool);
+
+    return pool;
+}
+
 static int __init cf_check cpupool_init(void)
 {
     unsigned int cpu;

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index ed8539f6d297..0164db996b8b 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1153,6 +1153,22 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
 unsigned int cpupool_get_id(const struct domain *d);
 const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool);
+
+/*
+ * cpupool_create_pool - Creates a cpupool
+ * @pool_id: id of the pool to be created
+ * @sched_id: id of the scheduler to be used for the pool
+ *
+ * Creates a cpupool with pool_id id.
+ * The sched_id parameter identifies the scheduler to be used, if it is
+ * negative, the default scheduler of Xen will be used.
+ *
+ * returns:
+ *     pointer to the struct cpupool just created, or Xen will panic in case
+ *     of error
+ */
+struct cpupool *cpupool_create_pool(unsigned int pool_id, int sched_id);
+
 extern void cf_check dump_runq(unsigned char key);
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
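For context, a hedged sketch of how a boot-time caller might use the new
helper (the pool id 1 and the wrapper function are illustrative, not from
this series):

    /* Sketch: create pool 1 with the default Xen scheduler. A negative
     * sched_id selects the default; cpupool_create_pool() BUG()s on
     * failure, so the returned pointer is always valid here. */
    static void __init example_make_pool(void)
    {
        struct cpupool *pool = cpupool_create_pool(1, -1);

        /* pool can be used immediately, e.g. to assign cpus to it */
        (void)pool;
    }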
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com, George Dunlap, Dario Faggioli, Andrew Cooper, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH v7 3/7] xen/sched: retrieve scheduler id by name
Date: Mon, 11 Apr 2022 16:20:57 +0100
Message-Id: <20220411152101.17539-4-luca.fancellu@arm.com>

Add a static function to retrieve the scheduler pointer using the
scheduler name. Add a public function to retrieve the scheduler id by
the scheduler name, making use of the new static function.

Take the occasion to replace the open coded scheduler search in
scheduler_init with the new static function.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
Changes in v7:
- Add R-by (Dario)
Changes in v6:
- no changes
Changes in v5:
- no changes
Changes in v4:
- no changes
Changes in v3:
- add R-by
Changes in v2:
- replace open coded scheduler search in scheduler_init (Juergen)
---
 xen/common/sched/core.c | 40 ++++++++++++++++++++++++++--------------
 xen/include/xen/sched.h | 11 +++++++++++
 2 files changed, 37 insertions(+), 14 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 19ab67818106..48ee01420fb8 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2947,10 +2947,30 @@ void scheduler_enable(void)
     scheduler_active = true;
 }
 
+static inline
+const struct scheduler *__init sched_get_by_name(const char *sched_name)
+{
+    unsigned int i;
+
+    for ( i = 0; i < NUM_SCHEDULERS; i++ )
+        if ( schedulers[i] && !strcmp(schedulers[i]->opt_name, sched_name) )
+            return schedulers[i];
+
+    return NULL;
+}
+
+int __init sched_get_id_by_name(const char *sched_name)
+{
+    const struct scheduler *scheduler = sched_get_by_name(sched_name);
+
+    return scheduler ? scheduler->sched_id : -1;
+}
+
 /* Initialise the data structures. */
 void __init scheduler_init(void)
 {
     struct domain *idle_domain;
+    const struct scheduler *scheduler;
     int i;
 
     scheduler_enable();
@@ -2981,25 +3001,17 @@ void __init scheduler_init(void)
                    schedulers[i]->opt_name);
             schedulers[i] = NULL;
         }
-
-        if ( schedulers[i] && !ops.name &&
-             !strcmp(schedulers[i]->opt_name, opt_sched) )
-            ops = *schedulers[i];
     }
 
-    if ( !ops.name )
+    scheduler = sched_get_by_name(opt_sched);
+    if ( !scheduler )
     {
         printk("Could not find scheduler: %s\n", opt_sched);
-        for ( i = 0; i < NUM_SCHEDULERS; i++ )
-            if ( schedulers[i] &&
-                 !strcmp(schedulers[i]->opt_name, CONFIG_SCHED_DEFAULT) )
-            {
-                ops = *schedulers[i];
-                break;
-            }
-        BUG_ON(!ops.name);
-        printk("Using '%s' (%s)\n", ops.name, ops.opt_name);
+        scheduler = sched_get_by_name(CONFIG_SCHED_DEFAULT);
+        BUG_ON(!scheduler);
+        printk("Using '%s' (%s)\n", scheduler->name, scheduler->opt_name);
     }
+    ops = *scheduler;
 
     if ( cpu_schedule_up(0) )
         BUG();

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 0164db996b8b..4442a1940c25 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -764,6 +764,17 @@ void sched_destroy_domain(struct domain *d);
 long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *);
 long sched_adjust_global(struct xen_sysctl_scheduler_op *);
 int sched_id(void);
+
+/*
+ * sched_get_id_by_name - retrieves a scheduler id given a scheduler name
+ * @sched_name: scheduler name as a string
+ *
+ * returns:
+ *     positive value being the scheduler id, on success
+ *     negative value if the scheduler name is not found.
+ */
+int sched_get_id_by_name(const char *sched_name);
+
 void vcpu_wake(struct vcpu *v);
 long vcpu_yield(void);
 void vcpu_sleep_nosync(struct vcpu *v);
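A short usage sketch of the new lookup (hypothetical caller, not part of the
patch); the fallback mirrors what cpupool_create_pool does with a negative
id, and scheduler_get_default() is the existing helper used there:

    /* Sketch: resolve a scheduler name to its id, falling back to the
     * default scheduler when the name is unknown. */
    static int __init example_resolve_sched(const char *name)
    {
        int sched_id = sched_get_id_by_name(name);

        if ( sched_id < 0 )  /* name not found */
            sched_id = scheduler_get_default()->sched_id;

        return sched_id;
    }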
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com, Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu, Volodymyr Babchuk, Dario Faggioli, Juergen Gross
Subject: [PATCH v7 4/7] xen/cpupool: Create different cpupools at boot time
Date: Mon, 11 Apr 2022 16:20:58 +0100
Message-Id: <20220411152101.17539-5-luca.fancellu@arm.com>

Introduce a way to create different cpupools at boot time. This is
particularly useful on Arm big.LITTLE systems, where there might be the
need to have different cpupools for each type of core; systems using
NUMA can likewise have a different cpupool for each node.

On Arm the feature relies on a device tree specification of the
cpupools, used to build the pools and assign cpus to them. ACPI is not
supported for this feature.

With this patch cpupool0 can now have fewer cpus than the number of
online ones, so update the default case for opt_dom0_max_vcpus.

Documentation is added to explain the feature.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Stefano Stabellini
---
Changes in v7:
- rename xen/common/boot_cpupools.c to xen/common/sched/boot-cpupool.c (Jan)
- reverted xen/common/Makefile, add entry in xen/common/sched/Makefile
- changed line in MAINTAINERS under CPU POOLS section (Dario)
- Fix documentation, update opt_dom0_max_vcpus to the number of cpus in
  cpupool0 (Julien)
Changes in v6:
- Changed docs, return if booted with ACPI in btcpupools_dtb_parse, panic
  if /chosen does not exist. Changed commit message (Julien)
- Add Juergen R-by for the xen/common/sched part that didn't change
Changes in v5:
- Fixed wrong variable name, swapped schedulers, add scheduler info in
  the printk (Stefano)
- introduce assert in cpupool_init and btcpupools_get_cpupool_id to
  harden the code
Changes in v4:
- modify Makefile to put in *.init.o, fixed stubs and macro (Jan)
- fixed docs, fix brackets (Stefano)
- keep cpu0 in Pool-0 (Julien)
- moved printk from btcpupools_allocate_pools to btcpupools_get_cpupool_id
- Add to docs constraint about cpu0 and Pool-0
Changes in v3:
- Add newline to cpupools.txt and removed "default n" from Kconfig (Jan)
- Fixed comment, moved defines, used global cpu_online_map, use
  HAS_DEVICE_TREE instead of ARM and place arch specific code in
  header (Juergen)
- Fix brackets, x86 code only panic, get rid of scheduler dt node, don't
  save pool pointer and look for it from the pool list (Stefano)
- Changed data structures to allow modification to the code.
Changes in v2:
- Move feature to common code (Juergen)
- Try to decouple dtb parse and cpupool creation to allow more ways to
  specify cpupools (for example command line)
- Created standalone dt node for the scheduler so it can be used in
  future work to set scheduler specific parameters
- Use only auto generated ids for cpupools
---
 MAINTAINERS                            |   2 +-
 docs/misc/arm/device-tree/cpupools.txt | 140 +++++++++++++++++
 xen/arch/arm/domain_build.c            |   5 +-
 xen/arch/arm/include/asm/smp.h         |   3 +
 xen/common/Kconfig                     |   7 +
 xen/common/sched/Makefile              |   1 +
 xen/common/sched/boot-cpupool.c        | 207 +++++++++++++++++++++++++
 xen/common/sched/cpupool.c             |  12 +-
 xen/include/xen/sched.h                |  14 ++
 9 files changed, 388 insertions(+), 3 deletions(-)
 create mode 100644 docs/misc/arm/device-tree/cpupools.txt
 create mode 100644 xen/common/sched/boot-cpupool.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 6a097b43eb9a..7963c9232c07 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -274,7 +274,7 @@ CPU POOLS
 M:	Juergen Gross
 M:	Dario Faggioli
 S:	Supported
-F:	xen/common/sched/cpupool.c
+F:	xen/common/sched/*cpupool.c
 
 DEVICE TREE
 M:	Stefano Stabellini

diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
new file mode 100644
index 000000000000..1f640d680317
--- /dev/null
+++ b/docs/misc/arm/device-tree/cpupools.txt
@@ -0,0 +1,140 @@
+Boot time cpupools
+==================
+
+When BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is possible to
+create cpupools during the boot phase by specifying them in the device tree.
+ACPI is not supported for this feature.
+
+Cpupool specification nodes shall be direct children of the /chosen node.
+Each cpupool node contains the following properties:
+
+- compatible (mandatory)
+
+    Must always include the compatibility string: "xen,cpupool".
+
+- cpupool-cpus (mandatory)
+
+    Must be a list of device tree phandles to nodes describing cpus (e.g.
+    having device_type = "cpu"); it can't be empty.
+
+- cpupool-sched (optional)
+
+    Must be a string holding the name of a Xen scheduler. Check the sched=<...>
+    boot argument for allowed values [1]. When this property is omitted, the
+    Xen default scheduler will be used.
+
+
+Constraints
+===========
+
+If no cpupools are specified, all cpus will be assigned to one implicitly
+created cpupool (Pool-0).
+
+If cpupool nodes are specified, but not every cpu brought up by Xen is
+assigned, all the unassigned cpus will be assigned to an additional cpupool.
+
+If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will
+stop.
+
+The boot cpu must be assigned to Pool-0, so the cpupool containing that core
+will become Pool-0 automatically.
+
+
+Examples
+========
+
+For a system having two types of core, the following device tree specification
+will instruct Xen to create two cpupools:
+
+- The cpupool described by node cpupool_a will have 4 cpus assigned.
+- The cpupool described by node cpupool_b will have 2 cpus assigned.
+
+The following example can work only if hmp-unsafe=1 is passed to the Xen boot
+arguments, otherwise not all cores will be brought up by Xen and the cpupool
+creation process will stop Xen.
+
+
+a72_1: cpu@0 {
+        compatible = "arm,cortex-a72";
+        reg = <0x0 0x0>;
+        device_type = "cpu";
+        [...]
+};
+
+a72_2: cpu@1 {
+        compatible = "arm,cortex-a72";
+        reg = <0x0 0x1>;
+        device_type = "cpu";
+        [...]
+};
+
+a53_1: cpu@100 {
+        compatible = "arm,cortex-a53";
+        reg = <0x0 0x100>;
+        device_type = "cpu";
+        [...]
+};
+
+a53_2: cpu@101 {
+        compatible = "arm,cortex-a53";
+        reg = <0x0 0x101>;
+        device_type = "cpu";
+        [...]
+};
+
+a53_3: cpu@102 {
+        compatible = "arm,cortex-a53";
+        reg = <0x0 0x102>;
+        device_type = "cpu";
+        [...]
+};
+
+a53_4: cpu@103 {
+        compatible = "arm,cortex-a53";
+        reg = <0x0 0x103>;
+        device_type = "cpu";
+        [...]
+};
+
+chosen {
+
+    cpupool_a {
+        compatible = "xen,cpupool";
+        cpupool-cpus = <&a53_1 &a53_2 &a53_3 &a53_4>;
+    };
+    cpupool_b {
+        compatible = "xen,cpupool";
+        cpupool-cpus = <&a72_1 &a72_2>;
+        cpupool-sched = "credit2";
+    };
+
+    [...]
+
+};
+
+
+A system having the cpupool specification below will instruct Xen to create
+three cpupools:
+
+- The cpupool described by node cpupool_a will have 2 cpus assigned.
+- The cpupool described by node cpupool_b will have 2 cpus assigned.
+- An additional cpupool will be created, having 2 cpus assigned (created by
+  Xen with all the unassigned cpus, a53_3 and a53_4).
+
+chosen {
+
+    cpupool_a {
+        compatible = "xen,cpupool";
+        cpupool-cpus = <&a53_1 &a53_2>;
+    };
+    cpupool_b {
+        compatible = "xen,cpupool";
+        cpupool-cpus = <&a72_1 &a72_2>;
+        cpupool-sched = "null";
+    };
+
+    [...]
+
+};
+
+[1] docs/misc/xen-command-line.pandoc

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 8be01678de05..9aa6ae27c40f 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -73,7 +73,10 @@ custom_param("dom0_mem", parse_dom0_mem);
 unsigned int __init dom0_max_vcpus(void)
 {
     if ( opt_dom0_max_vcpus == 0 )
-        opt_dom0_max_vcpus = num_online_cpus();
+    {
+        ASSERT(cpupool0);
+        opt_dom0_max_vcpus = cpumask_weight(cpupool_valid_cpus(cpupool0));
+    }
 
     if ( opt_dom0_max_vcpus > MAX_VIRT_CPUS )
         opt_dom0_max_vcpus = MAX_VIRT_CPUS;

diff --git a/xen/arch/arm/include/asm/smp.h b/xen/arch/arm/include/asm/smp.h
index af5a2fe65266..83c0cd69767b 100644
--- a/xen/arch/arm/include/asm/smp.h
+++ b/xen/arch/arm/include/asm/smp.h
@@ -34,6 +34,9 @@ extern void init_secondary(void);
 extern void smp_init_cpus(void);
 extern void smp_clear_cpu_maps (void);
 extern int smp_get_max_cpus (void);
+
+#define cpu_physical_id(cpu) cpu_logical_map(cpu)
+
 #endif
 
 /*

diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index d921c74d615e..70aac5220e75 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -22,6 +22,13 @@ config GRANT_TABLE
 
 	  If unsure, say Y.
 
+config BOOT_TIME_CPUPOOLS
+	bool "Create cpupools at boot time"
+	depends on HAS_DEVICE_TREE
+	help
+	  Creates cpupools during boot time and assigns cpus to them. Cpupools
+	  options can be specified in the device tree.
+
 config ALTERNATIVE_CALL
 	bool

diff --git a/xen/common/sched/Makefile b/xen/common/sched/Makefile
index 3537f2a68d69..697bd54bfe93 100644
--- a/xen/common/sched/Makefile
+++ b/xen/common/sched/Makefile
@@ -1,3 +1,4 @@
+obj-$(CONFIG_BOOT_TIME_CPUPOOLS) += boot-cpupool.init.o
 obj-y += cpupool.o
 obj-$(CONFIG_SCHED_ARINC653) += arinc653.o
 obj-$(CONFIG_SCHED_CREDIT) += credit.o

diff --git a/xen/common/sched/boot-cpupool.c b/xen/common/sched/boot-cpupool.c
new file mode 100644
index 000000000000..9429a5025fc4
--- /dev/null
+++ b/xen/common/sched/boot-cpupool.c
@@ -0,0 +1,207 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * xen/common/sched/boot-cpupool.c
+ *
+ * Code to create cpupools at boot time.
+ *
+ * Copyright (C) 2022 Arm Ltd.
+ */
+
+#include <xen/acpi.h>
+#include <xen/sched.h>
+
+/*
+ * pool_cpu_map:   Index is logical cpu number, content is cpupool id, (-1)
+ *                 for unassigned.
+ * pool_sched_map: Index is cpupool id, content is scheduler id, (-1) for
+ *                 unassigned.
+ */
+static int __initdata pool_cpu_map[NR_CPUS] = { [0 ... NR_CPUS-1] = -1 };
+static int __initdata pool_sched_map[NR_CPUS] = { [0 ... NR_CPUS-1] = -1 };
+static unsigned int __initdata next_pool_id;
+
+#define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
+#define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
+
+static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
+{
+    unsigned int i;
+
+    for ( i = 0; i < nr_cpu_ids; i++ )
+    {
+        if ( cpu_physical_id(i) == hwid )
+            return i;
+    }
+
+    return -1;
+}
+
+static int __init
+get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node)
+{
+    int cpu_num;
+    const __be32 *prop;
+    unsigned int cpu_reg;
+
+    prop = dt_get_property(cpu_node, "reg", NULL);
+    if ( !prop )
+        return BTCPUPOOLS_DT_NODE_NO_REG;
+
+    cpu_reg = dt_read_number(prop, dt_n_addr_cells(cpu_node));
+
+    cpu_num = get_logical_cpu_from_hw_id(cpu_reg);
+    if ( cpu_num < 0 )
+        return BTCPUPOOLS_DT_NODE_NO_LOG_CPU;
+
+    return cpu_num;
+}
+
+static int __init check_and_get_sched_id(const char* scheduler_name)
+{
+    int sched_id = sched_get_id_by_name(scheduler_name);
+
+    if ( sched_id < 0 )
+        panic("Scheduler %s does not exist!\n", scheduler_name);
+
+    return sched_id;
+}
+
+void __init btcpupools_dtb_parse(void)
+{
+    const struct dt_device_node *chosen, *node;
+
+    if ( !acpi_disabled )
+        return;
+
+    chosen = dt_find_node_by_path("/chosen");
+    if ( !chosen )
+        panic("/chosen missing. Boot time cpupools can't be parsed from DT.\n");
+
+    dt_for_each_child_node(chosen, node)
+    {
+        const struct dt_device_node *phandle_node;
+        int sched_id = -1;
+        const char* scheduler_name;
+        unsigned int i = 0;
+
+        if ( !dt_device_is_compatible(node, "xen,cpupool") )
+            continue;
+
+        if ( !dt_property_read_string(node, "cpupool-sched", &scheduler_name) )
+            sched_id = check_and_get_sched_id(scheduler_name);
+
+        phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
+        if ( !phandle_node )
+            panic("Missing or empty cpupool-cpus property!\n");
+
+        while ( phandle_node )
+        {
+            int cpu_num;
+
+            cpu_num = get_logical_cpu_from_cpu_node(phandle_node);
+
+            if ( cpu_num < 0 )
+                panic("Error retrieving logical cpu from node %s (%d)\n",
+                      dt_node_name(node), cpu_num);
+
+            if ( pool_cpu_map[cpu_num] != -1 )
+                panic("Logical cpu %d already added to a cpupool!\n", cpu_num);
+
+            pool_cpu_map[cpu_num] = next_pool_id;
+
+            phandle_node = dt_parse_phandle(node, "cpupool-cpus", i++);
+        }
+
+        /* Save scheduler choice for this cpupool id */
+        pool_sched_map[next_pool_id] = sched_id;
+
+        /* Let Xen generate pool ids */
+        next_pool_id++;
+    }
+}
+
+void __init btcpupools_allocate_pools(void)
+{
+    unsigned int i;
+    bool add_extra_cpupool = false;
+    int swap_id = -1;
+
+    /*
+     * If there are no cpupools, the value of next_pool_id is zero, so the
+     * code below will assign every cpu to cpupool0 as the default behavior.
+     * When there are cpupools, the code below is assigning all the not
+     * assigned cpus to a new pool (next_pool_id value is the last id + 1).
+     * In the same loop we check if there is any assigned cpu that is not
+     * online.
+     */
+    for ( i = 0; i < nr_cpu_ids; i++ )
+    {
+        if ( cpumask_test_cpu(i, &cpu_online_map) )
+        {
+            /* Unassigned cpu gets next_pool_id pool id value */
+            if ( pool_cpu_map[i] < 0 )
+            {
+                pool_cpu_map[i] = next_pool_id;
+                add_extra_cpupool = true;
+            }
+
+            /*
+             * Cpu0 must be in cpupool0, otherwise some operations like moving
+             * cpus between cpupools, cpu hotplug, destroying cpupools,
+             * shutdown of the host, might not work in a sane way.
+             */
+            if ( !i && (pool_cpu_map[0] != 0) )
+                swap_id = pool_cpu_map[0];
+
+            if ( swap_id != -1 )
+            {
+                if ( pool_cpu_map[i] == swap_id )
+                    pool_cpu_map[i] = 0;
+                else if ( pool_cpu_map[i] == 0 )
+                    pool_cpu_map[i] = swap_id;
+            }
+        }
+        else
+        {
+            if ( pool_cpu_map[i] >= 0 )
+                panic("Pool-%d contains cpu%u that is not online!\n",
+                      pool_cpu_map[i], i);
+        }
+    }
+
+    /* A swap happened, swap schedulers between cpupool id 0 and the other */
+    if ( swap_id != -1 )
+    {
+        int swap_sched = pool_sched_map[swap_id];
+
+        pool_sched_map[swap_id] = pool_sched_map[0];
+        pool_sched_map[0] = swap_sched;
+    }
+
+    if ( add_extra_cpupool )
+        next_pool_id++;
+
+    /* Create cpupools with selected schedulers */
+    for ( i = 0; i < next_pool_id; i++ )
+        cpupool_create_pool(i, pool_sched_map[i]);
+}
+
+unsigned int __init btcpupools_get_cpupool_id(unsigned int cpu)
+{
+    ASSERT((cpu < NR_CPUS) && (pool_cpu_map[cpu] >= 0));
+
+    printk(XENLOG_INFO "Logical CPU %u in Pool-%d (Scheduler id: %d).\n",
+           cpu, pool_cpu_map[cpu], pool_sched_map[pool_cpu_map[cpu]]);
+
+    return pool_cpu_map[cpu];
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 89a891af7076..86a175f99cd5 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -1247,12 +1247,22 @@ static int __init cf_check cpupool_init(void)
     cpupool_put(cpupool0);
 
     register_cpu_notifier(&cpu_nfb);
 
+    btcpupools_dtb_parse();
+
+    btcpupools_allocate_pools();
+
     spin_lock(&cpupool_lock);
 
     cpumask_copy(&cpupool_free_cpus, &cpu_online_map);
 
     for_each_cpu ( cpu, &cpupool_free_cpus )
-        cpupool_assign_cpu_locked(cpupool0, cpu);
+    {
+        unsigned int pool_id = btcpupools_get_cpupool_id(cpu);
+        struct cpupool *pool = cpupool_find_by_id(pool_id);
+
+        ASSERT(pool);
+        cpupool_assign_cpu_locked(pool, cpu);
+    }
 
     spin_unlock(&cpupool_lock);

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 4442a1940c25..74b3aae10b94 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1184,6 +1184,20 @@ extern void cf_check dump_runq(unsigned char key);
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
 
+#ifdef CONFIG_BOOT_TIME_CPUPOOLS
+void btcpupools_allocate_pools(void);
+unsigned int btcpupools_get_cpupool_id(unsigned int cpu);
+void btcpupools_dtb_parse(void);
+
+#else /* !CONFIG_BOOT_TIME_CPUPOOLS */
+static inline void btcpupools_allocate_pools(void) {}
+static inline void btcpupools_dtb_parse(void) {}
+static inline unsigned int btcpupools_get_cpupool_id(unsigned int cpu)
+{
+    return 0;
+}
+#endif
+
 #endif /* __SCHED_H__ */
 
 /*
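To make the swap logic above concrete, here is a hand-worked trace (my own
reading of the code, not from the series) of the second example in
cpupools.txt, where the boot cpu (a72_1, cpu@0) is listed in cpupool_b:

    /*
     * After btcpupools_dtb_parse():
     *   a53_1, a53_2 -> pool 0 (cpupool_a, default scheduler)
     *   a72_1, a72_2 -> pool 1 (cpupool_b, "null" scheduler)
     *   a53_3, a53_4 -> unassigned (-1)
     *
     * btcpupools_allocate_pools() then notices that logical cpu0 (a72_1)
     * is not mapped to pool 0, so swap_id = 1: every cpu mapped to pool 1
     * moves to pool 0 and vice versa, and the two schedulers are swapped
     * as well. The unassigned a53_3/a53_4 are collected into a new pool.
     * Expected result:
     *   Pool-0: a72_1, a72_2 ("null" scheduler)
     *   Pool-1: a53_1, a53_2 (default scheduler)
     *   Pool-2: a53_3, a53_4 (default scheduler)
     */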
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com, Juergen Gross, Dario Faggioli, George Dunlap
Subject: [PATCH v7 5/7] xen/cpupool: Don't allow removing cpu0 from cpupool0
Date: Mon, 11 Apr 2022 16:20:59 +0100
Message-Id: <20220411152101.17539-6-luca.fancellu@arm.com>

Cpu0 must remain in cpupool0, otherwise some operations like moving
cpus between cpupools, cpu hotplug, destroying cpupools, shutdown of
the host, might not work in a sane way.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Juergen Gross
---
Changes in v7:
- new patch
---
 xen/common/sched/cpupool.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 86a175f99cd5..0a93bcc631bf 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -572,6 +572,7 @@ static long cf_check cpupool_unassign_cpu_helper(void *info)
  * possible failures:
  * - last cpu and still active domains in cpupool
  * - cpu just being unplugged
+ * - attempt to remove boot cpu from cpupool0
  */
 static int cpupool_unassign_cpu(struct cpupool *c, unsigned int cpu)
 {
@@ -582,7 +583,12 @@ static int cpupool_unassign_cpu(struct cpupool *c, unsigned int cpu)
     debugtrace_printk("cpupool_unassign_cpu(pool=%u,cpu=%d)\n",
                       c->cpupool_id, cpu);
 
-    if ( !cpu_online(cpu) )
+    /*
+     * Cpu0 must remain in cpupool0, otherwise some operations like moving
+     * cpus between cpupools, cpu hotplug, destroying cpupools, shutdown
+     * of the host, might not work in a sane way.
+     */
+    if ( (!c->cpupool_id && !cpu) || !cpu_online(cpu) )
         return -EINVAL;
 
     master_cpu = sched_get_resource_cpu(cpu);

From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich, Wei Liu, Juergen Gross, Dario Faggioli
Subject: [PATCH v7 6/7] arm/dom0less: assign dom0less guests to cpupools
Date: Mon, 11 Apr 2022 16:21:00 +0100
Message-Id: <20220411152101.17539-7-luca.fancellu@arm.com>

Introduce the domain-cpupool property of a xen,domain device tree node,
which specifies the device tree handle of a xen,cpupool node that
identifies a cpupool created at boot time; the guest will be assigned
to that cpupool on creation.

Add a member to the xen_domctl_createdomain public interface, so
XEN_DOMCTL_INTERFACE_VERSION is bumped.

Add a public function to retrieve a pool id from the device tree
cpupool node.

Update the documentation about the property.
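As an illustration (my own sketch, not part of the patch), a dom0less guest
node picking one of the boot time cpupools from the earlier cpupools.txt
examples could look like the following; the domU1 node name and its other
properties are elided:

    chosen {
        cpupool_a: cpupool_a {
            compatible = "xen,cpupool";
            cpupool-cpus = <&a53_1 &a53_2>;
        };

        domU1 {
            compatible = "xen,domain";
            domain-cpupool = <&cpupool_a>;
            [...]
        };
    };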
Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Stefano Stabellini
---
Changes in v7:
- Add comment for cpupool_id struct member. (Jan)
Changes in v6:
- no changes
Changes in v5:
- no changes
Changes in v4:
- no changes
- add R-by
Changes in v3:
- Use explicitly sized integer for struct xen_domctl_createdomain
  cpupool_id member. (Stefano)
- Changed code due to previous commit code changes
Changes in v2:
- Moved cpupool_id from arch specific to common part (Juergen)
- Implemented functions to retrieve the cpupool id from the cpupool
  dtb node.
---
 docs/misc/arm/device-tree/booting.txt |  5 +++++
 xen/arch/arm/domain_build.c           | 14 +++++++++++++-
 xen/common/domain.c                   |  2 +-
 xen/common/sched/boot-cpupool.c       | 24 ++++++++++++++++++++++++
 xen/include/public/domctl.h           |  5 ++++-
 xen/include/xen/sched.h               |  9 +++++++++
 6 files changed, 56 insertions(+), 3 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index a94125394e35..7b4a29a2c293 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -188,6 +188,11 @@ with the following properties:
     An empty property to request the memory of the domain to be
     direct-map (guest physical address == physical address).
 
+- domain-cpupool
+
+    Optional. Handle to a xen,cpupool device tree node that identifies the
+    cpupool where the guest will be started at boot.
+
 Under the "xen,domain" compatible node, one or more sub-nodes are present
 for the DomU kernel and ramdisk.

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 9aa6ae27c40f..9787104c3d31 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3175,7 +3175,8 @@ static int __init construct_domU(struct domain *d,
 void __init create_domUs(void)
 {
     struct dt_device_node *node;
-    const struct dt_device_node *chosen = dt_find_node_by_path("/chosen");
+    const struct dt_device_node *cpupool_node,
+                                *chosen = dt_find_node_by_path("/chosen");
 
     BUG_ON(chosen == NULL);
     dt_for_each_child_node(chosen, node)
@@ -3244,6 +3245,17 @@ void __init create_domUs(void)
                                          vpl011_virq - 32 + 1);
         }
 
+        /* Get the optional property domain-cpupool */
+        cpupool_node = dt_parse_phandle(node, "domain-cpupool", 0);
+        if ( cpupool_node )
+        {
+            int pool_id = btcpupools_get_domain_pool_id(cpupool_node);
+            if ( pool_id < 0 )
+                panic("Error getting cpupool id from domain-cpupool (%d)\n",
+                      pool_id);
+            d_cfg.cpupool_id = pool_id;
+        }
+
         /*
          * The variable max_init_domid is initialized with zero, so here it's
          * very important to use the pre-increment operator to call

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 351029f8b239..0827400f4f49 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -698,7 +698,7 @@ struct domain *domain_create(domid_t domid,
     if ( !d->pbuf )
         goto fail;
 
-    if ( (err = sched_init_domain(d, 0)) != 0 )
+    if ( (err = sched_init_domain(d, config->cpupool_id)) != 0 )
         goto fail;
 
     if ( (err = late_hwdom_init(d)) != 0 )

diff --git a/xen/common/sched/boot-cpupool.c b/xen/common/sched/boot-cpupool.c
index 9429a5025fc4..240bae4cebb8 100644
--- a/xen/common/sched/boot-cpupool.c
+++ b/xen/common/sched/boot-cpupool.c
@@ -22,6 +22,8 @@ static unsigned int __initdata next_pool_id;
 
 #define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
 #define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
+#define BTCPUPOOLS_DT_WRONG_NODE      (-3)
+#define BTCPUPOOLS_DT_CORRUPTED_NODE  (-4)
 
 static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
 {
@@ -56,6 +58,28 @@ get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node)
     return cpu_num;
 }
 
+int __init btcpupools_get_domain_pool_id(const struct dt_device_node *node)
+{
+    const struct dt_device_node *phandle_node;
+    int cpu_num;
+
+    if ( !dt_device_is_compatible(node, "xen,cpupool") )
+        return BTCPUPOOLS_DT_WRONG_NODE;
+    /*
+     * Get first cpu listed in the cpupool, from its reg it's possible to
+     * retrieve the cpupool id.
+     */
+    phandle_node = dt_parse_phandle(node, "cpupool-cpus", 0);
+    if ( !phandle_node )
+        return BTCPUPOOLS_DT_CORRUPTED_NODE;
+
+    cpu_num = get_logical_cpu_from_cpu_node(phandle_node);
+    if ( cpu_num < 0 )
+        return cpu_num;
+
+    return pool_cpu_map[cpu_num];
+}
+
 static int __init check_and_get_sched_id(const char* scheduler_name)
 {
     int sched_id = sched_get_id_by_name(scheduler_name);

diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index b85e6170b0aa..84e75829b980 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -38,7 +38,7 @@
 #include "hvm/save.h"
 #include "memory.h"
 
-#define XEN_DOMCTL_INTERFACE_VERSION 0x00000014
+#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015
 
 /*
  * NB. xen_domctl.domain is an IN/OUT parameter for this operation.
@@ -106,6 +106,9 @@ struct xen_domctl_createdomain {
     /* Per-vCPU buffer size in bytes.  0 to disable. */
     uint32_t vmtrace_size;
 
+    /* CPU pool to use; specify 0 or a specific existing pool */
+    uint32_t cpupool_id;
+
     struct xen_arch_domainconfig arch;
 };

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 74b3aae10b94..32d2a6294b6d 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1188,6 +1188,7 @@ void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
 void btcpupools_allocate_pools(void);
 unsigned int btcpupools_get_cpupool_id(unsigned int cpu);
 void btcpupools_dtb_parse(void);
+int btcpupools_get_domain_pool_id(const struct dt_device_node *node);
 
 #else /* !CONFIG_BOOT_TIME_CPUPOOLS */
 static inline void btcpupools_allocate_pools(void) {}
@@ -1196,6 +1197,14 @@ static inline unsigned int btcpupools_get_cpupool_id(unsigned int cpu)
 {
     return 0;
 }
+#ifdef CONFIG_HAS_DEVICE_TREE
+static inline int
+btcpupools_get_domain_pool_id(const struct dt_device_node *node)
+{
+    return 0;
+}
+#endif
+
 #endif
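For illustration only (a hypothetical toolstack-side fragment, not from the
series), a caller of the extended interface would select a pool through the
new member like this; every other field is elided:

    /* Sketch: request creation of a domain in cpupool 1 through the
     * extended createdomain interface. */
    struct xen_domctl_createdomain config = {
        .cpupool_id = 1,    /* 0 keeps the previous behavior (Pool-0) */
        /* ... remaining configuration ... */
    };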
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com, Juergen Gross, Dario Faggioli, George Dunlap, Andrew Cooper, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH v7 7/7] xen/cpupool: Allow cpupool0 to use different scheduler
Date: Mon, 11 Apr 2022 16:21:01 +0100
Message-Id: <20220411152101.17539-8-luca.fancellu@arm.com>

Currently cpupool0 can use only the default scheduler, and
cpupool_create has a hardcoded behavior when creating pool 0 that
doesn't allocate new memory for the scheduler but uses the default
scheduler structure in memory.

With this commit it is possible to allocate a different scheduler for
cpupool0 when using boot time cpupools. To achieve this, the hardcoded
behavior in cpupool_create is removed and the cpupool0 creation is
moved. When compiling without boot time cpupools enabled, the current
behavior is maintained (except that cpupool0 scheduler memory will be
allocated).
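For illustration (a sketch in the cpupools.txt format, not part of the
patch), with this change a configuration like the following would give
Pool-0 a non-default scheduler, since per the documented constraint the pool
containing the boot cpu becomes Pool-0; the node and cpu labels reuse the
earlier examples:

    chosen {
        cpupool_0 {
            compatible = "xen,cpupool";
            cpupool-cpus = <&a72_1 &a72_2>;   /* a72_1 is the boot cpu */
            cpupool-sched = "credit2";
        };
    };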
Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Juergen Gross
---
Changes in v7:
- no changes
Changes in v6:
- Add R-by
Changes in v5:
- no changes
Changes in v4:
- no changes
Changes in v3:
- fix typo in commit message (Juergen)
- rebase changes
Changes in v2:
- new patch
---
 xen/common/sched/boot-cpupool.c | 5 ++++-
 xen/common/sched/cpupool.c      | 8 +-------
 xen/include/xen/sched.h         | 5 ++++-
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/xen/common/sched/boot-cpupool.c b/xen/common/sched/boot-cpupool.c
index 240bae4cebb8..5955e6f9a98b 100644
--- a/xen/common/sched/boot-cpupool.c
+++ b/xen/common/sched/boot-cpupool.c
@@ -205,8 +205,11 @@ void __init btcpupools_allocate_pools(void)
     if ( add_extra_cpupool )
         next_pool_id++;
 
+    /* Keep track of cpupool id 0 with the global cpupool0 */
+    cpupool0 = cpupool_create_pool(0, pool_sched_map[0]);
+
     /* Create cpupools with selected schedulers */
-    for ( i = 0; i < next_pool_id; i++ )
+    for ( i = 1; i < next_pool_id; i++ )
         cpupool_create_pool(i, pool_sched_map[i]);
 }

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 0a93bcc631bf..f6e3d97e5288 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -312,10 +312,7 @@ static struct cpupool *cpupool_create(unsigned int poolid,
             c->cpupool_id = q->cpupool_id + 1;
     }
 
-    if ( poolid == 0 )
-        c->sched = scheduler_get_default();
-    else
-        c->sched = scheduler_alloc(sched_id);
+    c->sched = scheduler_alloc(sched_id);
     if ( IS_ERR(c->sched) )
     {
         ret = PTR_ERR(c->sched);
@@ -1248,9 +1245,6 @@ static int __init cf_check cpupool_init(void)
 
     cpupool_hypfs_init();
 
-    cpupool0 = cpupool_create(0, 0);
-    BUG_ON(IS_ERR(cpupool0));
-    cpupool_put(cpupool0);
     register_cpu_notifier(&cpu_nfb);
 
     btcpupools_dtb_parse();

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 32d2a6294b6d..6040fa3b3830 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1191,7 +1191,10 @@ void btcpupools_dtb_parse(void);
 int btcpupools_get_domain_pool_id(const struct dt_device_node *node);
 
 #else /* !CONFIG_BOOT_TIME_CPUPOOLS */
-static inline void btcpupools_allocate_pools(void) {}
+static inline void btcpupools_allocate_pools(void)
+{
+    cpupool0 = cpupool_create_pool(0, -1);
+}
 static inline void btcpupools_dtb_parse(void) {}
 static inline unsigned int btcpupools_get_cpupool_id(unsigned int cpu)
 {
     return 0;
 }