From patchwork Fri Mar 29 15:09:29 2019
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Fri, 29 Mar 2019 16:09:29 +0100
Message-Id: <20190329150934.17694-45-jgross@suse.com>
In-Reply-To: <20190329150934.17694-1-jgross@suse.com>
References: <20190329150934.17694-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH RFC 44/49] xen: round up max vcpus to scheduling granularity
Cc: Juergen Gross, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
 George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall,
 Jan Beulich, Roger Pau Monné

Make sure the number of vcpus is always a multiple of the scheduling
granularity. Note that we don't support a scheduling granularity above
one on ARM.

Signed-off-by: Juergen Gross
---
 xen/arch/x86/dom0_build.c | 1 +
 xen/common/domain.c       | 1 +
 xen/common/domctl.c       | 1 +
 xen/include/xen/sched.h   | 5 +++++
 4 files changed, 8 insertions(+)

diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index 77b5646424..76a81dd4a9 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -258,6 +258,7 @@ unsigned int __init dom0_max_vcpus(void)
         max_vcpus = opt_dom0_max_vcpus_min;
     if ( opt_dom0_max_vcpus_max < max_vcpus )
         max_vcpus = opt_dom0_max_vcpus_max;
+    max_vcpus = sched_max_vcpus(max_vcpus);
     limit = dom0_pvh ? HVM_MAX_VCPUS : MAX_VIRT_CPUS;
     if ( max_vcpus > limit )
         max_vcpus = limit;

diff --git a/xen/common/domain.c b/xen/common/domain.c
index b448d20d40..d338a2204c 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -290,6 +290,7 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
         return -EINVAL;
     }

+    config->max_vcpus = sched_max_vcpus(config->max_vcpus);
     if ( config->max_vcpus < 1 )
     {
         dprintk(XENLOG_INFO, "No vCPUS\n");

diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index ccde1ba706..80837a2a5e 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -542,6 +542,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     {
         unsigned int i, max = op->u.max_vcpus.max;

+        max = sched_max_vcpus(max);
         ret = -EINVAL;
         if ( (d == current->domain) || /* no domain_pause() */
              (max != d->max_vcpus) )   /* max_vcpus set up in createdomain */

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 52a1abfca9..314a453a60 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -490,6 +490,11 @@ extern struct vcpu *idle_vcpu[NR_CPUS];

 extern unsigned int sched_granularity;

+static inline unsigned int sched_max_vcpus(unsigned int n_vcpus)
+{
+    return DIV_ROUND_UP(n_vcpus, sched_granularity) * sched_granularity;
+}
+
 static inline bool is_system_domain(const struct domain *d)
 {
     return d->domain_id >= DOMID_FIRST_RESERVED;