From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Date: Mon, 30 Sep 2019 07:21:22 +0200
Message-Id: <20190930052135.11257-7-jgross@suse.com>
In-Reply-To: <20190930052135.11257-1-jgross@suse.com>
References: <20190930052135.11257-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v5 06/19] xen/sched: add a percpu resource index

Add a percpu variable holding the index of the cpu within its current
sched_resource structure. This index is used to get the correct vcpu of a
sched_unit on a specific cpu. For now this index will be zero for all cpus,
but with core scheduling it will be possible to have higher values, too.

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
RFC V2: new patch (carved out from RFC V1 patch 49)
V4:
- make function parameter const (Jan Beulich)
---
 xen/common/schedule.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 37002b4c0e..c8e2999407 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -77,6 +77,7 @@ static void poll_timer_fn(void *data);
 /* This is global for now so that private implementations can reach it */
 DEFINE_PER_CPU(struct scheduler *, scheduler);
 DEFINE_PER_CPU_READ_MOSTLY(struct sched_resource *, sched_res);
+static DEFINE_PER_CPU_READ_MOSTLY(unsigned int, sched_res_idx);
 
 /* Scratch space for cpumasks. */
 DEFINE_PER_CPU(cpumask_t, cpumask_scratch);
 
@@ -144,6 +145,12 @@ static struct scheduler sched_idle_ops = {
     .switch_sched = sched_idle_switch_sched,
 };
 
+static inline struct vcpu *sched_unit2vcpu_cpu(const struct sched_unit *unit,
+                                               unsigned int cpu)
+{
+    return unit->domain->vcpu[unit->unit_id + per_cpu(sched_res_idx, cpu)];
+}
+
 static inline struct scheduler *dom_scheduler(const struct domain *d)
 {
     if ( likely(d->cpupool != NULL) )
@@ -2030,7 +2037,7 @@ static void sched_slave(void)
 
     pcpu_schedule_unlock_irq(lock, cpu);
 
-    sched_context_switch(vprev, next->vcpu_list, now);
+    sched_context_switch(vprev, sched_unit2vcpu_cpu(next, cpu), now);
 }
 
 /*
@@ -2091,7 +2098,7 @@ static void schedule(void)
 
     pcpu_schedule_unlock_irq(lock, cpu);
 
-    vnext = next->vcpu_list;
+    vnext = sched_unit2vcpu_cpu(next, cpu);
     sched_context_switch(vprev, vnext, now);
 }
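
For readers following along, below is a minimal standalone sketch (not Xen
code) of what this mapping does. The struct definitions are simplified
stand-ins, a plain array replaces the per_cpu() accessor, and the index
values { 0, 1, 0, 1 } are a hypothetical 2-thread core-scheduling layout;
none of this is taken from the patch beyond the arithmetic in
sched_unit2vcpu_cpu(). With one cpu per sched_resource every index is 0 and
the unit's first vcpu is always chosen; with two sibling threads per
resource, thread 0 and thread 1 resolve to different vcpus of the same
unit.

/*
 * Standalone sketch, not Xen code: models how the percpu sched_res_idx
 * added by this patch selects the right vcpu of a sched_unit on a cpu.
 * Structures and index values are simplified assumptions for illustration.
 */
#include <stdio.h>

#define NR_CPUS 4

struct vcpu { int vcpu_id; };

struct domain {
    struct vcpu *vcpu[8];       /* vcpu array indexed by vcpu_id */
};

struct sched_unit {
    struct domain *domain;
    unsigned int unit_id;       /* id of the unit's first vcpu */
};

/*
 * Stand-in for the per-cpu variable: the cpu's index within its
 * sched_resource. All zero without core scheduling; here a hypothetical
 * 2-thread layout where each core's second thread has index 1.
 */
static unsigned int sched_res_idx[NR_CPUS] = { 0, 1, 0, 1 };

/* Same arithmetic as the patch: first vcpu of the unit plus cpu index. */
static struct vcpu *sched_unit2vcpu_cpu(const struct sched_unit *unit,
                                        unsigned int cpu)
{
    return unit->domain->vcpu[unit->unit_id + sched_res_idx[cpu]];
}

int main(void)
{
    struct vcpu v[4] = { { 0 }, { 1 }, { 2 }, { 3 } };
    struct domain d = { .vcpu = { &v[0], &v[1], &v[2], &v[3] } };
    /* One unit spanning vcpus 2 and 3 (unit_id == 2). */
    struct sched_unit unit = { .domain = &d, .unit_id = 2 };

    for ( unsigned int cpu = 0; cpu < NR_CPUS; cpu++ )
        printf("cpu %u -> vcpu %d\n", cpu,
               sched_unit2vcpu_cpu(&unit, cpu)->vcpu_id);
    return 0;
}

Compiled and run, this prints vcpu 2 for cpus 0 and 2 and vcpu 3 for cpus
1 and 3, i.e. each thread of a core runs its own vcpu of the shared unit,
which is exactly why schedule() and sched_slave() above can no longer use
next->vcpu_list unconditionally.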