From patchwork Fri Sep 27 07:00:37 2019
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11163959
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Date: Fri, 27 Sep 2019 09:00:37 +0200
Message-Id: <20190927070050.12405-34-jgross@suse.com>
In-Reply-To: <20190927070050.12405-1-jgross@suse.com>
References: <20190927070050.12405-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v4 33/46] xen/sched: add a percpu resource index

Add a percpu variable holding the index of the cpu in the current
sched_resource structure. This index is used to get the correct vcpu
of a sched_unit on a specific cpu.

For now this index will be zero for all cpus, but with core scheduling
it will be possible to have higher values, too.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Dario Faggioli
---
RFC V2: new patch (carved out from RFC V1 patch 49)
V4:
- make function parameter const (Jan Beulich)
---
 xen/common/schedule.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index b11a1c2538..f79cd2a5a6 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -75,6 +75,7 @@ static void poll_timer_fn(void *data);
 /* This is global for now so that private implementations can reach it */
 DEFINE_PER_CPU(struct scheduler *, scheduler);
 DEFINE_PER_CPU_READ_MOSTLY(struct sched_resource *, sched_res);
+static DEFINE_PER_CPU_READ_MOSTLY(unsigned int, sched_res_idx);
 
 /* Scratch space for cpumasks. */
 DEFINE_PER_CPU(cpumask_t, cpumask_scratch);
@@ -142,6 +143,12 @@ static struct scheduler sched_idle_ops = {
     .switch_sched = sched_idle_switch_sched,
 };
 
+static inline struct vcpu *sched_unit2vcpu_cpu(const struct sched_unit *unit,
+                                               unsigned int cpu)
+{
+    return unit->domain->vcpu[unit->unit_id + per_cpu(sched_res_idx, cpu)];
+}
+
 static inline struct scheduler *dom_scheduler(const struct domain *d)
 {
     if ( likely(d->cpupool != NULL) )
@@ -2028,7 +2035,7 @@ static void sched_slave(void)
 
     pcpu_schedule_unlock_irq(lock, cpu);
 
-    sched_context_switch(vprev, next->vcpu_list, now);
+    sched_context_switch(vprev, sched_unit2vcpu_cpu(next, cpu), now);
 }
 
 /*
@@ -2089,7 +2096,7 @@ static void schedule(void)
 
     pcpu_schedule_unlock_irq(lock, cpu);
 
-    vnext = next->vcpu_list;
+    vnext = sched_unit2vcpu_cpu(next, cpu);
     sched_context_switch(vprev, vnext, now);
 }
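
For illustration, here is a minimal standalone sketch of the indexing
arithmetic the new helper relies on. The types and names below are
simplified stand-ins invented for this example, not the real Xen
structures: the idea is that a sched_unit's vcpus start at
vcpu[unit_id], and each cpu of the underlying sched_resource adds its
private sched_res_idx on top, so today (index 0 everywhere) the lookup
degenerates to the unit's first vcpu, while with 2-way core scheduling
the sibling cpu would resolve to the next vcpu of the unit.

#include <stdio.h>

/* Hypothetical, simplified stand-ins for the Xen structures. */
struct vcpu { int id; };
struct domain { struct vcpu *vcpu[4]; };
struct sched_unit { struct domain *domain; unsigned int unit_id; };

/*
 * Stand-in for the per-cpu sched_res_idx variable: with one cpu per
 * sched_resource all entries stay 0; with 2-way core scheduling the
 * second sibling of each core would hold 1.
 */
static unsigned int sched_res_idx[4] = { 0, 1, 0, 1 };

/* Same arithmetic as the patch's sched_unit2vcpu_cpu(). */
static struct vcpu *unit2vcpu_cpu(const struct sched_unit *unit,
                                  unsigned int cpu)
{
    return unit->domain->vcpu[unit->unit_id + sched_res_idx[cpu]];
}

int main(void)
{
    struct vcpu v0 = { 0 }, v1 = { 1 };
    struct domain d = { { &v0, &v1, NULL, NULL } };
    struct sched_unit unit = { &d, 0 }; /* unit covering vcpus 0 and 1 */

    /* cpu 0 (index 0) gets vcpu 0; its sibling cpu 1 (index 1) vcpu 1 */
    printf("cpu0 -> vcpu %d\n", unit2vcpu_cpu(&unit, 0)->id);
    printf("cpu1 -> vcpu %d\n", unit2vcpu_cpu(&unit, 1)->id);
    return 0;
}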