From patchwork Fri Sep 27 07:00:34 2019
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11163919
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
    George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall,
    Jan Beulich, Dario Faggioli
Date: Fri, 27 Sep 2019 09:00:34 +0200
Message-Id: <20190927070050.12405-31-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20190927070050.12405-1-jgross@suse.com>
References: <20190927070050.12405-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v4 30/46] xen/sched: add support for multiple vcpus per sched unit where missing
List-Id: Xen developer discussion

In several places support for multiple vcpus per sched unit is still
missing. Add that missing support (with the exception of the initial
allocation) and the helpers it needs.
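
As background for the helpers added below: the idea is that a sched unit
keeps a per-runstate count of its vcpus, so per-unit questions such as
"is anything in this unit running?" can be answered from that counter
instead of by inspecting only the unit's first vcpu. The following
standalone sketch models that idea; it is illustrative only, and the
names used here (unit_model, model_unit_running) are simplified
stand-ins for Xen's sched_unit and unit_running(), not Xen code.

    /* Standalone sketch -- simplified stand-ins for Xen's sched_unit. */
    #include <stdio.h>

    enum runstate {
        RUNSTATE_running,
        RUNSTATE_runnable,
        RUNSTATE_blocked,
        RUNSTATE_offline,
        NR_RUNSTATES
    };

    struct unit_model {
        /* How many vcpus of the unit are in each runstate. */
        unsigned int runstate_cnt[NR_RUNSTATES];
    };

    /* Mirrors the idea of unit_running(): non-zero if any vcpu runs. */
    static unsigned int model_unit_running(const struct unit_model *u)
    {
        return u->runstate_cnt[RUNSTATE_running];
    }

    int main(void)
    {
        struct unit_model u = { .runstate_cnt = { 0 } };

        u.runstate_cnt[RUNSTATE_runnable] = 2;  /* two vcpus wait for a cpu */
        printf("running? %u\n", model_unit_running(&u));  /* prints 0 */

        u.runstate_cnt[RUNSTATE_running] = 1;   /* one vcpu got scheduled */
        printf("running? %u\n", model_unit_running(&u));  /* prints 1 */

        return 0;
    }
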
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
RFC V2:
- fix vcpu_runstate_helper()
V1:
- add special handling for idle unit in unit_runnable() and
  unit_runnable_state()
V2:
- handle affinity_broken correctly (Jan Beulich)
V3:
- type for cpu -> unsigned int (Jan Beulich)
---
 xen/common/domain.c        |  5 ++++-
 xen/common/schedule.c      |  9 +++++----
 xen/include/xen/sched-if.h | 16 +++++++++++++++-
 3 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 466b9c1b73..ea1225367d 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1273,7 +1273,10 @@ int vcpu_reset(struct vcpu *v)
     v->async_exception_mask = 0;
     memset(v->async_exception_state, 0, sizeof(v->async_exception_state));
 #endif
-    v->affinity_broken = 0;
+    if ( v->affinity_broken & VCPU_AFFINITY_OVERRIDE )
+        vcpu_temporary_affinity(v, NR_CPUS, VCPU_AFFINITY_OVERRIDE);
+    if ( v->affinity_broken & VCPU_AFFINITY_WAIT )
+        vcpu_temporary_affinity(v, NR_CPUS, VCPU_AFFINITY_WAIT);
     clear_bit(_VPF_blocked, &v->pause_flags);
     clear_bit(_VPF_in_reset, &v->pause_flags);
 
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 4f7f195915..fa3d88938a 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -250,8 +250,9 @@ static inline void vcpu_runstate_change(
     s_time_t delta;
     struct sched_unit *unit = v->sched_unit;
 
-    ASSERT(v->runstate.state != new_state);
     ASSERT(spin_is_locked(get_sched_res(v->processor)->schedule_lock));
+    if ( v->runstate.state == new_state )
+        return;
 
     vcpu_urgent_count_update(v);
 
@@ -1727,14 +1728,14 @@ static void sched_switch_units(struct sched_resource *sr,
              (next->vcpu_list->runstate.state == RUNSTATE_runnable) ?
              (now - next->state_entry_time) : 0, prev->next_time);
 
-    ASSERT(prev->vcpu_list->runstate.state == RUNSTATE_running);
+    ASSERT(unit_running(prev));
     TRACE_4D(TRC_SCHED_SWITCH, prev->domain->domain_id, prev->unit_id,
              next->domain->domain_id, next->unit_id);
 
     sched_unit_runstate_change(prev, false, now);
 
-    ASSERT(next->vcpu_list->runstate.state != RUNSTATE_running);
+    ASSERT(!unit_running(next));
     sched_unit_runstate_change(next, true, now);
 
     /*
@@ -1856,7 +1857,7 @@ void sched_context_switched(struct vcpu *vprev, struct vcpu *vnext)
         while ( atomic_read(&next->rendezvous_out_cnt) )
            cpu_relax();
     }
-    else if ( vprev != vnext )
+    else if ( vprev != vnext && sched_granularity == 1 )
         context_saved(vprev);
 }
 
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 7e568a9d9f..983f2ece83 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -81,6 +81,11 @@ static inline bool is_unit_online(const struct sched_unit *unit)
     return false;
 }
 
+static inline unsigned int unit_running(const struct sched_unit *unit)
+{
+    return unit->runstate_cnt[RUNSTATE_running];
+}
+
 /* Returns true if at least one vcpu of the unit is runnable. */
 static inline bool unit_runnable(const struct sched_unit *unit)
 {
@@ -126,7 +131,16 @@ static inline bool unit_runnable_state(const struct sched_unit *unit)
 
 static inline void sched_set_res(struct sched_unit *unit,
                                  struct sched_resource *res)
 {
-    unit->vcpu_list->processor = res->master_cpu;
+    unsigned int cpu = cpumask_first(res->cpus);
+    struct vcpu *v;
+
+    for_each_sched_unit_vcpu ( unit, v )
+    {
+        ASSERT(cpu < nr_cpu_ids);
+        v->processor = cpu;
+        cpu = cpumask_next(cpu, res->cpus);
+    }
+
     unit->res = res;
 }
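
For readers not familiar with the cpumask walk performed by the new
sched_set_res() above, the standalone sketch below reproduces the same
pattern with a plain bit mask: hand out one cpu from the resource's mask
to each vcpu of the unit in turn, asserting that the mask is not
exhausted. All names here (next_cpu, NR_CPUS_MODEL, processor[]) are
illustrative stand-ins, not Xen interfaces.

    #include <assert.h>
    #include <stdio.h>

    #define NR_CPUS_MODEL 8u

    /* Smallest set bit at or above 'start', or NR_CPUS_MODEL if none. */
    static unsigned int next_cpu(unsigned long mask, unsigned int start)
    {
        for ( unsigned int cpu = start; cpu < NR_CPUS_MODEL; cpu++ )
            if ( mask & (1ul << cpu) )
                return cpu;
        return NR_CPUS_MODEL;
    }

    int main(void)
    {
        unsigned long res_cpus = 0x2cul;   /* resource owns cpus 2, 3 and 5 */
        unsigned int processor[3];         /* one "vcpu" per unit member    */
        unsigned int cpu = next_cpu(res_cpus, 0);

        for ( unsigned int i = 0; i < 3; i++ )
        {
            /* The unit must not have more vcpus than the resource has cpus. */
            assert(cpu < NR_CPUS_MODEL);
            processor[i] = cpu;
            cpu = next_cpu(res_cpus, cpu + 1);
        }

        for ( unsigned int i = 0; i < 3; i++ )
            printf("vcpu %u -> cpu %u\n", i, processor[i]);

        return 0;
    }

The ASSERT() in the real function encodes the same invariant as the
assert() in this sketch: a scheduling resource must offer at least as
many cpus as the unit has vcpus.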