From patchwork Mon May 6 06:56:11 2019
X-Patchwork-Submitter: Jürgen Groß <jgross@suse.com>
X-Patchwork-Id: 10930535
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
 Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
 Dario Faggioli, Julien Grall, Jan Beulich
Date: Mon, 6 May 2019 08:56:11 +0200
Message-Id: <20190506065644.7415-13-jgross@suse.com>
In-Reply-To: <20190506065644.7415-1-jgross@suse.com>
References: <20190506065644.7415-1-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
Subject: [Xen-devel] [PATCH RFC V2 12/45] xen/sched: add scheduler helpers hiding vcpu

Add the following helpers taking a sched_item as input instead of a
vcpu:

- is_idle_item() similar to is_idle_vcpu()
- item_runnable() like vcpu_runnable()
- sched_set_res() to set the current processor of an item
- sched_item_cpu() to get the current processor of an item
- sched_{set|clear}_pause_flags[_atomic]() to modify pause_flags of the
  associated vcpu(s)
- sched_idle_item() to get the sched_item pointer of the idle vcpu of a
  specific physical cpu

Signed-off-by: Juergen Gross <jgross@suse.com>
---
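Illustration (not part of the patch): sched_set_res() folds the two
stores that call sites used to do by hand into one helper, keeping
vcpu->processor and item->res consistent. A hypothetical caller (the
function below is invented for illustration; 'item' is assumed to be a
valid sched_item and 'cpu' an online pCPU) would look like:

    /* Hypothetical caller, for illustration only. */
    static void example_move_item(struct sched_item *item, unsigned int cpu)
    {
        /*
         * Before: item->vcpu->processor = cpu;
         *         item->res = per_cpu(sched_res, cpu);
         * After: one helper updates both fields together.
         */
        sched_set_res(item, per_cpu(sched_res, cpu));
        ASSERT(sched_item_cpu(item) == cpu);
    }
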
 xen/common/sched_credit.c  |  3 +--
 xen/common/schedule.c      | 19 ++++++++--------
 xen/include/xen/sched-if.h | 56 ++++++++++++++++++++++++++++++++++++++++++----
 3 files changed, 62 insertions(+), 16 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 9e7c849b94..8cfe54ec36 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -1673,8 +1673,7 @@ csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
             SCHED_STAT_CRANK(migrate_queued);
             WARN_ON(vc->is_urgent);
             runq_remove(speer);
-            vc->processor = cpu;
-            vc->sched_item->res = per_cpu(sched_res, cpu);
+            sched_set_res(vc->sched_item, per_cpu(sched_res, cpu));
             /*
              * speer will start executing directly on cpu, without having to
              * go through runq_insert(). So we must update the runnable count
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index f4850a57f6..d56dc567ac 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -317,12 +317,11 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
     struct domain *d = v->domain;
     struct sched_item *item;
 
-    v->processor = processor;
-
     if ( (item = sched_alloc_item(v)) == NULL )
         return 1;
 
-    item->res = per_cpu(sched_res, processor);
+    sched_set_res(item, per_cpu(sched_res, processor));
+
     /* Initialise the per-vcpu timers. */
     init_timer(&v->periodic_timer, vcpu_periodic_timer_fn,
                v, v->processor);
@@ -436,8 +435,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
         sched_set_affinity(v, &cpumask_all, &cpumask_all);
 
-        v->processor = new_p;
-        v->sched_item->res = per_cpu(sched_res, new_p);
+        sched_set_res(v->sched_item, per_cpu(sched_res, new_p));
         /*
          * With v->processor modified we must not
          * - make any further changes assuming we hold the scheduler lock,
@@ -775,8 +773,9 @@ void restore_vcpu_affinity(struct domain *d)
         spinlock_t *lock;
         unsigned int old_cpu = v->processor;
         struct sched_item *item = v->sched_item;
+        struct sched_resource *res;
 
-        ASSERT(!vcpu_runnable(v));
+        ASSERT(!item_runnable(item));
 
         /*
          * Re-assign the initial processor as after resume we have no
@@ -807,12 +806,12 @@ void restore_vcpu_affinity(struct domain *d)
             }
         }
 
-        v->processor = cpumask_any(cpumask_scratch_cpu(cpu));
-        item->res = per_cpu(sched_res, v->processor);
+        res = per_cpu(sched_res, cpumask_any(cpumask_scratch_cpu(cpu)));
+        sched_set_res(item, res);
 
         lock = item_schedule_lock_irq(item);
-        item->res = sched_pick_resource(vcpu_scheduler(v), item);
-        v->processor = item->res->processor;
+        res = sched_pick_resource(vcpu_scheduler(v), item);
+        sched_set_res(item, res);
         spin_unlock_irq(lock);
 
         if ( old_cpu != v->processor )
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 5e024dceb0..51c3477580 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -49,6 +49,57 @@ DECLARE_PER_CPU(struct scheduler *, scheduler);
 DECLARE_PER_CPU(struct cpupool *, cpupool);
 DECLARE_PER_CPU(struct sched_resource *, sched_res);
 
+static inline bool is_idle_item(const struct sched_item *item)
+{
+    return is_idle_vcpu(item->vcpu);
+}
+
+static inline bool item_runnable(const struct sched_item *item)
+{
+    return vcpu_runnable(item->vcpu);
+}
+
+static inline void sched_set_res(struct sched_item *item,
+                                 struct sched_resource *res)
+{
+    item->vcpu->processor = res->processor;
+    item->res = res;
+}
+
+static inline unsigned int sched_item_cpu(struct sched_item *item)
+{
+    return item->res->processor;
+}
+
+static inline void sched_set_pause_flags(struct sched_item *item,
+                                         unsigned int bit)
+{
+    __set_bit(bit, &item->vcpu->pause_flags);
+}
+
+static inline void sched_clear_pause_flags(struct sched_item *item,
+                                           unsigned int bit)
+{
+    __clear_bit(bit, &item->vcpu->pause_flags);
+}
+
+static inline void sched_set_pause_flags_atomic(struct sched_item *item,
+                                                unsigned int bit)
+{
+    set_bit(bit, &item->vcpu->pause_flags);
+}
+
+static inline void sched_clear_pause_flags_atomic(struct sched_item *item,
+                                                  unsigned int bit)
+{
+    clear_bit(bit, &item->vcpu->pause_flags);
+}
+
+static inline struct sched_item *sched_idle_item(unsigned int cpu)
+{
+    return idle_vcpu[cpu]->sched_item;
+}
+
 /*
  * Scratch space, for avoiding having too many cpumask_t on the stack.
  * Within each scheduler, when using the scratch mask of one pCPU:
@@ -363,10 +414,7 @@ static inline void sched_migrate(const struct scheduler *s,
     if ( s->migrate )
         s->migrate(s, item, cpu);
     else
-    {
-        item->vcpu->processor = cpu;
-        item->res = per_cpu(sched_res, cpu);
-    }
+        sched_set_res(item, per_cpu(sched_res, cpu));
 }
 
 static inline struct sched_resource *sched_pick_resource(
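
Illustration (not part of the patch): the plain pause-flag helpers use
the non-atomic __set_bit()/__clear_bit(), so they are only safe when the
caller already serializes updates of pause_flags (e.g. by holding the
item's scheduler lock), while the _atomic variants map to
set_bit()/clear_bit() for concurrent updaters. Hypothetical callers
(function names invented; _VPF_blocked is the existing pause_flags bit
from xen/sched.h) would pick between them like this:

    /* Hypothetical callers, for illustration only. */
    static void example_mark_blocked(struct sched_item *item)
    {
        /* Updates are serialized here: the non-atomic helper suffices. */
        sched_set_pause_flags(item, _VPF_blocked);
    }

    static void example_clear_blocked(struct sched_item *item)
    {
        /* May race with other pause_flags updates: use the atomic one. */
        sched_clear_pause_flags_atomic(item, _VPF_blocked);
    }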