From patchwork Fri Aug 9 14:57:47 2019
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11086639
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Robert VanVossen, Dario Faggioli, Julien Grall, Josh Whitehead,
    Meng Xu, Jan Beulich
Date: Fri, 9 Aug 2019 16:57:47 +0200
Message-Id: <20190809145833.1020-3-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20190809145833.1020-1-jgross@suse.com>
References: <20190809145833.1020-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v2 02/48] xen/sched: move per-vcpu scheduler private data pointer to sched_unit

This prepares making the different schedulers vcpu-agnostic.

Note that some scheduler-specific accessor functions are misnamed after
this patch. This will be corrected in later patches.
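A minimal sketch of what this move means for scheduler code. The structures
below are simplified stand-ins reduced to the fields relevant here, and
example_sched_data plus the sched_data() accessor are hypothetical names
invented for illustration, not Xen declarations; the point is only that the
scheduler-private pointer now lives in the unit and per-vcpu code reaches it
through v->sched_unit->priv:

/* Illustrative only: simplified stand-ins for the Xen structures. */
#include <stdio.h>

struct sched_unit;

struct vcpu {
    struct sched_unit *sched_unit;   /* unit this vcpu belongs to */
    /* void *sched_priv;                removed by this patch */
};

struct sched_unit {
    struct vcpu *vcpu_list;          /* vcpu(s) backing this unit */
    void        *priv;               /* scheduler private data, now here */
};

/* Hypothetical per-vcpu scheduler data (think struct csched_vcpu). */
struct example_sched_data {
    int credit;
};

/* Accessors like csched2_vcpu() used to return v->sched_priv; after this
 * patch they dereference through the unit instead. */
static struct example_sched_data *sched_data(const struct vcpu *v)
{
    return v->sched_unit->priv;
}

int main(void)
{
    struct sched_unit unit = { 0 };
    struct vcpu v = { .sched_unit = &unit };
    struct example_sched_data d = { .credit = 100 };

    unit.vcpu_list = &v;
    unit.priv = &d;    /* where sched_alloc_vdata()'s result ends up */

    printf("credit via vcpu accessor: %d\n", sched_data(&v)->credit);
    return 0;
}

At this point in the series each unit still represents a single vcpu, so the
old and new paths reach the same data; that is why the misnamed per-"vcpu"
accessors can keep working until later patches rename them.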
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
 xen/common/sched_arinc653.c |  4 ++--
 xen/common/sched_credit.c   |  6 +++---
 xen/common/sched_credit2.c  | 10 +++++-----
 xen/common/sched_null.c     |  4 ++--
 xen/common/sched_rt.c       |  4 ++--
 xen/common/schedule.c       | 14 +++++++-------
 xen/include/xen/sched.h     |  2 +-
 7 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
index 2059314791..c12b36b2d8 100644
--- a/xen/common/sched_arinc653.c
+++ b/xen/common/sched_arinc653.c
@@ -53,7 +53,7 @@
  * Return a pointer to the ARINC 653-specific scheduler data information
  * associated with the given VCPU (vc)
  */
-#define AVCPU(vc) ((arinc653_vcpu_t *)(vc)->sched_priv)
+#define AVCPU(vc) ((arinc653_vcpu_t *)(vc)->sched_unit->priv)
 
 /**
  * Return the global scheduler private data given the scheduler ops pointer
@@ -647,7 +647,7 @@ a653_switch_sched(struct scheduler *new_ops, unsigned int cpu,
 
     ASSERT(!pdata && svc && is_idle_vcpu(svc->vc));
 
-    idle_vcpu[cpu]->sched_priv = vdata;
+    idle_vcpu[cpu]->sched_unit->priv = vdata;
 
     return &sd->_lock;
 }
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 464194a578..e835a4930a 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -83,7 +83,7 @@
     ((struct csched_private *)((_ops)->sched_data))
 #define CSCHED_PCPU(_c)     \
     ((struct csched_pcpu *)per_cpu(schedule_data, _c).sched_priv)
-#define CSCHED_VCPU(_vcpu)  ((struct csched_vcpu *) (_vcpu)->sched_priv)
+#define CSCHED_VCPU(_vcpu)  ((struct csched_vcpu *) (_vcpu)->sched_unit->priv)
 #define CSCHED_DOM(_dom)    ((struct csched_dom *) (_dom)->sched_priv)
 #define RUNQ(_cpu)          (&(CSCHED_PCPU(_cpu)->runq))
 
@@ -634,7 +634,7 @@ csched_switch_sched(struct scheduler *new_ops, unsigned int cpu,
 
     ASSERT(svc && is_idle_vcpu(svc->vcpu));
 
-    idle_vcpu[cpu]->sched_priv = vdata;
+    idle_vcpu[cpu]->sched_unit->priv = vdata;
 
     /*
      * We are holding the runqueue lock already (it's been taken in
@@ -1017,7 +1017,7 @@ static void
 csched_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
-    struct csched_vcpu *svc = vc->sched_priv;
+    struct csched_vcpu *svc = unit->priv;
     spinlock_t *lock;
 
     BUG_ON( is_idle_vcpu(vc) );
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 2120da6f98..a2403e4198 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -573,7 +573,7 @@ static inline struct csched2_pcpu *csched2_pcpu(unsigned int cpu)
 
 static inline struct csched2_vcpu *csched2_vcpu(const struct vcpu *v)
 {
-    return v->sched_priv;
+    return v->sched_unit->priv;
 }
 
 static inline struct csched2_dom *csched2_dom(const struct domain *d)
@@ -971,7 +971,7 @@ _runq_assign(struct csched2_vcpu *svc, struct csched2_runqueue_data *rqd)
 static void
 runq_assign(const struct scheduler *ops, struct vcpu *vc)
 {
-    struct csched2_vcpu *svc = vc->sched_priv;
+    struct csched2_vcpu *svc = vc->sched_unit->priv;
 
     ASSERT(svc->rqd == NULL);
 
@@ -998,7 +998,7 @@ _runq_deassign(struct csched2_vcpu *svc)
 static void
 runq_deassign(const struct scheduler *ops, struct vcpu *vc)
 {
-    struct csched2_vcpu *svc = vc->sched_priv;
+    struct csched2_vcpu *svc = vc->sched_unit->priv;
 
     ASSERT(svc->rqd == c2rqd(ops, vc->processor));
 
@@ -3109,7 +3109,7 @@ static void
 csched2_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
-    struct csched2_vcpu *svc = vc->sched_priv;
+    struct csched2_vcpu *svc = unit->priv;
     struct csched2_dom * const sdom = svc->sdom;
     spinlock_t *lock;
 
@@ -3891,7 +3891,7 @@ csched2_switch_sched(struct scheduler *new_ops, unsigned int cpu,
     ASSERT(!local_irq_is_enabled());
     write_lock(&prv->lock);
 
-    idle_vcpu[cpu]->sched_priv = vdata;
+    idle_vcpu[cpu]->sched_unit->priv = vdata;
 
     rqi = init_pdata(prv, pdata, cpu);
 
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index fd031c989b..bdba237982 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -116,7 +116,7 @@ static inline struct null_private *null_priv(const struct scheduler *ops)
 
 static inline struct null_vcpu *null_vcpu(const struct vcpu *v)
 {
-    return v->sched_priv;
+    return v->sched_unit->priv;
 }
 
 static inline bool vcpu_check_affinity(struct vcpu *v, unsigned int cpu,
@@ -422,7 +422,7 @@ static spinlock_t *null_switch_sched(struct scheduler *new_ops,
 
     ASSERT(nvc && is_idle_vcpu(nvc->vcpu));
 
-    idle_vcpu[cpu]->sched_priv = vdata;
+    idle_vcpu[cpu]->sched_unit->priv = vdata;
 
     /*
      * We are holding the runqueue lock already (it's been taken in
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index da76a41436..0f97c0f2a5 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -235,7 +235,7 @@ static inline struct rt_private *rt_priv(const struct scheduler *ops)
 
 static inline struct rt_vcpu *rt_vcpu(const struct vcpu *vcpu)
 {
-    return vcpu->sched_priv;
+    return vcpu->sched_unit->priv;
 }
 
 static inline struct list_head *rt_runq(const struct scheduler *ops)
@@ -760,7 +760,7 @@ rt_switch_sched(struct scheduler *new_ops, unsigned int cpu,
         dprintk(XENLOG_DEBUG, "RTDS: timer initialized on cpu %u\n", cpu);
     }
 
-    idle_vcpu[cpu]->sched_priv = vdata;
+    idle_vcpu[cpu]->sched_unit->priv = vdata;
 
     return &prv->lock;
 }
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 2c1a72c3c9..038ebf5ae9 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -81,7 +81,7 @@ static spinlock_t *
 sched_idle_switch_sched(struct scheduler *new_ops, unsigned int cpu,
                         void *pdata, void *vdata)
 {
-    idle_vcpu[cpu]->sched_priv = NULL;
+    idle_vcpu[cpu]->sched_unit->priv = NULL;
 
     return &sched_free_cpu_lock;
 }
@@ -327,8 +327,8 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
 
     init_timer(&v->poll_timer, poll_timer_fn, v, v->processor);
 
-    v->sched_priv = sched_alloc_vdata(dom_scheduler(d), unit, d->sched_priv);
-    if ( v->sched_priv == NULL )
+    unit->priv = sched_alloc_vdata(dom_scheduler(d), unit, d->sched_priv);
+    if ( unit->priv == NULL )
     {
         v->sched_unit = NULL;
         xfree(unit);
@@ -423,7 +423,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
     {
         spinlock_t *lock;
 
-        vcpudata = v->sched_priv;
+        vcpudata = v->sched_unit->priv;
 
         migrate_timer(&v->periodic_timer, new_p);
         migrate_timer(&v->singleshot_timer, new_p);
@@ -441,7 +441,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
          */
         spin_unlock_irq(lock);
 
-        v->sched_priv = vcpu_priv[v->vcpu_id];
+        v->sched_unit->priv = vcpu_priv[v->vcpu_id];
 
         if ( !d->is_dying )
             sched_move_irqs(v);
@@ -473,7 +473,7 @@ void sched_destroy_vcpu(struct vcpu *v)
     if ( test_and_clear_bool(v->is_urgent) )
         atomic_dec(&per_cpu(schedule_data, v->processor).urgent_count);
     sched_remove_unit(vcpu_scheduler(v), unit);
-    sched_free_vdata(vcpu_scheduler(v), v->sched_priv);
+    sched_free_vdata(vcpu_scheduler(v), unit->priv);
     xfree(unit);
     v->sched_unit = NULL;
 }
@@ -1922,7 +1922,7 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
      */
     old_lock = pcpu_schedule_lock_irqsave(cpu, &flags);
 
-    vpriv_old = idle->sched_priv;
+    vpriv_old = idle->sched_unit->priv;
     ppriv_old = sd->sched_priv;
     new_lock = sched_switch_sched(new_ops, cpu, ppriv, vpriv);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index d7dd182885..a389ba5e1a 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -162,7 +162,6 @@ struct vcpu
     struct timer     poll_timer;    /* timeout for SCHEDOP_poll */
 
     struct sched_unit *sched_unit;
-    void            *sched_priv;    /* scheduler-specific data */
 
     struct vcpu_runstate_info runstate;
 #ifndef CONFIG_COMPAT
@@ -277,6 +276,7 @@ struct vcpu
 
 struct sched_unit {
     struct domain         *domain;
    struct vcpu           *vcpu_list;
+    void                  *priv;      /* scheduler private data */
     int                    unit_id;
 };
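
As a usage note on the schedule.c hunks above: allocation and teardown of the
scheduler data now go exclusively through the unit. A rough sketch under the
same simplifications as before (the example_* helpers and 64-byte allocation
are hypothetical, standing in for sched_alloc_vdata()/sched_free_vdata(); not
the actual Xen code):

#include <stdlib.h>

struct sched_unit {
    void *priv;                      /* scheduler private data */
};

struct vcpu {
    struct sched_unit *sched_unit;
};

/* Stand-in for sched_alloc_vdata(): allocate scheduler-private data. */
static void *example_alloc_vdata(void)
{
    return calloc(1, 64);
}

/* Mirrors the sched_init_vcpu() change: the result is stored in
 * unit->priv rather than in the (now removed) v->sched_priv. */
static int example_init_vcpu(struct vcpu *v, struct sched_unit *unit)
{
    v->sched_unit = unit;
    unit->priv = example_alloc_vdata();
    if ( unit->priv == NULL )
    {
        v->sched_unit = NULL;
        free(unit);
        return -1;
    }
    return 0;
}

/* Mirrors the sched_destroy_vcpu() change: free via unit->priv. */
static void example_destroy_vcpu(struct vcpu *v)
{
    struct sched_unit *unit = v->sched_unit;

    free(unit->priv);
    free(unit);
    v->sched_unit = NULL;
}

int main(void)
{
    struct vcpu v = { 0 };
    struct sched_unit *unit = calloc(1, sizeof(*unit));

    if ( unit && example_init_vcpu(&v, unit) == 0 )
        example_destroy_vcpu(&v);
    return 0;
}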