From patchwork Fri Mar 29 15:08:58 2019
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 10877237
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Robert VanVossen, Dario Faggioli, Julien Grall, Josh Whitehead,
    Meng Xu, Jan Beulich
Date: Fri, 29 Mar 2019 16:08:58 +0100
Message-Id: <20190329150934.17694-14-jgross@suse.com>
In-Reply-To: <20190329150934.17694-1-jgross@suse.com>
References: <20190329150934.17694-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH RFC 13/49] xen/sched: let pick_cpu return a scheduler resource

Instead of returning a physical cpu number, let pick_cpu() return a
scheduler resource. Rename pick_cpu() to pick_resource() to reflect
that change.
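
A sketch of the interface change for context (illustration only, not
part of the diff below):

    /* Before: the pick_cpu hook returns a pcpu number. */
    int (*pick_cpu)(const struct scheduler *, struct sched_item *);

    /* After: the pick_resource hook returns a scheduler resource. */
    struct sched_resource *(*pick_resource)(const struct scheduler *,
                                            struct sched_item *);

Callers and implementations convert between the two representations
with the idioms used throughout this patch:

    res = per_cpu(sched_res, cpu);    /* pcpu number -> resource */
    cpu = res->processor;             /* resource -> pcpu number */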
Signed-off-by: Juergen Gross
---
 xen/common/sched_arinc653.c  | 12 ++++++------
 xen/common/sched_credit.c    | 16 ++++++++--------
 xen/common/sched_credit2.c   | 22 +++++++++++-----------
 xen/common/sched_null.c      | 20 +++++++++++---------
 xen/common/sched_rt.c        | 18 +++++++++---------
 xen/common/schedule.c        |  8 +++++---
 xen/include/xen/perfc_defn.h |  2 +-
 xen/include/xen/sched-if.h   |  4 ++--
 8 files changed, 53 insertions(+), 49 deletions(-)

diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
index f5af8b972d..a775be4cbc 100644
--- a/xen/common/sched_arinc653.c
+++ b/xen/common/sched_arinc653.c
@@ -601,15 +601,15 @@ a653sched_do_schedule(
 }
 
 /**
- * Xen scheduler callback function to select a CPU for the VCPU to run on
+ * Xen scheduler callback function to select a resource for the VCPU to run on
  *
  * @param ops       Pointer to this instance of the scheduler structure
  * @param item      Pointer to struct sched_item
  *
- * @return          Number of selected physical CPU
+ * @return          Scheduler resource to run on
  */
-static int
-a653sched_pick_cpu(const struct scheduler *ops, struct sched_item *item)
+static struct sched_resource *
+a653sched_pick_resource(const struct scheduler *ops, struct sched_item *item)
 {
     struct vcpu *vc = item->vcpu;
     cpumask_t *online;
@@ -627,7 +627,7 @@ a653sched_pick_cpu(const struct scheduler *ops, struct sched_item *item)
         || (cpu >= nr_cpu_ids) )
         cpu = vc->processor;
 
-    return cpu;
+    return per_cpu(sched_res, cpu);
 }
 
 /**
@@ -730,7 +730,7 @@ static const struct scheduler sched_arinc653_def = {
 
     .do_schedule    = a653sched_do_schedule,
 
-    .pick_cpu       = a653sched_pick_cpu,
+    .pick_resource  = a653sched_pick_resource,
 
     .switch_sched   = a653_switch_sched,
 
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index fc068a1c5f..14b749dc1a 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -867,8 +867,8 @@ _csched_cpu_pick(const struct scheduler *ops, struct vcpu *vc, bool_t commit)
     return cpu;
 }
 
-static int
-csched_cpu_pick(const struct scheduler *ops, struct sched_item *item)
+static struct sched_resource *
+csched_res_pick(const struct scheduler *ops, struct sched_item *item)
 {
     struct vcpu *vc = item->vcpu;
     struct csched_vcpu *svc = CSCHED_VCPU(vc);
@@ -881,7 +881,7 @@ csched_cpu_pick(const struct scheduler *ops, struct sched_item *item)
      * get boosted, which we don't deserve as we are "only" migrating.
      */
     set_bit(CSCHED_FLAG_VCPU_MIGRATING, &svc->flags);
-    return _csched_cpu_pick(ops, vc, 1);
+    return per_cpu(sched_res, _csched_cpu_pick(ops, vc, 1));
 }
 
 static inline void
@@ -981,7 +981,7 @@ csched_vcpu_acct(struct csched_private *prv, unsigned int cpu)
         /*
          * If it's been active a while, check if we'd be better off
          * migrating it to run elsewhere (see multi-core and multi-thread
-         * support in csched_cpu_pick()).
+         * support in csched_res_pick()).
          */
         new_cpu = _csched_cpu_pick(ops, current, 0);
 
@@ -1036,11 +1036,11 @@ csched_item_insert(const struct scheduler *ops, struct sched_item *item)
 
     BUG_ON( is_idle_vcpu(vc) );
 
-    /* csched_cpu_pick() looks in vc->processor's runq, so we need the lock. */
+    /* csched_res_pick() looks in vc->processor's runq, so we need the lock. */
     lock = vcpu_schedule_lock_irq(vc);
 
-    vc->processor = csched_cpu_pick(ops, item);
-    item->res = per_cpu(sched_res, vc->processor);
+    item->res = csched_res_pick(ops, item);
+    vc->processor = item->res->processor;
 
     spin_unlock_irq(lock);
 
@@ -2290,7 +2290,7 @@ static const struct scheduler sched_credit_def = {
     .adjust_affinity= csched_aff_cntl,
     .adjust_global  = csched_sys_cntl,
 
-    .pick_cpu       = csched_cpu_pick,
+    .pick_resource  = csched_res_pick,
     .do_schedule    = csched_schedule,
 
     .dump_cpu_state = csched_dump_pcpu,
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 614d71d948..c8ae585272 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -625,9 +625,9 @@ static inline bool has_cap(const struct csched2_vcpu *svc)
  * runq, _always_ happens by means of tickling:
  *  - when a vcpu wakes up, it calls csched2_item_wake(), which calls
  *    runq_tickle();
- *  - when a migration is initiated in schedule.c, we call csched2_cpu_pick(),
+ *  - when a migration is initiated in schedule.c, we call csched2_res_pick(),
  *    csched2_item_migrate() (which calls migrate()) and csched2_item_wake().
- *    csched2_cpu_pick() looks for the least loaded runq and return just any
+ *    csched2_res_pick() looks for the least loaded runq and returns just any
  *    of its processors. Then, csched2_item_migrate() just moves the vcpu to
  *    the chosen runq, and it is again runq_tickle(), called by
  *    csched2_item_wake() that actually decides what pcpu to use within the
@@ -676,7 +676,7 @@ void smt_idle_mask_clear(unsigned int cpu, cpumask_t *mask)
 }
 
 /*
- * In csched2_cpu_pick(), it may not be possible to actually look at remote
+ * In csched2_res_pick(), it may not be possible to actually look at remote
  * runqueues (the trylock-s on their spinlocks can fail!). If that happens,
 * we pick, in order of decreasing preference:
 *  1) svc's current pcpu, if it is part of svc's soft affinity;
@@ -2201,8 +2201,8 @@ csched2_context_saved(const struct scheduler *ops, struct sched_item *item)
 }
 
 #define MAX_LOAD (STIME_MAX)
-static int
-csched2_cpu_pick(const struct scheduler *ops, struct sched_item *item)
+static struct sched_resource *
+csched2_res_pick(const struct scheduler *ops, struct sched_item *item)
 {
     struct csched2_private *prv = csched2_priv(ops);
     struct vcpu *vc = item->vcpu;
@@ -2214,7 +2214,7 @@ csched2_cpu_pick(const struct scheduler *ops, struct sched_item *item)
 
     ASSERT(!cpumask_empty(&prv->active_queues));
 
-    SCHED_STAT_CRANK(pick_cpu);
+    SCHED_STAT_CRANK(pick_resource);
 
     /* Locking:
      * - Runqueue lock of vc->processor is already locked
@@ -2423,7 +2423,7 @@ csched2_cpu_pick(const struct scheduler *ops, struct sched_item *item)
                     (unsigned char *)&d);
     }
 
-    return new_cpu;
+    return per_cpu(sched_res, new_cpu);
 }
 
 /* Working state of the load-balancing algorithm */
@@ -3120,11 +3120,11 @@ csched2_item_insert(const struct scheduler *ops, struct sched_item *item)
     ASSERT(!is_idle_vcpu(vc));
     ASSERT(list_empty(&svc->runq_elem));
 
-    /* csched2_cpu_pick() expects the pcpu lock to be held */
+    /* csched2_res_pick() expects the pcpu lock to be held */
     lock = vcpu_schedule_lock_irq(vc);
 
-    vc->processor = csched2_cpu_pick(ops, item);
-    item->res = per_cpu(sched_res, vc->processor);
+    item->res = csched2_res_pick(ops, item);
+    vc->processor = item->res->processor;
 
     spin_unlock_irq(lock);
 
@@ -4113,7 +4113,7 @@ static const struct scheduler sched_credit2_def = {
     .adjust_affinity= csched2_aff_cntl,
     .adjust_global  = csched2_sys_cntl,
 
-    .pick_cpu       = csched2_cpu_pick,
+    .pick_resource  = csched2_res_pick,
     .migrate        = csched2_item_migrate,
     .do_schedule    = csched2_schedule,
     .context_saved  = csched2_context_saved,
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index 114b32e2e1..a08f23993c 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -269,9 +269,11 @@ static void null_free_domdata(const struct scheduler *ops, void *data)
  *
  * So this is not part of any hot path.
  */
-static unsigned int pick_cpu(struct null_private *prv, struct vcpu *v)
+static struct sched_resource *
+pick_res(struct null_private *prv, struct sched_item *item)
 {
     unsigned int bs;
+    struct vcpu *v = item->vcpu;
     unsigned int cpu = v->processor, new_cpu;
     cpumask_t *cpus = cpupool_domain_cpumask(v->domain);
@@ -335,7 +337,7 @@ static unsigned int pick_cpu(struct null_private *prv, struct vcpu *v)
         __trace_var(TRC_SNULL_PICKED_CPU, 1, sizeof(d), &d);
     }
 
-    return new_cpu;
+    return per_cpu(sched_res, new_cpu);
 }
 
 static void vcpu_assign(struct null_private *prv, struct vcpu *v,
@@ -429,8 +431,8 @@ static void null_item_insert(const struct scheduler *ops,
 
     lock = vcpu_schedule_lock_irq(v);
  retry:
-    cpu = v->processor = pick_cpu(prv, v);
-    item->res = per_cpu(sched_res, cpu);
+    item->res = pick_res(prv, item);
+    cpu = v->processor = item->res->processor;
 
     spin_unlock(lock);
 
@@ -586,11 +588,11 @@ static void null_item_sleep(const struct scheduler *ops,
     SCHED_STAT_CRANK(vcpu_sleep);
 }
 
-static int null_cpu_pick(const struct scheduler *ops, struct sched_item *item)
+static struct sched_resource *
+null_res_pick(const struct scheduler *ops, struct sched_item *item)
 {
-    struct vcpu *v = item->vcpu;
-    ASSERT(!is_idle_vcpu(v));
-    return pick_cpu(null_priv(ops), v);
+    ASSERT(!is_idle_vcpu(item->vcpu));
+    return pick_res(null_priv(ops), item);
 }
 
 static void null_item_migrate(const struct scheduler *ops,
@@ -909,7 +911,7 @@ const struct scheduler sched_null_def = {
     .wake           = null_item_wake,
     .sleep          = null_item_sleep,
 
-    .pick_cpu       = null_cpu_pick,
+    .pick_resource  = null_res_pick,
     .migrate        = null_item_migrate,
     .do_schedule    = null_schedule,
 
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 44b86fc08d..2bd4637592 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -632,12 +632,12 @@ replq_reinsert(const struct scheduler *ops, struct rt_vcpu *svc)
 }
 
 /*
- * Pick a valid CPU for the vcpu vc
- * Valid CPU of a vcpu is intesection of vcpu's affinity
- * and available cpus
+ * Pick a valid resource for the vcpu vc
+ * Valid resource of a vcpu is intersection of vcpu's affinity
+ * and available resources
 */
-static int
-rt_cpu_pick(const struct scheduler *ops, struct sched_item *item)
+static struct sched_resource *
+rt_res_pick(const struct scheduler *ops, struct sched_item *item)
 {
     struct vcpu *vc = item->vcpu;
     cpumask_t cpus;
@@ -652,7 +652,7 @@ rt_cpu_pick(const struct scheduler *ops, struct sched_item *item)
             : cpumask_cycle(vc->processor, &cpus);
     ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
 
-    return cpu;
+    return per_cpu(sched_res, cpu);
 }
 
 /*
@@ -901,8 +901,8 @@ rt_item_insert(const struct scheduler *ops, struct sched_item *item)
     BUG_ON( is_idle_vcpu(vc) );
 
     /* This is safe because vc isn't yet being scheduled */
-    vc->processor = rt_cpu_pick(ops, item);
-    item->res = per_cpu(sched_res, vc->processor);
+    item->res = rt_res_pick(ops, item);
+    vc->processor = item->res->processor;
 
     lock = vcpu_schedule_lock_irq(vc);
 
@@ -1571,7 +1571,7 @@ static const struct scheduler sched_rtds_def = {
 
     .adjust         = rt_dom_cntl,
 
-    .pick_cpu       = rt_cpu_pick,
+    .pick_resource  = rt_res_pick,
     .do_schedule    = rt_schedule,
     .sleep          = rt_item_sleep,
     .wake           = rt_item_wake,
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index db297f6144..62490454ea 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -697,7 +697,8 @@ static void vcpu_migrate_finish(struct vcpu *v)
             break;
 
         /* Select a new CPU. */
-        new_cpu = SCHED_OP(vcpu_scheduler(v), pick_cpu, v->sched_item);
+        new_cpu = SCHED_OP(vcpu_scheduler(v), pick_resource,
+                           v->sched_item)->processor;
         if ( (new_lock == per_cpu(schedule_data, new_cpu).schedule_lock) &&
              cpumask_test_cpu(new_cpu, v->domain->cpupool->cpu_valid) )
             break;
@@ -803,8 +804,9 @@ void restore_vcpu_affinity(struct domain *d)
         v->sched_item->res = per_cpu(sched_res, v->processor);
 
         lock = vcpu_schedule_lock_irq(v);
-        v->processor = SCHED_OP(vcpu_scheduler(v), pick_cpu, v->sched_item);
-        v->sched_item->res = per_cpu(sched_res, v->processor);
+        v->sched_item->res = SCHED_OP(vcpu_scheduler(v), pick_resource,
+                                      v->sched_item);
+        v->processor = v->sched_item->res->processor;
         spin_unlock_irq(lock);
 
         if ( old_cpu != v->processor )
diff --git a/xen/include/xen/perfc_defn.h b/xen/include/xen/perfc_defn.h
index ef6f86b91e..1ad4384080 100644
--- a/xen/include/xen/perfc_defn.h
+++ b/xen/include/xen/perfc_defn.h
@@ -69,7 +69,7 @@ PERFCOUNTER(migrate_on_runq,        "csched2: migrate_on_runq")
 PERFCOUNTER(migrate_no_runq,        "csched2: migrate_no_runq")
 PERFCOUNTER(runtime_min_timer,      "csched2: runtime_min_timer")
 PERFCOUNTER(runtime_max_timer,      "csched2: runtime_max_timer")
-PERFCOUNTER(pick_cpu,               "csched2: pick_cpu")
+PERFCOUNTER(pick_resource,          "csched2: pick_resource")
 PERFCOUNTER(need_fallback_cpu,      "csched2: need_fallback_cpu")
 PERFCOUNTER(migrated,               "csched2: migrated")
 PERFCOUNTER(migrate_resisted,       "csched2: migrate_resisted")
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 43235951a3..10a97a5dc2 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -193,8 +193,8 @@ struct scheduler {
     struct task_slice (*do_schedule) (const struct scheduler *, s_time_t,
                                       bool_t tasklet_work_scheduled);
 
-    int          (*pick_cpu)       (const struct scheduler *,
-                                    struct sched_item *);
+    struct sched_resource * (*pick_resource) (const struct scheduler *,
+                                              struct sched_item *);
     void         (*migrate)        (const struct scheduler *,
                                     struct sched_item *, unsigned int);
     int          (*adjust)         (const struct scheduler *, struct domain *,
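
The conversion pattern shared by all five schedulers above, shown as a
minimal standalone sketch (illustration only, not part of the patch;
compute_cpu() is a hypothetical placeholder for each scheduler's
internal selection logic):

    static struct sched_resource *
    example_pick_resource(const struct scheduler *ops,
                          struct sched_item *item)
    {
        /* The scheduler still computes a pcpu number internally... */
        unsigned int cpu = compute_cpu(ops, item);    /* hypothetical */

        /* ...and hands back the per-cpu scheduler resource instead. */
        return per_cpu(sched_res, cpu);
    }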