Message ID: 148664874799.595.17524826454909514715.stgit@Solace.fritz.box (mailing list archive)
State: New, archived
On Thu, 2017-02-09 at 14:59 +0100, Dario Faggioli wrote:
> In fact, whether or not a pCPU has been tickled, and is
> therefore about to re-schedule, is something we look at
> and base decisions on in various places.
>
> So, let's make sure that we do that based on accurate
> information.
>
> While there, also tweak smt_idle_mask_clear() a little bit
> (used for implementing SMT support), so that it only alters
> the relevant cpumask when there is an actual need for this.
> (This is only for reduced overhead; behavior remains the
> same.)
>
So, while working on other things with this series applied, I noticed
some strange behavior, for which it turned out this patch is
responsible.

More specifically...

> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index addee7b..bfb4891 100644
> @@ -1289,9 +1300,8 @@ runq_tickle(const struct scheduler *ops, struct csched2_vcpu *new, s_time_t now)
>                          sizeof(d),
>                          (unsigned char *)&d);
>          }
> -    __cpumask_set_cpu(ipid, &rqd->tickled);
> -    smt_idle_mask_clear(ipid, &rqd->smt_idle);
> -    cpu_raise_softirq(ipid, SCHEDULE_SOFTIRQ);
> +
> +    tickle_cpu(cpu, rqd);
>
... this, quite obviously, wants to be:

    tickle_cpu(ipid, rqd);

I'll fix this in v2.

Regards,
Dario
On Thu, Feb 9, 2017 at 11:48 PM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> On Thu, 2017-02-09 at 14:59 +0100, Dario Faggioli wrote:
>> In fact, whether or not a pCPU has been tickled, and is
>> therefore about to re-schedule, is something we look at
>> and base decisions on in various places.
>>
>> So, let's make sure that we do that based on accurate
>> information.
>>
>> While there, also tweak smt_idle_mask_clear() a little bit
>> (used for implementing SMT support), so that it only alters
>> the relevant cpumask when there is an actual need for this.
>> (This is only for reduced overhead; behavior remains the
>> same.)
>>
> So, while working on other things with this series applied, I noticed
> some strange behavior, for which it turned out this patch is
> responsible.
>
> More specifically...
>
>> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
>> index addee7b..bfb4891 100644
>> @@ -1289,9 +1300,8 @@ runq_tickle(const struct scheduler *ops, struct csched2_vcpu *new, s_time_t now)
>>                          sizeof(d),
>>                          (unsigned char *)&d);
>>          }
>> -    __cpumask_set_cpu(ipid, &rqd->tickled);
>> -    smt_idle_mask_clear(ipid, &rqd->smt_idle);
>> -    cpu_raise_softirq(ipid, SCHEDULE_SOFTIRQ);
>> +
>> +    tickle_cpu(cpu, rqd);
>>
> ... this, quite obviously, wants to be:
>
>     tickle_cpu(ipid, rqd);
>
> I'll fix this in v2.

You mean v3? :-)

You can add:

Reviewed-by: George Dunlap <george.dunlap@citrix.com>
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index addee7b..bfb4891 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -470,12 +470,15 @@ void smt_idle_mask_set(unsigned int cpu, const cpumask_t *idlers,
 }
 
 /*
- * Clear the bits of all the siblings of cpu from mask.
+ * Clear the bits of all the siblings of cpu from mask (if necessary).
  */
 static inline
 void smt_idle_mask_clear(unsigned int cpu, cpumask_t *mask)
 {
-    cpumask_andnot(mask, mask, per_cpu(cpu_sibling_mask, cpu));
+    const cpumask_t *cpu_siblings = per_cpu(cpu_sibling_mask, cpu);
+
+    if ( cpumask_subset(cpu_siblings, mask) )
+        cpumask_andnot(mask, mask, per_cpu(cpu_sibling_mask, cpu));
 }
 
 /*
@@ -1122,6 +1125,14 @@ static inline void runq_remove(struct csched2_vcpu *svc)
 void burn_credits(struct csched2_runqueue_data *rqd,
                   struct csched2_vcpu *, s_time_t);
 
+static inline void
+tickle_cpu(unsigned int cpu, struct csched2_runqueue_data *rqd)
+{
+    __cpumask_set_cpu(cpu, &rqd->tickled);
+    smt_idle_mask_clear(cpu, &rqd->smt_idle);
+    cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
+}
+
 /*
  * Check what processor it is best to 'wake', for picking up a vcpu that has
  * just been put (back) in the runqueue. Logic is as follows:
@@ -1289,9 +1300,8 @@ runq_tickle(const struct scheduler *ops, struct csched2_vcpu *new, s_time_t now)
                         sizeof(d),
                         (unsigned char *)&d);
         }
-    __cpumask_set_cpu(ipid, &rqd->tickled);
-    smt_idle_mask_clear(ipid, &rqd->smt_idle);
-    cpu_raise_softirq(ipid, SCHEDULE_SOFTIRQ);
+
+    tickle_cpu(cpu, rqd);
 
     if ( unlikely(new->tickled_cpu != -1) )
         SCHED_STAT_CRANK(tickled_cpu_overwritten);
@@ -1493,7 +1503,9 @@ csched2_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
     SCHED_STAT_CRANK(vcpu_sleep);
 
     if ( curr_on_cpu(vc->processor) == vc )
-        cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
+    {
+        tickle_cpu(vc->processor, svc->rqd);
+    }
     else if ( vcpu_on_runq(svc) )
     {
         ASSERT(svc->rqd == c2rqd(ops, vc->processor));
@@ -1816,8 +1828,8 @@ static void migrate(const struct scheduler *ops,
         svc->migrate_rqd = trqd;
         __set_bit(_VPF_migrating, &svc->vcpu->pause_flags);
         __set_bit(__CSFLAG_runq_migrate_request, &svc->flags);
-        cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
         SCHED_STAT_CRANK(migrate_requested);
+        tickle_cpu(cpu, svc->rqd);
     }
     else
     {
In fact, whether or not a pCPU has been tickled, and is
therefore about to re-schedule, is something we look at
and base decisions on in various places.

So, let's make sure that we do that based on accurate
information.

While there, also tweak smt_idle_mask_clear() a little bit
(used for implementing SMT support), so that it only alters
the relevant cpumask when there is an actual need for this.
(This is only for reduced overhead; behavior remains the
same.)

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Anshul Makkar <anshul.makkar@citrix.com>
---
 xen/common/sched_credit2.c |   26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)