Message ID | 149146657883.21348.4003508836159742945.stgit@Solace.fritz.box (mailing list archive)
---|---
State | New, archived
On 06/04/17 09:16, Dario Faggioli wrote:
> Checking whether or not a vCPU can be 'stolen'
> from a peer pCPU's runqueue is relatively cheap.
>
> Therefore, let's do that as early as possible,
> avoiding potentially useless complex checks, and
> cpumask manipulations.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> ---
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Anshul Makkar <anshul.makkar@citrix.com>
> ---
> Changes from v1:
>  * fixed a typo in a comment.
> ---
>  xen/common/sched_credit.c | 17 +++++++++--------
>  1 file changed, 9 insertions(+), 8 deletions(-)
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> index 63a8675..5a3f13f 100644
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -708,12 +708,10 @@ static inline int
>  __csched_vcpu_is_migrateable(struct vcpu *vc, int dest_cpu, cpumask_t *mask)
>  {
>      /*
> -     * Don't pick up work that's in the peer's scheduling tail or hot on
> -     * peer PCPU. Only pick up work that prefers and/or is allowed to run
> -     * on our CPU.
> +     * Don't pick up work that's hot on peer PCPU, or that can't (or
> +     * would prefer not to) run on cpu.
>       */
> -    return !vc->is_running &&
> -           !__csched_vcpu_is_cache_hot(vc) &&
> +    return !__csched_vcpu_is_cache_hot(vc) &&
>             cpumask_test_cpu(dest_cpu, mask);

+1 to moving the check earlier; but there's a risk that this function
will be used somewhere else and that the is_running check will be
missed.

Should we maybe add an ASSERT() here (perhaps with a comment that the
check can be added back in if necessary)?

 -George
On Fri, 2017-04-07 at 11:49 +0100, George Dunlap wrote:
> On 06/04/17 09:16, Dario Faggioli wrote:
> > --- a/xen/common/sched_credit.c
> > +++ b/xen/common/sched_credit.c
> > @@ -708,12 +708,10 @@ static inline int
> > __csched_vcpu_is_migrateable(struct vcpu *vc, int dest_cpu, cpumask_t *mask)
> > {
> >     /*
> > -     * Don't pick up work that's in the peer's scheduling tail or hot on
> > -     * peer PCPU. Only pick up work that prefers and/or is allowed to run
> > -     * on our CPU.
> > +     * Don't pick up work that's hot on peer PCPU, or that can't (or
> > +     * would prefer not to) run on cpu.
> >      */
> > -    return !vc->is_running &&
> > -           !__csched_vcpu_is_cache_hot(vc) &&
> > +    return !__csched_vcpu_is_cache_hot(vc) &&
> >            cpumask_test_cpu(dest_cpu, mask);
>
> +1 to moving the check earlier; but there's a risk that this function
> will be used somewhere else and that the is_running check will be
> missed.
>
> Should we maybe add an ASSERT() here (perhaps with a comment that the
> check can be added back in if necessary)?
>
Ok, makes sense. Will do.

Dario
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 63a8675..5a3f13f 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -708,12 +708,10 @@ static inline int
 __csched_vcpu_is_migrateable(struct vcpu *vc, int dest_cpu, cpumask_t *mask)
 {
     /*
-     * Don't pick up work that's in the peer's scheduling tail or hot on
-     * peer PCPU. Only pick up work that prefers and/or is allowed to run
-     * on our CPU.
+     * Don't pick up work that's hot on peer PCPU, or that can't (or
+     * would prefer not to) run on cpu.
      */
-    return !vc->is_running &&
-           !__csched_vcpu_is_cache_hot(vc) &&
+    return !__csched_vcpu_is_cache_hot(vc) &&
            cpumask_test_cpu(dest_cpu, mask);
 }
 
@@ -1622,7 +1620,9 @@ csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
         BUG_ON( is_idle_vcpu(vc) );
 
         /*
-         * If the vcpu has no useful soft affinity, skip this vcpu.
+         * If the vcpu is still in peer_cpu's scheduling tail, or if it
+         * has no useful soft affinity, skip it.
+         *
          * In fact, what we want is to check if we have any "soft-affine
          * work" to steal, before starting to look at "hard-affine work".
          *
@@ -1633,8 +1633,9 @@ csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
          * vCPUs with useful soft affinities in some sort of bitmap
          * or counter.
          */
-        if ( balance_step == CSCHED_BALANCE_SOFT_AFFINITY
-             && !__vcpu_has_soft_affinity(vc, vc->cpu_hard_affinity) )
+        if ( vc->is_running ||
+             (balance_step == CSCHED_BALANCE_SOFT_AFFINITY
+              && !__vcpu_has_soft_affinity(vc, vc->cpu_hard_affinity)) )
             continue;
 
         csched_balance_cpumask(vc, balance_step, cpumask_scratch);
Checking whether or not a vCPU can be 'stolen'
from a peer pCPU's runqueue is relatively cheap.

Therefore, let's do that as early as possible,
avoiding potentially useless complex checks, and
cpumask manipulations.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Anshul Makkar <anshul.makkar@citrix.com>
---
Changes from v1:
 * fixed a typo in a comment.
---
 xen/common/sched_credit.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)