[2/6] xen: credit: (micro) optimize csched_runq_steal().

Message ID 148845109247.23452.8451353979591593272.stgit@Solace.fritz.box (mailing list archive)
State New, archived

Commit Message

Dario Faggioli March 2, 2017, 10:38 a.m. UTC
Checking whether or not a vCPU can be 'stolen'
from a peer pCPU's runqueue is relatively cheap.

Therefore, let's do that as early as possible,
avoiding potentially useless complex checks and
cpumask manipulations.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
---
 xen/common/sched_credit.c |   17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)
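
To illustrate the principle (a minimal, self-contained C sketch with toy
types and names, not the actual Xen code): test the cheap flag before
doing any of the expensive cpumask work, so vCPUs that can never be
stolen are rejected with a single read.

    #include <stdbool.h>

    /* Toy stand-ins for the real structures: is_running is a single
     * flag read, while the affinity check stands in for the cpumask
     * copy/AND done by csched_balance_cpumask() + cpumask_test_cpu(). */
    struct toy_vcpu {
        bool is_running;
        unsigned long affinity;
    };

    static bool expensive_affinity_check(const struct toy_vcpu *v, int cpu)
    {
        /* Imagine a multi-word cpumask being copied and ANDed here. */
        return (v->affinity >> cpu) & 1;
    }

    static bool can_steal(const struct toy_vcpu *v, int cpu)
    {
        /* Cheap test first: a vCPU still running on its pCPU can never
         * be stolen, so don't bother with the cpumask manipulation. */
        if ( v->is_running )
            return false;
        return expensive_affinity_check(v, cpu);
    }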

Comments

Anshul Makkar March 3, 2017, 9:48 a.m. UTC | #1
On 02/03/17 10:38, Dario Faggioli wrote:
> Checking whether or not a vCPU can be 'stolen'
> from a peer pCPU's runqueue is relatively cheap.
>
> Therefore, let's do that as early as possible,
> avoiding potentially useless complex checks and
> cpumask manipulations.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> ---
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> ---
>   xen/common/sched_credit.c |   17 +++++++++--------
>   1 file changed, 9 insertions(+), 8 deletions(-)
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> index 63a8675..2b13e99 100644
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -708,12 +708,10 @@ static inline int
>   __csched_vcpu_is_migrateable(struct vcpu *vc, int dest_cpu, cpumask_t *mask)
>   {
>       /*
> -     * Don't pick up work that's in the peer's scheduling tail or hot on
> -     * peer PCPU. Only pick up work that prefers and/or is allowed to run
> -     * on our CPU.
> +     * Don't pick up work that's or hot on peer PCPU, or that can't (or
Not clear.
> +     * would prefer not to) run on cpu.
>        */
> -    return !vc->is_running &&
> -           !__csched_vcpu_is_cache_hot(vc) &&
> +    return !__csched_vcpu_is_cache_hot(vc) &&
>              cpumask_test_cpu(dest_cpu, mask);
!vc->is_running doesn't ease the complexity and doesn't save much in
CPU cycles. In fact, I think (!vc->is_running) makes the intention to
check for migration much clearer.

Yeah, apart from the above reasons, it's safe to remove this check from here.
>   }
>   
> @@ -1622,7 +1620,9 @@ csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
>           BUG_ON( is_idle_vcpu(vc) );
>   
>           /*
> -         * If the vcpu has no useful soft affinity, skip this vcpu.
> +         * If the vcpu is still in peer_cpu's scheduling tail, or if it
> +         * has no useful soft affinity, skip it.
> +         *
>            * In fact, what we want is to check if we have any "soft-affine
>            * work" to steal, before starting to look at "hard-affine work".
>            *
> @@ -1633,8 +1633,9 @@ csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
>            * vCPUs with useful soft affinities in some sort of bitmap
>            * or counter.
>            */
> -        if ( balance_step == CSCHED_BALANCE_SOFT_AFFINITY
> -             && !__vcpu_has_soft_affinity(vc, vc->cpu_hard_affinity) )
> +        if ( vc->is_running ||
> +             (balance_step == CSCHED_BALANCE_SOFT_AFFINITY
> +              && !__vcpu_has_soft_affinity(vc, vc->cpu_hard_affinity)) )
>               continue;
>   
>           csched_balance_cpumask(vc, balance_step, cpumask_scratch);
>
Dario Faggioli March 3, 2017, 1:53 p.m. UTC | #2
On Fri, 2017-03-03 at 09:48 +0000, anshul makkar wrote:
> On 02/03/17 10:38, Dario Faggioli wrote:
> > --- a/xen/common/sched_credit.c
> > +++ b/xen/common/sched_credit.c
> > @@ -708,12 +708,10 @@ static inline int
> >   __csched_vcpu_is_migrateable(struct vcpu *vc, int dest_cpu,
> > cpumask_t *mask)
> >   {
> >       /*
> > -     * Don't pick up work that's in the peer's scheduling tail or
> > hot on
> > -     * peer PCPU. Only pick up work that prefers and/or is allowed
> > to run
> > -     * on our CPU.
> > +     * Don't pick up work that's or hot on peer PCPU, or that
> > can't (or
> Not clear.
>
Well, there's actually a typo (redundant 'or'). Good catch. :-)
> > 
> > +     * would prefer not to) run on cpu.
> >        */
> > -    return !vc->is_running &&
> > -           !__csched_vcpu_is_cache_hot(vc) &&
> > +    return !__csched_vcpu_is_cache_hot(vc) &&
> >              cpumask_test_cpu(dest_cpu, mask);
> !vc->is_running doesn't ease the complexity and doesn't save much in
> CPU cycles. In fact, I think (!vc->is_running) makes the intention to
> check for migration much clearer.
> 
But the point is not saving the overhead of a !vc->is_running check
here, it is actually to pull it out of this function and check it
earlier. And that's ok because the value won't change in between, and
it's a good thing because what we save is a call to

  __vcpu_has_soft_affinity()

and, potentially, to

  csched_balance_cpumask()

i.e., more specifically...
> > @@ -1633,8 +1633,9 @@ csched_runq_steal(int peer_cpu, int cpu, int
> > pri, int balance_step)
> >            * vCPUs with useful soft affinities in some sort of
> > bitmap
> >            * or counter.
> >            */
> > -        if ( balance_step == CSCHED_BALANCE_SOFT_AFFINITY
> > -             && !__vcpu_has_soft_affinity(vc, vc-
> > >cpu_hard_affinity) )
> > +        if ( vc->is_running ||
> > +             (balance_step == CSCHED_BALANCE_SOFT_AFFINITY
> > +              && !__vcpu_has_soft_affinity(vc, vc-
> > >cpu_hard_affinity)) )
> >               continue;
> >   
> >           csched_balance_cpumask(vc, balance_step,
> > cpumask_scratch);
> > 
...these ones here.
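
Spelling the two orderings out, per candidate vCPU on peer_cpu's
runqueue (a sketch of the control flow, based on the quoted hunks):

    /* Old ordering:
     *   1. balance_step == SOFT_AFFINITY? -> __vcpu_has_soft_affinity()
     *   2. csched_balance_cpumask(vc, balance_step, cpumask_scratch)
     *   3. __csched_vcpu_is_migrateable(): !vc->is_running &&
     *      !cache_hot && cpumask_test_cpu()
     *
     * New ordering:
     *   1. vc->is_running? -> continue      (one flag read)
     *   2. balance_step == SOFT_AFFINITY? -> __vcpu_has_soft_affinity()
     *   3. csched_balance_cpumask(vc, balance_step, cpumask_scratch)
     *   4. __csched_vcpu_is_migrateable(): !cache_hot && cpumask_test_cpu()
     *
     * So, for a vCPU that is still in the peer's scheduling tail, steps
     * 1-2 of the old ordering were pure wasted work. */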

I agree that the check was a good fit for that function, but --with the
updated comments-- I don't think it's too terrible to have it outside.

Or were you suggesting to have it in _both_ places? If that's the case,
no... I agree it's cheap, but that would look confusing to me (I
totally see myself, in 3 months, sending a patch to remove the
redundant is_running check! :-P).

Thanks and Regards,
Dario

Patch

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 63a8675..2b13e99 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -708,12 +708,10 @@  static inline int
 __csched_vcpu_is_migrateable(struct vcpu *vc, int dest_cpu, cpumask_t *mask)
 {
     /*
-     * Don't pick up work that's in the peer's scheduling tail or hot on
-     * peer PCPU. Only pick up work that prefers and/or is allowed to run
-     * on our CPU.
+     * Don't pick up work that's or hot on peer PCPU, or that can't (or
+     * would prefer not to) run on cpu.
      */
-    return !vc->is_running &&
-           !__csched_vcpu_is_cache_hot(vc) &&
+    return !__csched_vcpu_is_cache_hot(vc) &&
            cpumask_test_cpu(dest_cpu, mask);
 }
 
@@ -1622,7 +1620,9 @@  csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
         BUG_ON( is_idle_vcpu(vc) );
 
         /*
-         * If the vcpu has no useful soft affinity, skip this vcpu.
+         * If the vcpu is still in peer_cpu's scheduling tail, or if it
+         * has no useful soft affinity, skip it.
+         *
          * In fact, what we want is to check if we have any "soft-affine
          * work" to steal, before starting to look at "hard-affine work".
          *
@@ -1633,8 +1633,9 @@  csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
          * vCPUs with useful soft affinities in some sort of bitmap
          * or counter.
          */
-        if ( balance_step == CSCHED_BALANCE_SOFT_AFFINITY
-             && !__vcpu_has_soft_affinity(vc, vc->cpu_hard_affinity) )
+        if ( vc->is_running ||
+             (balance_step == CSCHED_BALANCE_SOFT_AFFINITY
+              && !__vcpu_has_soft_affinity(vc, vc->cpu_hard_affinity)) )
             continue;
 
         csched_balance_cpumask(vc, balance_step, cpumask_scratch);