[v3,4/7] xen: credit2: always mark a tickled pCPU as... tickled!

Message ID 148828274464.26730.9460898039870001654.stgit@Solace.fritz.box (mailing list archive)
State New, archived

Commit Message

Dario Faggioli Feb. 28, 2017, 11:52 a.m. UTC
In fact, whether or not a pCPU has been tickled, and is
therefore about to re-schedule, is something we look at
and base decisions on in various places.

So, let's make sure that we base those decisions on
accurate information.

While there, also tweak smt_idle_mask_clear() a little
bit (used for implementing SMT support), so that it only
alters the relevant cpumask when there is an actual need
to. (This is purely an overhead reduction; behavior
remains the same.)
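
For reference, the tweak boils down to guarding the mask
update (the exact change is in the patch below). Bits of
the smt_idle mask are only ever set for fully idle cores,
so either all of a cpu's siblings are in the mask or none
of them are; in the latter case the andnot would just be
a pointless write:

    /* Skip the update if no sibling bit is set anyway. */
    if ( cpumask_subset(cpu_siblings, mask) )
        cpumask_andnot(mask, mask, cpu_siblings);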

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reviewed-by: George Dunlap <george.dunlap@eu.citrix.com>
---
Cc: Anshul Makkar <anshul.makkar@citrix.com>
---
Changes from v2:
* fixed a bug (which I found myself) in runq_tickle().
---
 xen/common/sched_credit2.c |   26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

Comments

Jan Beulich March 1, 2017, 9:35 a.m. UTC | #1
>>> On 28.02.17 at 12:52, <dario.faggioli@citrix.com> wrote:
> In fact, whether or not a pCPU has been tickled, and is
> therefore about to re-schedule, is something we look at
> and base decisions on in various places.
> 
> So, let's make sure that we base those decisions on
> accurate information.
> 
> While there, also tweak smt_idle_mask_clear() a little
> bit (used for implementing SMT support), so that it only
> alters the relevant cpumask when there is an actual need
> to. (This is purely an overhead reduction; behavior
> remains the same.)
> 
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> Reviewed-by: George Dunlap <george.dunlap@eu.citrix.com>

I would have wanted to commit this (and then also patch 5), but it
depends on earlier patches (possibly just trivially, but I didn't want
to chance it). Considering this is (aiui) a bug fix, it would of course
have been nice if it had been at the start of the series, to ease
backporting (if desired or even necessary).

Jan
Dario Faggioli March 1, 2017, 1:07 p.m. UTC | #2
On Wed, 2017-03-01 at 02:35 -0700, Jan Beulich wrote:
> I would have wanted to commit this (and then also patch 5), but it
> depends on earlier patches (possibly just trivially, but I didn't
> want to chance it). Considering this is (aiui) a bug fix, it would
> of course have been nice if it had been at the start of the series,
> to ease backporting (if desired or even necessary).
>
I think you are right. I usually do put bugfixes at the top but, for
some reason, I wasn't considering these two patches as backport
candidates.

But indeed they're fixes, and indeed they qualify.

I'll resend the series with this one and patch 5 as the first two
patches.

George (and everyone else): please ignore this, and look directly at
v4.

Sorry for the inconvenience.

Regards,
Dario

Patch

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index b0ec5f8..feb0f83 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -523,12 +523,15 @@  void smt_idle_mask_set(unsigned int cpu, const cpumask_t *idlers,
 }
 
 /*
- * Clear the bits of all the siblings of cpu from mask.
+ * Clear the bits of all the siblings of cpu from mask (if necessary).
  */
 static inline
 void smt_idle_mask_clear(unsigned int cpu, cpumask_t *mask)
 {
-    cpumask_andnot(mask, mask, per_cpu(cpu_sibling_mask, cpu));
+    const cpumask_t *cpu_siblings = per_cpu(cpu_sibling_mask, cpu);
+
+    if ( cpumask_subset(cpu_siblings, mask) )
+        cpumask_andnot(mask, mask, cpu_siblings);
 }
 
 /*
@@ -1118,6 +1121,14 @@  static inline void runq_remove(struct csched2_vcpu *svc)
 
 void burn_credits(struct csched2_runqueue_data *rqd, struct csched2_vcpu *, s_time_t);
 
+static inline void
+tickle_cpu(unsigned int cpu, struct csched2_runqueue_data *rqd)
+{
+    __cpumask_set_cpu(cpu, &rqd->tickled);
+    smt_idle_mask_clear(cpu, &rqd->smt_idle);
+    cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
+}
+
 /*
  * Check what processor it is best to 'wake', for picking up a vcpu that has
  * just been put (back) in the runqueue. Logic is as follows:
@@ -1285,9 +1296,8 @@  runq_tickle(const struct scheduler *ops, struct csched2_vcpu *new, s_time_t now)
                     sizeof(d),
                     (unsigned char *)&d);
     }
-    __cpumask_set_cpu(ipid, &rqd->tickled);
-    smt_idle_mask_clear(ipid, &rqd->smt_idle);
-    cpu_raise_softirq(ipid, SCHEDULE_SOFTIRQ);
+
+    tickle_cpu(ipid, rqd);
 
     if ( unlikely(new->tickled_cpu != -1) )
         SCHED_STAT_CRANK(tickled_cpu_overwritten);
@@ -1489,7 +1499,9 @@  csched2_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
     SCHED_STAT_CRANK(vcpu_sleep);
 
     if ( curr_on_cpu(vc->processor) == vc )
-        cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
+    {
+        tickle_cpu(vc->processor, svc->rqd);
+    }
     else if ( vcpu_on_runq(svc) )
     {
         ASSERT(svc->rqd == c2rqd(ops, vc->processor));
@@ -1812,8 +1824,8 @@  static void migrate(const struct scheduler *ops,
         svc->migrate_rqd = trqd;
         __set_bit(_VPF_migrating, &svc->vcpu->pause_flags);
         __set_bit(__CSFLAG_runq_migrate_request, &svc->flags);
-        cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
         SCHED_STAT_CRANK(migrate_requested);
+        tickle_cpu(cpu, svc->rqd);
     }
     else
     {