
[03/14] xen: sched: fi position of TRC_SCHED_DOM_{ADD, REM}

Message ID 20160205183349.4543.23589.stgit@Solace.station (mailing list archive)
State New, archived

Commit Message

Dario Faggioli Feb. 5, 2016, 6:33 p.m. UTC
so that they actually live in the functions that
do the scheduling related domain initialization and
destruction.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
---
 xen/common/domain.c   |    1 -
 xen/common/schedule.c |    4 ++--
 2 files changed, 2 insertions(+), 3 deletions(-)

Comments

Konrad Rzeszutek Wilk Feb. 15, 2016, 4:22 p.m. UTC | #1
On Fri, Feb 05, 2016 at 07:33:50PM +0100, Dario Faggioli wrote:

On the title you have 'fi', but I think you meant 'fix'.

> so that they actually live in the functions that
> do the scheduling related domain initialization and
> destruction.
> 
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

.. would it make sense to have an overall high-level
'DOM_ADD' and 'DOM_REM' trace ?

Especially as it would be useful for figuring out how long
a domain destruction takes (based on the interval from the
initial trace to, say, this TRC_SCHED_DOM_REM)?
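
(Just to illustrate the pairing idea: nothing below is Xen code, and the
event IDs, struct and helper names are invented. With a coarse
"destruction started" event to pair against TRC_SCHED_DOM_REM, the
destruction latency is simply the timestamp delta between the two trace
records; here clock_gettime() stands in for the trace buffer timestamps.)

  #include <stdio.h>
  #include <stdint.h>
  #include <time.h>

  #define EVT_DOM_DESTROY_START  0x1001u   /* hypothetical event ID */
  #define EVT_DOM_DESTROY_END    0x1002u   /* hypothetical event ID */

  struct trace_rec {
      uint32_t event;
      uint32_t domid;
      uint64_t ns;                          /* timestamp of the record */
  };

  /* Build a trace record, timestamping it on the spot. */
  static struct trace_rec record(uint32_t event, uint32_t domid)
  {
      struct timespec ts;
      clock_gettime(CLOCK_MONOTONIC, &ts);
      return (struct trace_rec){
          .event = event,
          .domid = domid,
          .ns    = (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec,
      };
  }

  int main(void)
  {
      struct trace_rec start = record(EVT_DOM_DESTROY_START, 7);
      /* ... the actual domain teardown work would happen here ... */
      struct trace_rec end   = record(EVT_DOM_DESTROY_END, 7);

      printf("domain %u destroyed in %llu ns\n", (unsigned)end.domid,
             (unsigned long long)(end.ns - start.ns));
      return 0;
  }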

> ---
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> ---
>  xen/common/domain.c   |    1 -
>  xen/common/schedule.c |    4 ++--
>  2 files changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 425767c..ddc7484 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -867,7 +867,6 @@ void domain_destroy(struct domain *d)
>      cpupool_rm_domain(d);
>  
>      /* Delete from task list and task hashtable. */
> -    TRACE_1D(TRC_SCHED_DOM_REM, d->domain_id);
>      spin_lock(&domlist_update_lock);
>      pd = &domain_list;
>      while ( *pd != d ) 
> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
> index c87922f..27695e3 100644
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -241,8 +241,6 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
>      if ( v->sched_priv == NULL )
>          return 1;
>  
> -    TRACE_2D(TRC_SCHED_DOM_ADD, v->domain->domain_id, v->vcpu_id);
> -
>      /* Idle VCPUs are scheduled immediately, so don't put them in runqueue. */
>      if ( is_idle_domain(d) )
>      {
> @@ -369,12 +367,14 @@ void sched_destroy_vcpu(struct vcpu *v)
>  int sched_init_domain(struct domain *d)
>  {
>      SCHED_STAT_CRANK(dom_init);
> +    TRACE_1D(TRC_SCHED_DOM_ADD, d->domain_id);
>      return SCHED_OP(DOM2OP(d), init_domain, d);
>  }
>  
>  void sched_destroy_domain(struct domain *d)
>  {
>      SCHED_STAT_CRANK(dom_destroy);
> +    TRACE_1D(TRC_SCHED_DOM_REM, d->domain_id);
>      SCHED_OP(DOM2OP(d), destroy_domain, d);
>  }
>  
> 
> 
Dario Faggioli Feb. 15, 2016, 4:37 p.m. UTC | #2
On Mon, 2016-02-15 at 11:22 -0500, Konrad Rzeszutek Wilk wrote:
> On Fri, Feb 05, 2016 at 07:33:50PM +0100, Dario Faggioli wrote:
> 
> On the title you have 'fi', but I think you meant 'fix'.
> 
Indeed, sorry for that.

> > so that they actually live in the functions that
> > do the scheduling related domain initialization and
> > destruction.
> > 
> > Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> 
> .. would it make sense to have an overall high-level
> 'DOM_ADD' and 'DOM_REM' trace ?
> 
> Especially as it would be useful for figuring out how long
> a domain destruction takes (based on the interval from the
> initial trace to, say, this TRC_SCHED_DOM_REM)?
> 
I think it makes sense, and I can either do this in this patch (and
resend) or as a follow up.

I guess the only possible concern is that we may introduce too much
tracing, to the point that it hurts performance, even when not enabled.
(I was starting to think about this anyway, as I've got other series,
either posted or in my queues, that adds a few more tracepoints,
because they're so damn useful! ;-P)

In that case, I guess we could think of doing something similar to what
ftrace does in Linux --for disabling and enabling tracing and
tracepoints-- which should be more lightweight than what we do.
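
(As a rough, self-contained sketch of the cost in question -- the macro,
flag and function names below are made up and this is not the actual Xen
TRACE implementation: a flag-guarded tracepoint costs one load plus one
well-predicted branch per call site when disabled, whereas an
ftrace-style scheme patches each disabled call site into a NOP at
runtime, so not even that check remains.)

  #include <stdio.h>
  #include <stdint.h>

  static int trace_enabled;                 /* hypothetical global switch */

  static void emit_trace_record(uint32_t id, uint32_t data)
  {
      /* Stand-in for writing a record into a shared trace buffer. */
      printf("event %#x data %u\n", (unsigned)id, (unsigned)data);
  }

  /*
   * Flag-guarded tracepoint: when tracing is off, each call site pays
   * one load and one (well-predicted) conditional branch.  An
   * ftrace-style implementation would instead rewrite the call site to
   * a NOP while tracing is disabled.
   */
  #define TRACE_EVENT(id, data)                                   \
      do {                                                        \
          if ( __builtin_expect(trace_enabled, 0) )               \
              emit_trace_record((id), (data));                    \
      } while ( 0 )

  int main(void)
  {
      TRACE_EVENT(0x1, 42);    /* tracing disabled: record is dropped */
      trace_enabled = 1;
      TRACE_EVENT(0x1, 42);    /* tracing enabled: record is emitted */
      return 0;
  }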

In any case, none of this applies to the "DOM_ADD" / "DOM_REM" tracing
events. :-)

Thanks and Regards,
Dario
Konrad Rzeszutek Wilk Feb. 15, 2016, 4:41 p.m. UTC | #3
On Mon, Feb 15, 2016 at 05:37:05PM +0100, Dario Faggioli wrote:
> On Mon, 2016-02-15 at 11:22 -0500, Konrad Rzeszutek Wilk wrote:
> > On Fri, Feb 05, 2016 at 07:33:50PM +0100, Dario Faggioli wrote:
> > 
> > On the title you have 'fi', but I think you meant 'fix'.
> > 
> Indeed, sorry for that.
> 
> > > so that they actually live in the functions that
> > > do the scheduling related domain initialization and
> > > destruction.
> > > 
> > > Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> > 
> > .. would it make sense to have an overall high-level
> > 'DOM_ADD' and 'DOM_REM' trace ?
> > 
> > Especially as it would be useful for figuring out how long
> > a domain destruction takes (based on the interval from the
> > initial trace to, say, this TRC_SCHED_DOM_REM)?
> > 
> I think it makes sense, and I can either do this in this patch (and
> resend) or as a follow up.
> 
> I guess the only possible concern is that we may introduce too much
> tracing, to the point that it hurts performance, even when not enabled.

.. How often do we do these operations?

> (I was starting to think about this anyway, as I've got other series,
> either posted or in my queues, that adds a few more tracepoints,
> because they're so damn useful! ;-P)
> 
> In that case, I guess we could think of doing something similar to what
> ftrace does in Linux --for disabling and enabling tracing and
> tracepoints-- which should be more lightweight than what we do.
> 
> In any case, none of this applies to the "DOM_ADD" / "DOM_REM" tracing
> events. :-)

Correct. This patch does fix the misuse of the trace points.
> 
> Thanks and Regards,
> Dario
> 
> -- 
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
>

Patch

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 425767c..ddc7484 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -867,7 +867,6 @@ void domain_destroy(struct domain *d)
     cpupool_rm_domain(d);
 
     /* Delete from task list and task hashtable. */
-    TRACE_1D(TRC_SCHED_DOM_REM, d->domain_id);
     spin_lock(&domlist_update_lock);
     pd = &domain_list;
     while ( *pd != d ) 
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index c87922f..27695e3 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -241,8 +241,6 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
     if ( v->sched_priv == NULL )
         return 1;
 
-    TRACE_2D(TRC_SCHED_DOM_ADD, v->domain->domain_id, v->vcpu_id);
-
     /* Idle VCPUs are scheduled immediately, so don't put them in runqueue. */
     if ( is_idle_domain(d) )
     {
@@ -369,12 +367,14 @@ void sched_destroy_vcpu(struct vcpu *v)
 int sched_init_domain(struct domain *d)
 {
     SCHED_STAT_CRANK(dom_init);
+    TRACE_1D(TRC_SCHED_DOM_ADD, d->domain_id);
     return SCHED_OP(DOM2OP(d), init_domain, d);
 }
 
 void sched_destroy_domain(struct domain *d)
 {
     SCHED_STAT_CRANK(dom_destroy);
+    TRACE_1D(TRC_SCHED_DOM_REM, d->domain_id);
     SCHED_OP(DOM2OP(d), destroy_domain, d);
 }