[v4,06/10] x86/hvm: Setup TSC scaling ratio

Message ID 1453067939-9121-7-git-send-email-haozhong.zhang@intel.com (mailing list archive)
State New, archived

Commit Message

Haozhong Zhang Jan. 17, 2016, 9:58 p.m. UTC
This patch adds a tsc_scaling_ratio field to struct hvm_vcpu to record
the TSC scaling ratio, and sets it up when tsc_set_info() is called for
a vcpu, or when a vcpu is restored or reset.

Before applying the TSC scaling ratio to the CPU, we check its validity in
tsc_set_info(). If an invalid ratio is given, we leave the default value in
tsc_scaling_ratio (i.e. ratio = 1) and set up the guest TSC as if TSC
scaling were not used (a small worked example of the ratio format follows
the mode list below):
* For TSC_MODE_FAULT,
  - if a user-specified TSC frequency is given, we set the guest TSC
    frequency to it; otherwise, we set it to the host TSC frequency.
  - if the guest TSC frequency does not equal the host TSC frequency, we
    emulate the guest TSC (i.e. d->arch.vtsc is set to 1). In both cases,
    the guest TSC runs at the guest TSC frequency.
* For TSC_MODE_PVRDTSCP,
  - we set the guest TSC frequency to the host TSC frequency.
  - guest rdtsc is executed natively at the host TSC frequency as
    before.
  - if rdtscp is not available to the guest, it is emulated; otherwise,
    it is executed natively. In both cases, guest rdtscp reads the TSC
    at the host TSC frequency as before.
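
As a rough illustration of the fixed-point ratio format (numbers made up
for this example, not taken from this patch), using the SVM layout where
DEFAULT_TSC_RATIO = 1ULL << 32 represents a ratio of 1:

    gtsc_khz = 1800000, cpu_khz = 2400000
    ratio    = (1800000 << 32) / 2400000
             = 0.75 * 2^32
             = 0xC0000000        /* guest TSC runs at 3/4 of host speed */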

Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
Changes in v4:
 (addressing Boris Ostrovsky's comments)
 * Factor out common code in hvm_validate_tsc_scaling_ratio() and
   hvm_setup_tsc_scaling()
   - replace bool_t hvm_validate_tsc_scaling_ratio(u32 gtsc_khz) with
     u64 hvm_get_tsc_scaling_ratio(u32 gtsc_khz) which returns 0 if TSC 
     scaling is not available or gtsc_khz is bogus.
 (addressing Jan Beulich's comments) 
 * Move adding setup_tsc_scaling in hvm_funcs to patch 9 where it is
   first used.

 xen/arch/x86/hvm/hvm.c            | 35 +++++++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/svm/svm.c        |  6 ++++--
 xen/arch/x86/time.c               | 19 ++++++++++++++++---
 xen/include/asm-x86/hvm/hvm.h     |  3 +++
 xen/include/asm-x86/hvm/svm/svm.h |  3 ---
 xen/include/asm-x86/hvm/vcpu.h    |  2 ++
 6 files changed, 60 insertions(+), 8 deletions(-)

Comments

Jan Beulich Feb. 5, 2016, 1:54 p.m. UTC | #1
>>> On 17.01.16 at 22:58, <haozhong.zhang@intel.com> wrote:
> +u64 hvm_get_tsc_scaling_ratio(u32 gtsc_khz)
> +{
> +    u64 ratio;
> +
> +    if ( !hvm_tsc_scaling_supported )
> +        return 0;
> +
> +    /*
> +     * The multiplication of the first two terms may overflow a 64-bit
> +     * integer, so use mul_u64_u32_div() instead to keep precision.
> +     */
> +    ratio = mul_u64_u32_div(1ULL << hvm_funcs.tsc_scaling_ratio_frac_bits,
> +                            gtsc_khz, cpu_khz);

Is this the only use for this new math64 function? If so, I don't
see the point of adding that function, because (leaving limited
significant bits aside) the above simply is

(gtsc_khz << hvm_funcs.tsc_scaling_ratio_frac_bits) / cpu_khz

which can be had without any multiplication. Personally, if this is
indeed the only use, I'd favor converting the above to inline assembly
here instead of adding that helper function (just like we have a
number of asm()-s in x86/time.c for similar reasons).

> +void hvm_setup_tsc_scaling(struct vcpu *v)
> +{
> +    v->arch.hvm_vcpu.tsc_scaling_ratio =
> +        hvm_get_tsc_scaling_ratio(v->domain->arch.tsc_khz);
> +}

So why again is this per-vCPU setup of per-vCPU state when it
only depends on a per-domain input? If this was per-domain, its
setup could be where it belongs - in arch_hvm_load().

> @@ -5504,6 +5536,9 @@ void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip)
>      hvm_set_segment_register(v, x86_seg_gdtr, &reg);
>      hvm_set_segment_register(v, x86_seg_idtr, &reg);
>  
> +    if ( hvm_tsc_scaling_supported && !d->arch.vtsc )
> +        hvm_setup_tsc_scaling(v);

Could you remind me why this is needed? What state of the guest
would have changed making this necessary? Is this perhaps just
because it's per-vCPU instead of per-domain?

Jan
Haozhong Zhang Feb. 16, 2016, 9:44 a.m. UTC | #2
On 02/05/16 21:54, Jan Beulich wrote:
> >>> On 17.01.16 at 22:58, <haozhong.zhang@intel.com> wrote:
> > +u64 hvm_get_tsc_scaling_ratio(u32 gtsc_khz)
> > +{
> > +    u64 ratio;
> > +
> > +    if ( !hvm_tsc_scaling_supported )
> > +        return 0;
> > +
> > +    /*
> > +     * The multiplication of the first two terms may overflow a 64-bit
> > +     * integer, so use mul_u64_u32_div() instead to keep precision.
> > +     */
> > +    ratio = mul_u64_u32_div(1ULL << hvm_funcs.tsc_scaling_ratio_frac_bits,
> > +                            gtsc_khz, cpu_khz);
> 
> Is this the only use for this new math64 function? If so, I don't
> see the point of adding that function, because (leaving limited
> significant bits aside) the above simply is
> 
> (gtsc_khz << hvm_funcs.tsc_scaling_ratio_frac_bits) / cpu_khz
> 
> which can be had without any multiplication. Personally, if this is
> indeed the only use, I'd favor converting the above to inline assembly
> here instead of adding that helper function (just like we have a
> number of asm()-s in x86/time.c for similar reasons).
>

OK, I'll rewrite it as asm(). mul_u64_u32_div() will not be used any
more and will be removed.
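
Roughly, I have something like the sketch below in mind (untested, and
the exact constraints / field names may still change in the next
version): it builds the 128-bit value gtsc_khz << frac_bits in rdx:rax
and divides it by cpu_khz with a single divq.

    u64 ratio, dummy;
    u8 frac_bits = hvm_funcs.tsc_scaling_ratio_frac_bits;

    /* Bail out early if the quotient cannot fit in 64 bits (divq would #DE). */
    if ( frac_bits < 64 &&
         (u64)gtsc_khz / cpu_khz >= (1ULL << (64 - frac_bits)) )
        return 0;

    asm ( "shldq %[frac],%[gkhz],%[zero] ; " /* rdx = high bits of gtsc_khz << frac_bits */
          "shlq  %[frac],%[gkhz]         ; " /* rax = low bits of gtsc_khz << frac_bits */
          "divq  %[hkhz]                   " /* rax = (gtsc_khz << frac_bits) / cpu_khz */
          : "=d" (dummy), "=a" (ratio)
          : [frac] "c" (frac_bits),
            [gkhz] "a" ((u64)gtsc_khz),
            [zero] "d" (0ULL),
            [hkhz] "rm" ((u64)cpu_khz) );

    return ratio > hvm_funcs.max_tsc_scaling_ratio ? 0 : ratio;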

I'll also inline another math64 function, mul_u64_u64_shr(), into its
single caller hvm_scale_tsc(). The math64 patch will then be dropped in
the next version.
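
For hvm_scale_tsc() the inlined version would then be roughly as below
(again just a sketch; the signature and the hvm_funcs field names are
assumed from this series and may change): a 64x64->128-bit multiply
followed by a right shift by the fractional bits.

    static u64 hvm_scale_tsc(const struct vcpu *v, u64 tsc)
    {
        u64 ratio = v->arch.hvm_vcpu.tsc_scaling_ratio;
        u64 dummy;

        if ( ratio == hvm_funcs.default_tsc_scaling_ratio )
            return tsc;

        /* tsc = (tsc * ratio) >> frac_bits, keeping the full 128-bit product. */
        asm ( "mulq %[ratio]; shrdq %[frac],%%rdx,%[tsc]"
              : [tsc] "+a" (tsc), "=&d" (dummy)
              : [frac] "c" (hvm_funcs.tsc_scaling_ratio_frac_bits),
                [ratio] "rm" (ratio) );

        return tsc;
    }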

> > +void hvm_setup_tsc_scaling(struct vcpu *v)
> > +{
> > +    v->arch.hvm_vcpu.tsc_scaling_ratio =
> > +        hvm_get_tsc_scaling_ratio(v->domain->arch.tsc_khz);
> > +}
> 
> So why again is this per-vCPU setup of per-vCPU state when it
> only depends on a per-domain input? If this was per-domain, its
> setup could be where it belongs - in arch_hvm_load().
>

It's per-domain state. I'll move the state to x86's struct arch_domain,
where other TSC fields are (or to struct hvm_domain, because this is only
used for HVM?).

Then it will be set up in tsc_set_info() after the guest TSC frequency
is determined.
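
Concretely, I mean something along these lines (only a sketch; whether
the field ends up in struct arch_domain or struct hvm_domain is still
open, so the exact path below is a placeholder):

    /* field moved to per-domain state, e.g. in struct hvm_domain */
    u64 tsc_scaling_ratio;

    void hvm_setup_tsc_scaling(struct domain *d)
    {
        d->arch.hvm_domain.tsc_scaling_ratio =
            hvm_get_tsc_scaling_ratio(d->arch.tsc_khz);
    }

    /* ... called from tsc_set_info() once d->arch.tsc_khz is final: */
    if ( hvm_tsc_scaling_supported && !d->arch.vtsc )
        hvm_setup_tsc_scaling(d);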

> > @@ -5504,6 +5536,9 @@ void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip)
> >      hvm_set_segment_register(v, x86_seg_gdtr, &reg);
> >      hvm_set_segment_register(v, x86_seg_idtr, &reg);
> >  
> > +    if ( hvm_tsc_scaling_supported && !d->arch.vtsc )
> > +        hvm_setup_tsc_scaling(v);
> 
> Could you remind me why this is needed? What state of the guest
> would have changed making this necessary? Is this perhaps just
> because it's per-vCPU instead of per-domain?
> 
> Jan
> 

Yes, just because I mistakenly made it per-vcpu. So it will not be
necessary in this patch after tsc_scaling_ratio becomes per-domain.

Haozhong
Jan Beulich Feb. 16, 2016, 10:12 a.m. UTC | #3
>>> On 16.02.16 at 10:44, <haozhong.zhang@intel.com> wrote:
> On 02/05/16 21:54, Jan Beulich wrote:
>> >>> On 17.01.16 at 22:58, <haozhong.zhang@intel.com> wrote:
>> > +void hvm_setup_tsc_scaling(struct vcpu *v)
>> > +{
>> > +    v->arch.hvm_vcpu.tsc_scaling_ratio =
>> > +        hvm_get_tsc_scaling_ratio(v->domain->arch.tsc_khz);
>> > +}
>> 
>> So why again is this per-vCPU setup of per-vCPU state when it
>> only depends on a per-domain input? If this was per-domain, its
>> setup could be where it belongs - in arch_hvm_load().
>>
> 
> It's per-domain state. I'll move the state to x86's struct arch_domain,
> where other TSC fields are (or to struct hvm_domain, because this is only
> used for HVM?).

struct hvm_domain please.

Jan

Patch

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 6d30d8b..64a7760 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -65,6 +65,7 @@ 
 #include <asm/mtrr.h>
 #include <asm/apic.h>
 #include <asm/vm_event.h>
+#include <asm/math64.h>
 #include <public/sched.h>
 #include <public/hvm/ioreq.h>
 #include <public/version.h>
@@ -301,6 +302,34 @@  int hvm_set_guest_pat(struct vcpu *v, u64 guest_pat)
     return 1;
 }
 
+/*
+ * Get the ratio to scale host TSC frequency to gtsc_khz. zero will be
+ * returned if TSC scaling is unavailable or ratio cannot be handled
+ * by host CPU. Otherwise, a non-zero ratio will be returned.
+ */
+u64 hvm_get_tsc_scaling_ratio(u32 gtsc_khz)
+{
+    u64 ratio;
+
+    if ( !hvm_tsc_scaling_supported )
+        return 0;
+
+    /*
+     * The multiplication of the first two terms may overflow a 64-bit
+     * integer, so use mul_u64_u32_div() instead to keep precision.
+     */
+    ratio = mul_u64_u32_div(1ULL << hvm_funcs.tsc_scaling_ratio_frac_bits,
+                            gtsc_khz, cpu_khz);
+
+    return ratio > hvm_funcs.max_tsc_scaling_ratio ? 0 : ratio;
+}
+
+void hvm_setup_tsc_scaling(struct vcpu *v)
+{
+    v->arch.hvm_vcpu.tsc_scaling_ratio =
+        hvm_get_tsc_scaling_ratio(v->domain->arch.tsc_khz);
+}
+
 void hvm_set_guest_tsc_fixed(struct vcpu *v, u64 guest_tsc, u64 at_tsc)
 {
     uint64_t tsc;
@@ -2026,6 +2055,9 @@  static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
     if ( hvm_funcs.load_cpu_ctxt(v, &ctxt) < 0 )
         return -EINVAL;
 
+    if ( hvm_tsc_scaling_supported && !d->arch.vtsc )
+        hvm_setup_tsc_scaling(v);
+
     v->arch.hvm_vcpu.msr_tsc_aux = ctxt.msr_tsc_aux;
 
     seg.limit = ctxt.idtr_limit;
@@ -5504,6 +5536,9 @@  void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip)
     hvm_set_segment_register(v, x86_seg_gdtr, &reg);
     hvm_set_segment_register(v, x86_seg_idtr, &reg);
 
+    if ( hvm_tsc_scaling_supported && !d->arch.vtsc )
+        hvm_setup_tsc_scaling(v);
+
     /* Sync AP's TSC with BSP's. */
     v->arch.hvm_vcpu.cache_tsc_offset =
         v->domain->vcpu[0]->arch.hvm_vcpu.cache_tsc_offset;
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 8b316a0..17bce5c 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -808,7 +808,7 @@  static uint64_t svm_scale_tsc(const struct vcpu *v, uint64_t tsc)
 {
     ASSERT(cpu_has_tsc_ratio && !v->domain->arch.vtsc);
 
-    return scale_tsc(tsc, vcpu_tsc_ratio(v));
+    return scale_tsc(tsc, v->arch.hvm_vcpu.tsc_scaling_ratio);
 }
 
 static uint64_t svm_get_tsc_offset(uint64_t host_tsc, uint64_t guest_tsc,
@@ -985,7 +985,7 @@  static inline void svm_tsc_ratio_save(struct vcpu *v)
 static inline void svm_tsc_ratio_load(struct vcpu *v)
 {
     if ( cpu_has_tsc_ratio && !v->domain->arch.vtsc ) 
-        wrmsrl(MSR_AMD64_TSC_RATIO, vcpu_tsc_ratio(v));
+        wrmsrl(MSR_AMD64_TSC_RATIO, v->arch.hvm_vcpu.tsc_scaling_ratio);
 }
 
 static void svm_ctxt_switch_from(struct vcpu *v)
@@ -1192,6 +1192,8 @@  static int svm_vcpu_initialise(struct vcpu *v)
 
     svm_guest_osvw_init(v);
 
+    v->arch.hvm_vcpu.tsc_scaling_ratio = DEFAULT_TSC_RATIO;
+
     return 0;
 }
 
diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index a243bc3..2ce6447 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1864,7 +1864,8 @@  void tsc_set_info(struct domain *d,
          */
         if ( tsc_mode == TSC_MODE_DEFAULT && host_tsc_is_safe() &&
              (has_hvm_container_domain(d) ?
-              d->arch.tsc_khz == cpu_khz || hvm_tsc_scaling_supported :
+              (d->arch.tsc_khz == cpu_khz ||
+               hvm_get_tsc_scaling_ratio(d->arch.tsc_khz)) :
               incarnation == 0) )
         {
     case TSC_MODE_NEVER_EMULATE:
@@ -1878,7 +1879,8 @@  void tsc_set_info(struct domain *d,
         d->arch.vtsc = !boot_cpu_has(X86_FEATURE_RDTSCP) ||
                        !host_tsc_is_safe();
         enable_tsc_scaling = has_hvm_container_domain(d) &&
-                             hvm_tsc_scaling_supported && !d->arch.vtsc;
+                             !d->arch.vtsc &&
+                             hvm_get_tsc_scaling_ratio(gtsc_khz ?: cpu_khz);
         d->arch.tsc_khz = (enable_tsc_scaling && gtsc_khz) ? gtsc_khz : cpu_khz;
         set_time_scale(&d->arch.vtsc_to_ns, d->arch.tsc_khz * 1000 );
         d->arch.ns_to_vtsc = scale_reciprocal(d->arch.vtsc_to_ns);
@@ -1897,9 +1899,20 @@  void tsc_set_info(struct domain *d,
     if ( has_hvm_container_domain(d) )
     {
         hvm_set_rdtsc_exiting(d, d->arch.vtsc);
-        if ( d->vcpu && d->vcpu[0] && incarnation == 0 )
+        if ( d->vcpu && d->vcpu[0] )
         {
             /*
+             * TSC scaling ratio on BSP is set here during domain
+             * creation, while the same TSC scaling ratio on APs will
+             * be set in hvm_vcpu_reset_state().
+             */
+            if ( hvm_tsc_scaling_supported && !d->arch.vtsc )
+                hvm_setup_tsc_scaling(d->vcpu[0]);
+
+            if ( incarnation )
+                return;
+
+            /*
              * set_tsc_offset() is called from hvm_vcpu_initialise() before
              * tsc_set_info(). New vtsc mode may require recomputing TSC
              * offset.
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 79ea59e..f2d1419 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -263,6 +263,9 @@  u64 hvm_get_guest_tsc_fixed(struct vcpu *v, u64 at_tsc);
 
 #define hvm_tsc_scaling_supported (!!hvm_funcs.default_tsc_scaling_ratio)
 
+u64 hvm_get_tsc_scaling_ratio(u32 gtsc_khz);
+void hvm_setup_tsc_scaling(struct vcpu *v);
+
 int hvm_set_mode(struct vcpu *v, int mode);
 void hvm_init_guest_time(struct domain *d);
 void hvm_set_guest_time(struct vcpu *v, u64 guest_time);
diff --git a/xen/include/asm-x86/hvm/svm/svm.h b/xen/include/asm-x86/hvm/svm/svm.h
index d60ec23..c954b7e 100644
--- a/xen/include/asm-x86/hvm/svm/svm.h
+++ b/xen/include/asm-x86/hvm/svm/svm.h
@@ -97,9 +97,6 @@  extern u32 svm_feature_flags;
 /* TSC rate */
 #define DEFAULT_TSC_RATIO       0x0000000100000000ULL
 #define TSC_RATIO_RSVD_BITS     0xffffff0000000000ULL
-#define TSC_RATIO(g_khz, h_khz) ( (((u64)(g_khz)<<32)/(u64)(h_khz)) & \
-                                  ~TSC_RATIO_RSVD_BITS )
-#define vcpu_tsc_ratio(v)       TSC_RATIO((v)->domain->arch.tsc_khz, cpu_khz)
 
 extern void svm_host_osvw_reset(void);
 extern void svm_host_osvw_init(void);
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 152d9f3..901c988 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -175,6 +175,8 @@  struct hvm_vcpu {
     u64                 msr_tsc_adjust;
     u64                 msr_xss;
 
+    u64                 tsc_scaling_ratio;
+
     union {
         struct arch_vmx_struct vmx;
         struct arch_svm_struct svm;