diff mbox series

[RFC,v3,10/21] KVM: x86: Fix software TSC upscaling in kvm_guest_time_update()

Message ID 20240522001817.619072-11-dwmw2@infradead.org (mailing list archive)
State New, archived
Series Cleaning up the KVM clock mess

Commit Message

David Woodhouse May 22, 2024, 12:17 a.m. UTC
From: David Woodhouse <dwmw@amazon.co.uk>

There was some confusion in kvm_guest_time_update() when software needs
to advance the guest TSC.

In master clock mode, there are two points in time which need to be taken
into account. First there is the master clock reference point, stored in
kvm->arch.master_kernel_ns (and associated host TSC ->master_cycle_now).
Secondly, there is the time *now*, at the point kvm_guest_time_update()
is being called.

With software TSC upscaling, the guest TSC gets further and further
ahead of the host TSC as time elapses. So at time "now", the guest TSC
should be further ahead of the host than it was at master_kernel_ns.

The adjustment in kvm_guest_time_update() was not taking that into
account, and was only advancing the guest TSC by the appropriate amount
for master_kernel_ns, *not* the current time.

Fix it to calculate them both correctly.

Since the KVM clock reference point in master_kernel_ns might actually
be *earlier* than the reference point used for the guest TSC
(vcpu->last_tsc_nsec), this might lead to a negative delta. Fix the
compute_guest_tsc() function to cope with negative numbers, which
then means there is no need to force a master clock update when the
guest TSC is written.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/x86/kvm/x86.c | 73 +++++++++++++++++++++++++++++++++++-----------
 1 file changed, 56 insertions(+), 17 deletions(-)
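The negative-delta handling the patch adds to compute_guest_tsc() can be modeled outside the kernel. Below is a minimal, self-contained C sketch; it is *not* the kernel code. `scale_delta()` approximates `pvclock_scale_delta()` as ((delta << shift) * mul_frac) >> 32 using 128-bit arithmetic, and the struct fields stand in for the `vcpu->arch.this_tsc_*` state:

```c
#include <stdint.h>

/*
 * Approximation of pvclock_scale_delta(): scale a nanosecond delta to
 * TSC cycles using a pre-shift and a 32-bit fractional multiplier.
 */
static uint64_t scale_delta(uint64_t delta, uint32_t mul_frac, int shift)
{
	if (shift < 0)
		delta >>= -shift;
	else
		delta <<= shift;

	return (uint64_t)(((unsigned __int128)delta * mul_frac) >> 32);
}

/*
 * Mirrors the patched compute_guest_tsc(): kernel_ns may now be
 * *earlier* than the reference point (ref_nsec), so a negative delta
 * is scaled as a positive magnitude and subtracted.
 */
static uint64_t compute_guest_tsc_model(int64_t kernel_ns, int64_t ref_nsec,
					uint64_t ref_tsc_write,
					uint32_t mul_frac, int shift)
{
	int64_t delta = kernel_ns - ref_nsec;
	uint64_t tsc = ref_tsc_write;

	if (delta >= 0)
		tsc += scale_delta((uint64_t)delta, mul_frac, shift);
	else
		tsc -= scale_delta((uint64_t)-delta, mul_frac, shift);

	return tsc;
}
```

With shift = 1 and mul_frac = 0x80000000 the scaling is exactly 1 cycle per nanosecond, which makes it easy to see the symmetry: a reference point 1000 ns in the past adds 1000 cycles, and one 1000 ns in the future subtracts 1000 cycles instead of wrapping.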

Comments

Paul Durrant May 24, 2024, 1:26 p.m. UTC | #1
On 22/05/2024 01:17, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> [snip]

Reviewed-by: Paul Durrant <paul@xen.org>
Sean Christopherson Aug. 14, 2024, 4:57 a.m. UTC | #2
On Wed, May 22, 2024, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> [snip]
> 
> Since the KVM clock reference point in master_kernel_ns might actually
> be *earlier* than the reference point used for the guest TSC
> (vcpu->last_tsc_nsec), this might lead to a negative delta. Fix the
> compute_guest_tsc() function to cope with negative numbers, which
> then means there is no need to force a master clock update when the
> guest TSC is written.

Please do this in a separate patch.  There's no need to squeeze it in here, and
this change is complex/subtle enough as it is.

> @@ -3300,8 +3306,6 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
>  		kernel_ns = get_kvmclock_base_ns();
>  	}
>  
> -	tsc_timestamp = kvm_read_l1_tsc(v, host_tsc);
> -
>  	/*
>  	 * We may have to catch up the TSC to match elapsed wall clock
>  	 * time for two reasons, even if kvmclock is used.
> @@ -3313,11 +3317,46 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
>  	 *	very slowly.
>  	 */
>  	if (vcpu->tsc_catchup) {
> -		u64 tsc = compute_guest_tsc(v, kernel_ns);

Random side topic: kernel_ns is an s64, shouldn't it be a u64?

> -		if (tsc > tsc_timestamp) {
> -			adjust_tsc_offset_guest(v, tsc - tsc_timestamp);
> -			tsc_timestamp = tsc;
> +		uint64_t now_host_tsc, now_guest_tsc;
> +		int64_t adjustment;
> +
> +		/*
> +		 * First, calculate what the guest TSC should be at the
> +		 * time (kernel_ns) which will be placed in the hvclock.
> +		 * This may be the *current* time, or it may be the time
> +		 * of the master clock reference. This is 'tsc_timestamp'.
> +		 */
> +		tsc_timestamp = compute_guest_tsc(v, kernel_ns);
> +
> +		now_guest_tsc = tsc_timestamp;
> +		now_host_tsc = host_tsc;
> +
> +#ifdef CONFIG_X86_64
> +		/*
> +		 * If the master clock was used, calculate what the guest
> +		 * TSC should be *now* in order to advance to that.
> +		 */
> +		if (use_master_clock) {
> +			int64_t now_kernel_ns;
> +
> +			if (!kvm_get_time_and_clockread(&now_kernel_ns,

Doesn't this need to be called under protection of the seqcount?

Ahh, but with that change, then get_cpu_tsc_khz() isn't guaranteed to be from
the same CPU.

Oof, disabling IRQs to protect against migration is complete overkill, and at
this point dumb luck as much as anything.  Saving IRQs was added by commit
18068523d3a0 ("KVM: paravirtualized clocksource: host part") before there
was any coordination with timekeeping.  And then after the coordination and
locking was added, commit c09664bb4418 ("KVM: x86: fix deadlock in clock-in-progress
request handling") moved the locking/coordination out of IRQ protection, and thus
made disabling IRQs completely pointless, except for protecting get_cpu_tsc_khz()
and now kvm_get_time_and_clockread().

Ha!  And if we slowly unwind that mess, this all ends up being _excruciatingly_
close to the same code as get_kvmclock().  Sadly, I don't think it's close enough
to be reusable, unless we want to play macro games.

> +							&now_host_tsc)) {
> +				now_kernel_ns = get_kvmclock_base_ns();
> +				now_host_tsc = rdtsc();
> +			}
> +			now_guest_tsc = compute_guest_tsc(v, now_kernel_ns);

I find the mixed state of kernel_ns and host_tsc to be terribly confusing.  It's
hard to see and remember that kernel_ns/host_tsc aren't "now" when use_master_clock
is true.

For TSC upscaling, I think we can have kernel_ns/host_tsc always be "now", we just
need to snapshot the master clock tsc+ns, and then shove those into kernel_ns and
host_tsc after doing the software upscaling.  That simplifies the TSC upscaling
code considerably, and IMO makes it more obvious how tsc_timestamp is computed,
and what its role is.

When all is said and done, I think we can get to this?

	/*
	 * If the host uses TSC clock, then passthrough TSC as stable
	 * to the guest.
	 */
	do {
		seq = read_seqcount_begin(&ka->pvclock_sc);

		use_master_clock = ka->use_master_clock;

		/*
		 * The TSC read and the call to get_cpu_tsc_khz() must happen
		 * on the same CPU.
		 */
		get_cpu();

		tgt_tsc_hz = get_cpu_tsc_khz();

		if (use_master_clock &&
		    !kvm_get_time_and_clockread(&kernel_ns, &host_tsc) &&
		    WARN_ON_ONCE(!read_seqcount_retry(&ka->pvclock_sc, seq)))
			use_master_clock = false;

		put_cpu();

		if (!use_master_clock)
			break;

		master_host_tsc = ka->master_cycle_now;
		master_kernel_ns = ka->master_kernel_ns;
	} while (read_seqcount_retry(&ka->pvclock_sc, seq));

	if (unlikely(!tgt_tsc_hz)) {
		kvm_make_request(KVM_REQ_CLOCK_UPDATE, v);
		return 1;
	}
	if (!use_master_clock) {
		host_tsc = rdtsc();
		kernel_ns = get_kvmclock_base_ns();
	}

	/*
	 * We may have to catch up the TSC to match elapsed wall clock
	 * time for two reasons, even if kvmclock is used.
	 *   1) CPU could have been running below the maximum TSC rate
	 *   2) Broken TSC compensation resets the base at each VCPU
	 *      entry to avoid unknown leaps of TSC even when running
	 *      again on the same CPU.  This may cause apparent elapsed
	 *      time to disappear, and the guest to stand still or run
	 *	very slowly.
	 */
	if (vcpu->tsc_catchup) {
		int64_t adjustment;

		/*
		 * Calculate the delta between what the guest TSC *should* be,
		 * and what it actually is according to kvm_read_l1_tsc().
		 */
		adjustment = compute_guest_tsc(v, kernel_ns) -
			     kvm_read_l1_tsc(v, host_tsc);
		if (adjustment > 0)
			adjust_tsc_offset_guest(v, adjustment);
	}

	/*
	 * Now that TSC upscaling is out of the way, the remaining calculations
	 * are all relative to the reference time that's placed in hv_clock.
	 * If the master clock is NOT in use, the reference time is "now".  If
	 * master clock is in use, the reference time comes from there.
	 */
	if (use_master_clock) {
		host_tsc = master_host_tsc;
		kernel_ns = master_kernel_ns;
	}
	tsc_timestamp = kvm_read_l1_tsc(v, host_tsc);
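The catch-up step in the sketch above only ever moves the offset forwards, so the guest TSC never appears to jump backwards. A toy, hypothetical C model of just that step (the L1 TSC is simplified to host_tsc + offset with no hardware scaling, and `should_be` stands in for what compute_guest_tsc() would return for kernel_ns):

```c
#include <stdint.h>

/* Hypothetical stand-in for the vCPU's TSC-offset state. */
struct toy_vcpu {
	int64_t tsc_offset;	/* models vcpu->arch.l1_tsc_offset */
};

/* Simplified kvm_read_l1_tsc(): no hardware TSC scaling, just an offset. */
static uint64_t toy_read_l1_tsc(const struct toy_vcpu *v, uint64_t host_tsc)
{
	return host_tsc + (uint64_t)v->tsc_offset;
}

/*
 * The catch-up rule: compare what the guest TSC *should* be against what
 * it would actually read now, and apply only positive adjustments so the
 * guest TSC is monotonic.
 */
static void toy_tsc_catchup(struct toy_vcpu *v, uint64_t should_be,
			    uint64_t host_tsc)
{
	int64_t adjustment = (int64_t)(should_be -
				       toy_read_l1_tsc(v, host_tsc));

	if (adjustment > 0)
		v->tsc_offset += adjustment;
}
```

If the guest is behind, one call brings it exactly up to date; if a later call computes a smaller `should_be` (guest already ahead), the offset is left alone rather than stepped backwards.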

Patch

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 89918ba266cd..e09dc44978ea 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2491,10 +2491,19 @@  static int kvm_set_tsc_khz(struct kvm_vcpu *vcpu, u32 user_tsc_khz)
 
 static u64 compute_guest_tsc(struct kvm_vcpu *vcpu, s64 kernel_ns)
 {
-	u64 tsc = pvclock_scale_delta(kernel_ns-vcpu->arch.this_tsc_nsec,
-				      vcpu->arch.virtual_tsc_mult,
-				      vcpu->arch.virtual_tsc_shift);
-	tsc += vcpu->arch.this_tsc_write;
+	s64 delta = kernel_ns - vcpu->arch.this_tsc_nsec;
+	u64 tsc = vcpu->arch.this_tsc_write;
+
+	/* pvclock_scale_delta cannot cope with negative deltas */
+	if (delta >= 0)
+		tsc += pvclock_scale_delta(delta,
+					   vcpu->arch.virtual_tsc_mult,
+					   vcpu->arch.virtual_tsc_shift);
+	else
+		tsc -= pvclock_scale_delta(-delta,
+					   vcpu->arch.virtual_tsc_mult,
+					   vcpu->arch.virtual_tsc_shift);
+
 	return tsc;
 }
 
@@ -2505,7 +2514,7 @@  static inline bool gtod_is_based_on_tsc(int mode)
 }
 #endif
 
-static void kvm_track_tsc_matching(struct kvm_vcpu *vcpu, bool new_generation)
+static void kvm_track_tsc_matching(struct kvm_vcpu *vcpu)
 {
 #ifdef CONFIG_X86_64
 	struct kvm_arch *ka = &vcpu->kvm->arch;
@@ -2522,12 +2531,9 @@  static void kvm_track_tsc_matching(struct kvm_vcpu *vcpu, bool new_generation)
 
 	/*
 	 * Request a masterclock update if the masterclock needs to be toggled
-	 * on/off, or when starting a new generation and the masterclock is
-	 * enabled (compute_guest_tsc() requires the masterclock snapshot to be
-	 * taken _after_ the new generation is created).
+	 * on/off.
 	 */
-	if ((ka->use_master_clock && new_generation) ||
-	    (ka->use_master_clock != use_master_clock))
+	if ((ka->use_master_clock != use_master_clock))
 		kvm_make_request(KVM_REQ_MASTERCLOCK_UPDATE, vcpu);
 
 	trace_kvm_track_tsc(vcpu->vcpu_id, ka->nr_vcpus_matched_tsc,
@@ -2705,7 +2711,7 @@  static void __kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 offset, u64 tsc,
 	vcpu->arch.this_tsc_nsec = kvm->arch.cur_tsc_nsec;
 	vcpu->arch.this_tsc_write = kvm->arch.cur_tsc_write;
 
-	kvm_track_tsc_matching(vcpu, !matched);
+	kvm_track_tsc_matching(vcpu);
 }
 
 static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 *user_value)
@@ -3300,8 +3306,6 @@  static int kvm_guest_time_update(struct kvm_vcpu *v)
 		kernel_ns = get_kvmclock_base_ns();
 	}
 
-	tsc_timestamp = kvm_read_l1_tsc(v, host_tsc);
-
 	/*
 	 * We may have to catch up the TSC to match elapsed wall clock
 	 * time for two reasons, even if kvmclock is used.
@@ -3313,11 +3317,46 @@  static int kvm_guest_time_update(struct kvm_vcpu *v)
 	 *	very slowly.
 	 */
 	if (vcpu->tsc_catchup) {
-		u64 tsc = compute_guest_tsc(v, kernel_ns);
-		if (tsc > tsc_timestamp) {
-			adjust_tsc_offset_guest(v, tsc - tsc_timestamp);
-			tsc_timestamp = tsc;
+		uint64_t now_host_tsc, now_guest_tsc;
+		int64_t adjustment;
+
+		/*
+		 * First, calculate what the guest TSC should be at the
+		 * time (kernel_ns) which will be placed in the hvclock.
+		 * This may be the *current* time, or it may be the time
+		 * of the master clock reference. This is 'tsc_timestamp'.
+		 */
+		tsc_timestamp = compute_guest_tsc(v, kernel_ns);
+
+		now_guest_tsc = tsc_timestamp;
+		now_host_tsc = host_tsc;
+
+#ifdef CONFIG_X86_64
+		/*
+		 * If the master clock was used, calculate what the guest
+		 * TSC should be *now* in order to advance to that.
+		 */
+		if (use_master_clock) {
+			int64_t now_kernel_ns;
+
+			if (!kvm_get_time_and_clockread(&now_kernel_ns,
+							&now_host_tsc)) {
+				now_kernel_ns = get_kvmclock_base_ns();
+				now_host_tsc = rdtsc();
+			}
+			now_guest_tsc = compute_guest_tsc(v, now_kernel_ns);
 		}
+#endif
+		/*
+		 * Calculate the delta between what the guest TSC *should* be,
+		 * and what it actually is according to kvm_read_l1_tsc().
+		 */
+		adjustment = now_guest_tsc - kvm_read_l1_tsc(v, now_host_tsc);
+
+		if (adjustment > 0)
+			adjust_tsc_offset_guest(v, adjustment);
+	} else {
+		tsc_timestamp = kvm_read_l1_tsc(v, host_tsc);
 	}
 
 	local_irq_restore(flags);