From patchwork Wed Sep 2 14:34:57 2009
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Glauber Costa
X-Patchwork-Id: 45220
Received: from vger.kernel.org (vger.kernel.org [209.132.176.167])
	by demeter.kernel.org (8.14.2/8.14.2) with ESMTP id n82EA9DB025891
	for ; Wed, 2 Sep 2009 14:10:10 GMT
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752829AbZIBOJA (ORCPT ); Wed, 2 Sep 2009 10:09:00 -0400
Received: (majordomo@vger.kernel.org) by vger.kernel.org
	id S1752817AbZIBOI7 (ORCPT ); Wed, 2 Sep 2009 10:08:59 -0400
Received: from mx1.redhat.com ([209.132.183.28]:55454 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752475AbZIBOIk (ORCPT ); Wed, 2 Sep 2009 10:08:40 -0400
Received: from int-mx08.intmail.prod.int.phx2.redhat.com
	(int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.21])
	by mx1.redhat.com (8.13.8/8.13.8) with ESMTP id n82E8gpk025704;
	Wed, 2 Sep 2009 10:08:42 -0400
Received: from localhost.localdomain (virtlab1.virt.bos.redhat.com [10.16.72.21])
	by int-mx08.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id n82E8c23029687; Wed, 2 Sep 2009 10:08:41 -0400
From: Glauber Costa
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, avi@redhat.com
Subject: [PATCH v2 1/2] keep guest wallclock in sync with host clock
Date: Wed, 2 Sep 2009 10:34:57 -0400
Message-Id: <1251902098-8660-2-git-send-email-glommer@redhat.com>
In-Reply-To: <1251902098-8660-1-git-send-email-glommer@redhat.com>
References: <1251902098-8660-1-git-send-email-glommer@redhat.com>
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.21
Sender: kvm-owner@vger.kernel.org
Precedence: bulk
List-ID: 
X-Mailing-List: kvm@vger.kernel.org

KVM clock is great for avoiding drift in guest VMs running on top of KVM. However, the current mechanism will not propagate changes in the wallclock value upwards.
This effectively means that in a large pool of VMs that need accurate timing, all of them have to run NTP, instead of just the host doing it.

Since the host updates the information in the shared memory area upon MSR writes, this patch introduces a worker that writes to that MSR and calls do_settimeofday() at fixed intervals, with second resolution. An interval of 0 means we are not interested in this behaviour. A later patch will make this optional at runtime.

Signed-off-by: Glauber Costa
---
 arch/x86/kernel/kvmclock.c |   70 ++++++++++++++++++++++++++++++++++++++-----
 1 files changed, 61 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index e5efcdc..555aab0 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -27,6 +27,7 @@
 #define KVM_SCALE 22
 
 static int kvmclock = 1;
+static unsigned int kvm_wall_update_interval = 0;
 
 static int parse_no_kvmclock(char *arg)
 {
@@ -39,24 +40,75 @@ early_param("no-kvmclock", parse_no_kvmclock);
 static DEFINE_PER_CPU_SHARED_ALIGNED(struct pvclock_vcpu_time_info, hv_clock);
 static struct pvclock_wall_clock wall_clock;
 
-/*
- * The wallclock is the time of day when we booted. Since then, some time may
- * have elapsed since the hypervisor wrote the data. So we try to account for
- * that with system time
- */
-static unsigned long kvm_get_wallclock(void)
+static void kvm_get_wall_ts(struct timespec *ts)
 {
-	struct pvclock_vcpu_time_info *vcpu_time;
-	struct timespec ts;
 	int low, high;
+	struct pvclock_vcpu_time_info *vcpu_time;
 
 	low = (int)__pa_symbol(&wall_clock);
 	high = ((u64)__pa_symbol(&wall_clock) >> 32);
 	native_write_msr(MSR_KVM_WALL_CLOCK, low, high);
 
 	vcpu_time = &get_cpu_var(hv_clock);
-	pvclock_read_wallclock(&wall_clock, vcpu_time, &ts);
+	pvclock_read_wallclock(&wall_clock, vcpu_time, ts);
 	put_cpu_var(hv_clock);
+}
+
+static void kvm_sync_wall_clock(struct work_struct *work);
+static DECLARE_DELAYED_WORK(kvm_sync_wall_work, kvm_sync_wall_clock);
+
+static void schedule_next_update(void)
+{
+	struct timespec next;
+
+	if ((kvm_wall_update_interval == 0) ||
+	    (!kvm_para_available()) ||
+	    (!kvm_para_has_feature(KVM_FEATURE_CLOCKSOURCE)))
+		return;
+
+	next.tv_sec = kvm_wall_update_interval;
+	next.tv_nsec = 0;
+
+	schedule_delayed_work(&kvm_sync_wall_work, timespec_to_jiffies(&next));
+}
+
+static void kvm_sync_wall_clock(struct work_struct *work)
+{
+	struct timespec now, after;
+	u64 nsec_delta;
+
+	do {
+		kvm_get_wall_ts(&now);
+		do_settimeofday(&now);
+		kvm_get_wall_ts(&after);
+		nsec_delta = (u64)after.tv_sec * NSEC_PER_SEC + after.tv_nsec;
+		nsec_delta -= (u64)now.tv_sec * NSEC_PER_SEC + now.tv_nsec;
+	} while (nsec_delta > NSEC_PER_SEC / 8);
+
+	schedule_next_update();
+}
+
+static __init int init_updates(void)
+{
+	schedule_next_update();
+	return 0;
+}
+/*
+ * It has to be run after workqueues are initialized, since we call
+ * schedule_delayed_work. Other than that, we have no specific requirements
+ */
+late_initcall(init_updates);
+
+/*
+ * The wallclock is the time of day when we booted. Since then, some time may
+ * have elapsed since the hypervisor wrote the data. So we try to account for
+ * that with system time
+ */
+static unsigned long kvm_get_wallclock(void)
+{
+	struct timespec ts;
+
+	kvm_get_wall_ts(&ts);
 	return ts.tv_sec;
 }
 