From patchwork Sun May 19 07:06:37 2013
X-Patchwork-Submitter: Vadim Rozenfeld
X-Patchwork-Id: 2590011
From: Vadim Rozenfeld
To: kvm@vger.kernel.org
Cc: gleb@redhat.com, mtosatti@redhat.com, pl@dlh.net, Vadim Rozenfeld
Subject: [RFC PATCH v2 2/2] add support for Hyper-V invariant TSC
Date: Sun, 19 May 2013 17:06:37 +1000
Message-Id: <1368947197-9033-3-git-send-email-vrozenfe@redhat.com>
In-Reply-To: <1368947197-9033-1-git-send-email-vrozenfe@redhat.com>
References: <1368947197-9033-1-git-send-email-vrozenfe@redhat.com>

The following patch allows activating the partition reference time
enlightenment, which is based on the host platform's support for an
Invariant Time Stamp Counter (iTSC).

NOTE: This code will not survive migration, due to the lack of VM
stop/resume handlers where the offset, scale and sequence should be
readjusted.
---
 arch/x86/kvm/x86.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9645dab..b423fe4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1838,7 +1838,6 @@ static int set_msr_hyperv_pw(struct kvm_vcpu *vcpu, u32 msr, u64 data)
 		u64 gfn;
 		unsigned long addr;
 		HV_REFERENCE_TSC_PAGE tsc_ref;
-		tsc_ref.TscSequence = 0;
 		if (!(data & HV_X64_MSR_TSC_REFERENCE_ENABLE)) {
 			kvm->arch.hv_tsc_page = data;
 			break;
@@ -1848,6 +1847,11 @@
 			HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT);
 		if (kvm_is_error_hva(addr))
 			return 1;
+		tsc_ref.TscSequence =
+			boot_cpu_has(X86_FEATURE_CONSTANT_TSC) ? 1 : 0;
+		tsc_ref.TscScale =
+			((10000LL << 32) / vcpu->arch.virtual_tsc_khz) << 32;
+		tsc_ref.TscOffset = 0;
 		if (__copy_to_user((void __user *)addr, &tsc_ref, sizeof(tsc_ref)))
 			return 1;
 		mark_page_dirty(kvm, gfn);
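
For reference (not part of the diff above), the sketch below shows roughly
how a Hyper-V aware guest is expected to consume the page this hunk fills
in. Per the Hyper-V partition reference time definition, the reference time
in 100 ns units is ((VirtualTsc * TscScale) >> 64) + TscOffset, which is why
the host programs TscScale = ((10000 << 32) / tsc_khz) << 32, so that
(tsc * TscScale) >> 64 ~= tsc * 10000 / tsc_khz. The field names match the
HV_REFERENCE_TSC_PAGE layout used by the patch; the helper itself, its rdtsc
read and the retry-on-sequence-change loop are only an illustration, not
code from this series, and memory barriers are omitted for brevity.

/*
 * Illustrative guest-side helper (hypothetical, not part of this patch):
 * derive the partition reference time from the reference TSC page.
 */
static u64 hv_read_reference_time(volatile HV_REFERENCE_TSC_PAGE *tsc_page)
{
	u32 sequence, lo, hi;
	u64 tsc, scale, offset;

	do {
		sequence = tsc_page->TscSequence;
		if (sequence == 0)
			return 0; /* page invalid: fall back to HV_X64_MSR_TIME_REF_COUNT */

		scale  = tsc_page->TscScale;
		offset = tsc_page->TscOffset;
		asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
		tsc = ((u64)hi << 32) | lo;

		/* re-read if the host updated the page while we were reading it */
	} while (tsc_page->TscSequence != sequence);

	/* 64x64 -> 128-bit multiply, keep the upper 64 bits, then add the offset */
	return (u64)(((unsigned __int128)tsc * scale) >> 64) + offset;
}

Since the hunk above publishes TscSequence as 0 whenever the host lacks a
constant TSC, a guest following this pattern is expected to fall back to the
slower HV_X64_MSR_TIME_REF_COUNT MSR instead of using the TSC page.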