From patchwork Thu Dec 31 03:03:30 2015
X-Patchwork-Submitter: Haozhong Zhang
X-Patchwork-Id: 7935351
From: Haozhong Zhang
To: xen-devel@lists.xen.org, Jan Beulich, Boris Ostrovsky, Kevin Tian
Cc: Haozhong Zhang, Keir Fraser, Suravee Suthikulpanit, Andrew Cooper,
    Aravind Gopalakrishnan, Jun Nakajima
Date: Thu, 31 Dec 2015 11:03:30 +0800
Message-Id: <1451531020-29964-4-git-send-email-haozhong.zhang@intel.com>
In-Reply-To: <1451531020-29964-1-git-send-email-haozhong.zhang@intel.com>
References: <1451531020-29964-1-git-send-email-haozhong.zhang@intel.com>
Subject: [Xen-devel] [PATCH v3 03/13] x86/hvm: Scale host TSC when
 setting/getting guest TSC

The existing hvm_[set|get]_guest_tsc_fixed() calculate the guest TSC by
adding the TSC offset to the host TSC. When TSC scaling is enabled, the
host TSC should be scaled first.
This patch adds the scaling logic to those two functions.

Reviewed-by: Boris Ostrovsky
Signed-off-by: Haozhong Zhang
---
 xen/arch/x86/hvm/hvm.c        | 17 +++++++----------
 xen/arch/x86/hvm/svm/svm.c    | 12 ++++++++++++
 xen/include/asm-x86/hvm/hvm.h |  2 ++
 3 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 21470ec..3648a44 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -60,6 +60,7 @@
 #include
 #include
 #include
+#include /* for cpu_has_tsc_ratio */
 #include
 #include
 #include
@@ -310,13 +311,11 @@ void hvm_set_guest_tsc_fixed(struct vcpu *v, u64 guest_tsc, u64 at_tsc)
         tsc = hvm_get_guest_time_fixed(v, at_tsc);
         tsc = gtime_to_gtsc(v->domain, tsc);
     }
-    else if ( at_tsc )
-    {
-        tsc = at_tsc;
-    }
     else
     {
-        tsc = rdtsc();
+        tsc = at_tsc ?: rdtsc();
+        if ( cpu_has_tsc_ratio )
+            tsc = hvm_funcs.scale_tsc(v, tsc);
     }
 
     delta_tsc = guest_tsc - tsc;
@@ -344,13 +343,11 @@ u64 hvm_get_guest_tsc_fixed(struct vcpu *v, uint64_t at_tsc)
         tsc = hvm_get_guest_time_fixed(v, at_tsc);
         tsc = gtime_to_gtsc(v->domain, tsc);
     }
-    else if ( at_tsc )
-    {
-        tsc = at_tsc;
-    }
     else
     {
-        tsc = rdtsc();
+        tsc = at_tsc ?: rdtsc();
+        if ( cpu_has_tsc_ratio )
+            tsc = hvm_funcs.scale_tsc(v, tsc);
     }
 
     return tsc + v->arch.hvm_vcpu.cache_tsc_offset;
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index a66d854..c538a29 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -804,6 +804,16 @@ static uint64_t scale_tsc(uint64_t host_tsc, uint64_t ratio)
     return scaled_host_tsc;
 }
 
+static uint64_t svm_scale_tsc(struct vcpu *v, uint64_t tsc)
+{
+    struct domain *d = v->domain;
+
+    if ( !cpu_has_tsc_ratio || d->arch.vtsc )
+        return tsc;
+
+    return scale_tsc(tsc, vcpu_tsc_ratio(v));
+}
+
 static uint64_t svm_get_tsc_offset(uint64_t host_tsc, uint64_t guest_tsc,
                                    uint64_t ratio)
 {
@@ -2272,6 +2282,8 @@ static struct hvm_function_table __initdata svm_function_table = {
     .nhvm_vmcx_hap_enabled = nsvm_vmcb_hap_enabled,
     .nhvm_intr_blocked = nsvm_intr_blocked,
     .nhvm_hap_walk_L1_p2m = nsvm_hap_walk_L1_p2m,
+
+    .scale_tsc = svm_scale_tsc,
 };
 
 void svm_vmexit_handler(struct cpu_user_regs *regs)
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index b9d893d..ba6259e 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -212,6 +212,8 @@ struct hvm_function_table {
     void (*altp2m_vcpu_update_vmfunc_ve)(struct vcpu *v);
     bool_t (*altp2m_vcpu_emulate_ve)(struct vcpu *v);
     int (*altp2m_vcpu_emulate_vmfunc)(struct cpu_user_regs *regs);
+
+    uint64_t (*scale_tsc)(struct vcpu *v, uint64_t tsc);
 };
 
 extern struct hvm_function_table hvm_funcs;
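
[Editor's note, not part of the patch] As a rough standalone sketch of the
arithmetic the patch relies on: AMD's TSC ratio is a fixed-point value with
32 fractional bits, so a scaled host TSC is host_tsc * ratio >> 32, and the
guest-visible TSC is that value plus the per-vCPU TSC offset. The names
below (example_scale_tsc, example_guest_tsc, TSC_RATIO_FRAC_BITS) are
invented for illustration and do not exist in Xen.

/*
 * Standalone illustration only -- assumes the 8.32 fixed-point TSC ratio
 * format used by SVM; builds with GCC/Clang on 64-bit targets.
 */
#include <stdint.h>

#define TSC_RATIO_FRAC_BITS 32

/* host_tsc * ratio, where ratio has TSC_RATIO_FRAC_BITS fractional bits. */
static uint64_t example_scale_tsc(uint64_t host_tsc, uint64_t ratio)
{
    return (uint64_t)(((unsigned __int128)host_tsc * ratio)
                      >> TSC_RATIO_FRAC_BITS);
}

/* What the guest reads via RDTSC: scaled host TSC plus the TSC offset. */
static uint64_t example_guest_tsc(uint64_t host_tsc, uint64_t ratio,
                                  uint64_t tsc_offset)
{
    return example_scale_tsc(host_tsc, ratio) + tsc_offset;
}

For example, with ratio = 2ULL << TSC_RATIO_FRAC_BITS (a 2x ratio) a host
TSC of 1000 scales to 2000 before the TSC offset is added, which is why the
scaling must be applied before computing or consuming the offset as done in
hvm_[set|get]_guest_tsc_fixed().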