From patchwork Wed Aug 30 10:34:32 2017
X-Patchwork-Submitter: Sergey Dyasli
X-Patchwork-Id: 9929271
From: Sergey Dyasli <sergey.dyasli@citrix.com>
To: xen-devel@lists.xen.org
Date: Wed, 30 Aug 2017 11:34:32 +0100
Message-ID: <20170830103433.6605-5-sergey.dyasli@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170830103433.6605-1-sergey.dyasli@citrix.com>
References: <20170830103433.6605-1-sergey.dyasli@citrix.com>
Cc: Sergey Dyasli, Kevin Tian, Stefano Stabellini, Wei Liu, Jun Nakajima,
 George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Jan Beulich
Subject: [Xen-devel] [PATCH v1 4/5] x86/msr: introduce guest_rdmsr()
List-Id: Xen developer discussion
The new function is responsible for handling RDMSR from both HVM and PV
guests. Currently it handles only 2 MSRs:

    MSR_INTEL_PLATFORM_INFO
    MSR_INTEL_MISC_FEATURES_ENABLES

It behaves differently from the old MSR handlers: if an MSR is handled by
guest_rdmsr(), the RDMSR will either succeed (if the guest is allowed to
access it, based on its MSR policy) or produce a #GP fault. A guest will
never see the H/W value of an MSR unknown to this function.

guest_rdmsr() unifies and replaces the handling code from
vmx_msr_read_intercept() and priv_op_read_msr().

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
Reviewed-by: Kevin Tian, with a small comment:
---
 xen/arch/x86/hvm/hvm.c         |  7 ++++++-
 xen/arch/x86/hvm/vmx/vmx.c     | 10 ----------
 xen/arch/x86/msr.c             | 31 +++++++++++++++++++++++++++++++
 xen/arch/x86/pv/emul-priv-op.c | 22 ++++------------------
 xen/include/asm-x86/msr.h      |  8 ++++++++
 5 files changed, 49 insertions(+), 29 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 2ad07d52bc..ec7205ee32 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3334,11 +3334,16 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     struct vcpu *v = current;
     struct domain *d = v->domain;
     uint64_t *var_range_base, *fixed_range_base;
-    int ret = X86EMUL_OKAY;
+    int ret;
 
     var_range_base = (uint64_t *)v->arch.hvm_vcpu.mtrr.var_ranges;
     fixed_range_base = (uint64_t *)v->arch.hvm_vcpu.mtrr.fixed_ranges;
 
+    if ( (ret = guest_rdmsr(v, msr, msr_content)) != X86EMUL_UNHANDLEABLE )
+        return ret;
+    else
+        ret = X86EMUL_OKAY;
+
     switch ( msr )
     {
         unsigned int index;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 155fba9017..ac34383658 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2896,16 +2896,6 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
             goto gp_fault;
         break;
 
-    case MSR_INTEL_PLATFORM_INFO:
-        *msr_content = MSR_PLATFORM_INFO_CPUID_FAULTING;
-        break;
-
-    case MSR_INTEL_MISC_FEATURES_ENABLES:
-        *msr_content = 0;
-        if ( current->arch.msr->misc_features_enables.cpuid_faulting )
-            *msr_content |= MSR_MISC_FEATURES_CPUID_FAULTING;
-        break;
-
     default:
         if ( passive_domain_do_rdmsr(msr, msr_content) )
             goto done;
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index b5ad97d3c8..a822a132ad 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -117,6 +117,37 @@ int init_vcpu_msr_policy(struct vcpu *v)
     return 0;
 }
 
+int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
+{
+    const struct msr_domain_policy *dp = v->domain->arch.msr;
+    const struct msr_vcpu_policy *vp = v->arch.msr;
+
+    switch ( msr )
+    {
+    case MSR_INTEL_PLATFORM_INFO:
+        if ( !dp->plaform_info.available )
+            goto gp_fault;
+        *val = (uint64_t) dp->plaform_info.cpuid_faulting <<
+               _MSR_PLATFORM_INFO_CPUID_FAULTING;
+        break;
+
+    case MSR_INTEL_MISC_FEATURES_ENABLES:
+        if ( !vp->misc_features_enables.available )
+            goto gp_fault;
+        *val = (uint64_t) vp->misc_features_enables.cpuid_faulting <<
+               _MSR_MISC_FEATURES_CPUID_FAULTING;
+        break;
+
+    default:
+        return X86EMUL_UNHANDLEABLE;
+    }
+
+    return X86EMUL_OKAY;
+
+ gp_fault:
+    return X86EMUL_EXCEPTION;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index 66cda538fc..d563214fc4 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -834,6 +834,10 @@ static int priv_op_read_msr(unsigned int reg, uint64_t *val,
     const struct vcpu *curr = current;
     const struct domain *currd = curr->domain;
     bool vpmu_msr = false;
+    int ret;
+
+    if ( (ret = guest_rdmsr(curr, reg, val)) != X86EMUL_UNHANDLEABLE )
+        return ret;
 
     switch ( reg )
     {
@@ -934,24 +938,6 @@ static int priv_op_read_msr(unsigned int reg, uint64_t *val,
         *val = 0;
         return X86EMUL_OKAY;
 
-    case MSR_INTEL_PLATFORM_INFO:
-        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
-             rdmsr_safe(MSR_INTEL_PLATFORM_INFO, *val) )
-            break;
-        *val = 0;
-        if ( this_cpu(cpuid_faulting_enabled) )
-            *val |= MSR_PLATFORM_INFO_CPUID_FAULTING;
-        return X86EMUL_OKAY;
-
-    case MSR_INTEL_MISC_FEATURES_ENABLES:
-        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
-             rdmsr_safe(MSR_INTEL_MISC_FEATURES_ENABLES, *val) )
-            break;
-        *val = 0;
-        if ( curr->arch.msr->misc_features_enables.cpuid_faulting )
-            *val |= MSR_MISC_FEATURES_CPUID_FAULTING;
-        return X86EMUL_OKAY;
-
     case MSR_P6_PERFCTR(0) ... MSR_P6_PERFCTR(7):
     case MSR_P6_EVNTSEL(0) ... MSR_P6_EVNTSEL(3):
     case MSR_CORE_PERF_FIXED_CTR0 ... MSR_CORE_PERF_FIXED_CTR2:
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index 7c8395b9b3..9cc505cb40 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -226,6 +226,14 @@ void init_guest_msr_policy(void);
 int init_domain_msr_policy(struct domain *d);
 int init_vcpu_msr_policy(struct vcpu *v);
 
+/*
+ * The below functions can return X86EMUL_UNHANDLEABLE, which means the MSR
+ * is not (yet) handled by them and must be processed by the legacy
+ * handlers.  Such behaviour is needed for the transition period until all
+ * rd/wrmsr handling is moved to the new MSR infrastructure.
+ */
+int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_MSR_H */
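For readers following the series, the dispatch contract introduced above can
be sketched in a self-contained form: consult the new policy-based handler
first, and only fall through to the legacy handlers on X86EMUL_UNHANDLEABLE.
This is an illustrative toy, not the real Xen code: struct msr_policy,
toy_guest_rdmsr(), toy_read_msr(), and the constant values are simplified
stand-ins for the actual Xen types and enums.

```c
#include <stdint.h>

/* Illustrative return codes; the real values come from Xen's x86_emulate. */
#define X86EMUL_OKAY          0
#define X86EMUL_EXCEPTION     1   /* surfaces as a #GP fault to the guest */
#define X86EMUL_UNHANDLEABLE  2   /* fall through to the legacy handlers */

#define MSR_INTEL_PLATFORM_INFO            0x000000ce
#define _MSR_PLATFORM_INFO_CPUID_FAULTING  31

/* Toy policy: is the MSR visible to this guest, and is CPUID faulting
 * advertised in it?  Stand-in for msr_domain_policy/msr_vcpu_policy. */
struct msr_policy {
    int platform_info_available;
    int cpuid_faulting;
};

/* Mirrors the guest_rdmsr() contract: OKAY when the policy permits the
 * read, EXCEPTION when the policy hides the MSR, UNHANDLEABLE for any
 * MSR this function does not know about. */
int toy_guest_rdmsr(const struct msr_policy *p, uint32_t msr, uint64_t *val)
{
    switch ( msr )
    {
    case MSR_INTEL_PLATFORM_INFO:
        if ( !p->platform_info_available )
            return X86EMUL_EXCEPTION;
        *val = (uint64_t)p->cpuid_faulting << _MSR_PLATFORM_INFO_CPUID_FAULTING;
        return X86EMUL_OKAY;

    default:
        return X86EMUL_UNHANDLEABLE;
    }
}

/* Caller-side pattern used by hvm_msr_read_intercept() and
 * priv_op_read_msr(): any verdict other than UNHANDLEABLE is final. */
int toy_read_msr(const struct msr_policy *p, uint32_t msr, uint64_t *val)
{
    int ret = toy_guest_rdmsr(p, msr, val);

    if ( ret != X86EMUL_UNHANDLEABLE )
        return ret;

    /* The legacy switch ( msr ) handlers would run here during the
     * transition period; this sketch has none, so the read fails. */
    return X86EMUL_UNHANDLEABLE;
}
```

The point of the tri-state return is that a handled-but-denied MSR produces a
#GP fault instead of silently leaking the hardware value, while unknown MSRs
keep their old behaviour until they are migrated to the new infrastructure.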