From patchwork Wed Oct 9 00:41:37 2019
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 11180197
Date: Tue, 8 Oct 2019 17:41:37 -0700
Message-Id: <20191009004142.225377-1-aaronlewis@google.com>
Subject: [Patch 1/6] KVM: VMX: Remove unneeded check for X86_FEATURE_XSAVE
From: Aaron Lewis
To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm@vger.kernel.org
Cc: Paolo Bonzini, Aaron Lewis, Jim Mattson
List-ID: kvm@vger.kernel.org

The SDM says that IA32_XSS is supported "if CPUID.(0DH, 1):EAX[3] = 1",
so only the X86_FEATURE_XSAVES check is necessary.
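(As a quick reference for the CPUID condition quoted above: the same bit can be
probed from user space. The sketch below is not part of the patch and assumes a
reasonably recent GCC or Clang that provides __get_cpuid_count().)

#include <cpuid.h>
#include <stdio.h>

/* CPUID.(EAX=0DH, ECX=1):EAX[3] enumerates XSAVES/XRSTORS and IA32_XSS. */
int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid_count(0x0d, 1, &eax, &ebx, &ecx, &edx))
		return 1;

	printf("XSAVES/IA32_XSS supported: %s\n",
	       (eax & (1u << 3)) ? "yes" : "no");
	return 0;
}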
Fixes: 4d763b168e9c5 ("KVM: VMX: check CPUID before allowing read/write of IA32_XSS")
Reviewed-by: Jim Mattson
Signed-off-by: Aaron Lewis
---
 arch/x86/kvm/vmx/vmx.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e7970a2e8eae..409e9a7323f1 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1823,8 +1823,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_XSS:
 		if (!vmx_xsaves_supported() ||
 		    (!msr_info->host_initiated &&
-		     !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
-		       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
+		     !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES)))
 			return 1;
 		msr_info->data = vcpu->arch.ia32_xss;
 		break;
@@ -2066,8 +2065,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_XSS:
 		if (!vmx_xsaves_supported() ||
 		    (!msr_info->host_initiated &&
-		     !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
-		       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
+		     !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES)))
 			return 1;
 		/*
 		 * The only supported bit as of Skylake is bit 8, but

From patchwork Wed Oct 9 00:41:38 2019
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 11180199
Date: Tue, 8 Oct 2019 17:41:38 -0700
In-Reply-To: <20191009004142.225377-1-aaronlewis@google.com>
Message-Id: <20191009004142.225377-2-aaronlewis@google.com>
References: <20191009004142.225377-1-aaronlewis@google.com>
Subject: [Patch 2/6] KVM: VMX: Use wrmsr for switching between guest and host IA32_XSS
From: Aaron Lewis
To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm@vger.kernel.org
Cc: Paolo Bonzini, Aaron Lewis, Jim Mattson

Set IA32_XSS for the guest and host during VM Enter and VM Exit
transitions rather than by using the MSR-load areas.

Reviewed-by: Jim Mattson
Signed-off-by: Aaron Lewis
---
 arch/x86/kvm/svm.c     |  4 ++--
 arch/x86/kvm/vmx/vmx.c | 14 ++------------
 arch/x86/kvm/x86.c     | 25 +++++++++++++++++++++----
 arch/x86/kvm/x86.h     |  4 ++--
 4 files changed, 27 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f8ecb6df5106..e2d7a7738c76 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5628,7 +5628,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 	svm->vmcb->save.cr2 = vcpu->arch.cr2;
 
 	clgi();
-	kvm_load_guest_xcr0(vcpu);
+	kvm_load_guest_xsave_controls(vcpu);
 
 	if (lapic_in_kernel(vcpu) &&
 		vcpu->arch.apic->lapic_timer.timer_advance_ns)
@@ -5778,7 +5778,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
 		kvm_before_interrupt(&svm->vcpu);
 
-	kvm_put_guest_xcr0(vcpu);
+	kvm_load_host_xsave_controls(vcpu);
 	stgi();
 
 	/* Any pending NMI will happen here */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 409e9a7323f1..ff5ba28abecb 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -106,8 +106,6 @@ module_param(enable_apicv, bool, S_IRUGO);
 static bool __read_mostly nested = 1;
 module_param(nested, bool, S_IRUGO);
 
-static u64 __read_mostly host_xss;
-
 bool __read_mostly enable_pml = 1;
 module_param_named(pml, enable_pml, bool, S_IRUGO);
 
@@ -2074,11 +2072,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (data != 0)
 			return 1;
 		vcpu->arch.ia32_xss = data;
-		if (vcpu->arch.ia32_xss != host_xss)
-			add_atomic_switch_msr(vmx, MSR_IA32_XSS,
-				vcpu->arch.ia32_xss, host_xss, false);
-		else
-			clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
 		break;
 	case MSR_IA32_RTIT_CTL:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
@@ -6540,7 +6533,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)
 		vmx_set_interrupt_shadow(vcpu, 0);
 
-	kvm_load_guest_xcr0(vcpu);
+	kvm_load_guest_xsave_controls(vcpu);
 
 	if (static_cpu_has(X86_FEATURE_PKU) &&
 	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
@@ -6647,7 +6640,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		__write_pkru(vmx->host_pkru);
 	}
 
-	kvm_put_guest_xcr0(vcpu);
+	kvm_load_host_xsave_controls(vcpu);
 
 	vmx->nested.nested_run_pending = 0;
 	vmx->idt_vectoring_info = 0;
@@ -7599,9 +7592,6 @@ static __init int hardware_setup(void)
 		WARN_ONCE(host_bndcfgs, "KVM: BNDCFGS in host will be lost");
 	}
 
-	if (boot_cpu_has(X86_FEATURE_XSAVES))
-		rdmsrl(MSR_IA32_XSS, host_xss);
-
 	if (!cpu_has_vmx_vpid() || !cpu_has_vmx_invvpid() ||
 	    !(cpu_has_vmx_invvpid_single() ||
 	      cpu_has_vmx_invvpid_global()))
 		enable_vpid = 0;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 661e2bf38526..e90e658fd8a9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -176,6 +176,8 @@ struct kvm_shared_msrs {
 static struct kvm_shared_msrs_global __read_mostly shared_msrs_global;
 static struct kvm_shared_msrs __percpu *shared_msrs;
 
+static u64 __read_mostly host_xss;
+
 struct kvm_stats_debugfs_item debugfs_entries[] = {
 	{ "pf_fixed", VCPU_STAT(pf_fixed) },
 	{ "pf_guest", VCPU_STAT(pf_guest) },
@@ -812,27 +814,39 @@ void kvm_lmsw(struct kvm_vcpu *vcpu, unsigned long msw)
 }
 EXPORT_SYMBOL_GPL(kvm_lmsw);
 
-void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu)
+void kvm_load_guest_xsave_controls(struct kvm_vcpu *vcpu)
 {
 	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
 			!vcpu->guest_xcr0_loaded) {
 		/* kvm_set_xcr() also depends on this */
 		if (vcpu->arch.xcr0 != host_xcr0)
 			xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
+
+		if (kvm_x86_ops->xsaves_supported() &&
+		    guest_cpuid_has(vcpu, X86_FEATURE_XSAVES) &&
+		    vcpu->arch.ia32_xss != host_xss)
+			wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
+
 		vcpu->guest_xcr0_loaded = 1;
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_load_guest_xcr0);
+EXPORT_SYMBOL_GPL(kvm_load_guest_xsave_controls);
 
-void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
+void kvm_load_host_xsave_controls(struct kvm_vcpu *vcpu)
 {
 	if (vcpu->guest_xcr0_loaded) {
 		if (vcpu->arch.xcr0 != host_xcr0)
 			xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);
+
+		if (kvm_x86_ops->xsaves_supported() &&
+		    guest_cpuid_has(vcpu, X86_FEATURE_XSAVES) &&
+		    vcpu->arch.ia32_xss != host_xss)
+			wrmsrl(MSR_IA32_XSS, host_xss);
+
 		vcpu->guest_xcr0_loaded = 0;
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_put_guest_xcr0);
+EXPORT_SYMBOL_GPL(kvm_load_host_xsave_controls);
 
 static int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
 {
@@ -9293,6 +9307,9 @@ int kvm_arch_hardware_setup(void)
 		kvm_default_tsc_scaling_ratio = 1ULL << kvm_tsc_scaling_ratio_frac_bits;
 	}
 
+	if (boot_cpu_has(X86_FEATURE_XSAVES))
+		rdmsrl(MSR_IA32_XSS, host_xss);
+
 	kvm_init_msr_list();
 	return 0;
 }
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index dbf7442a822b..0d04e865665b 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -366,7 +366,7 @@ static inline bool kvm_pat_valid(u64 data)
 	return (data | ((data & 0x0202020202020202ull) << 1)) == data;
 }
 
-void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu);
-void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu);
+void kvm_load_guest_xsave_controls(struct kvm_vcpu *vcpu);
+void kvm_load_host_xsave_controls(struct kvm_vcpu *vcpu);
 
 #endif
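(The hunks above only write IA32_XSS when the guest and host values actually
differ. Below is a simplified, self-contained sketch of that switching pattern;
host_xss, guest_xss and wrmsr_ia32_xss are illustrative stand-ins rather than
KVM symbols, and WRMSR is privileged, so the code is only meaningful at CPL 0.)

#include <stdbool.h>
#include <stdint.h>

#define MSR_IA32_XSS 0xda0

static uint64_t host_xss;	/* captured once at setup with RDMSR */

struct vcpu_state {
	uint64_t guest_xss;	/* value the guest has written */
	bool xss_loaded;	/* does the CPU currently hold the guest value? */
};

/* Thin wrapper around the privileged WRMSR instruction. */
static inline void wrmsr_ia32_xss(uint64_t value)
{
	uint32_t lo = (uint32_t)value, hi = (uint32_t)(value >> 32);

	__asm__ volatile("wrmsr" : : "c"(MSR_IA32_XSS), "a"(lo), "d"(hi));
}

/* Before VM-entry: install the guest value only if it differs from the host's. */
static void load_guest_xss(struct vcpu_state *vcpu)
{
	if (vcpu->guest_xss != host_xss) {
		wrmsr_ia32_xss(vcpu->guest_xss);
		vcpu->xss_loaded = true;
	}
}

/* After VM-exit: restore the host value only if it was actually replaced. */
static void load_host_xss(struct vcpu_state *vcpu)
{
	if (vcpu->xss_loaded) {
		wrmsr_ia32_xss(host_xss);
		vcpu->xss_loaded = false;
	}
}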
From patchwork Wed Oct 9 00:41:39 2019
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 11180201
Date: Tue, 8 Oct 2019 17:41:39 -0700
In-Reply-To: <20191009004142.225377-1-aaronlewis@google.com>
Message-Id: <20191009004142.225377-3-aaronlewis@google.com>
References: <20191009004142.225377-1-aaronlewis@google.com>
Subject: [Patch 3/6] kvm: svm: Add support for XSAVES on AMD
From: Aaron Lewis
To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm@vger.kernel.org
Cc: Paolo Bonzini, Aaron Lewis, Jim Mattson

Hoist support for IA32_XSS so it can be used for both AMD and Intel,
instead of for just Intel. AMD has no equivalent of Intel's "Enable
XSAVES/XRSTORS" VM-execution control. Instead, XSAVES is always
available to the guest when supported on the host.
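(Background for why the guest's IA32_XSS value matters on both vendors: XSAVES
writes the state components selected by its EDX:EAX instruction mask, filtered
by the combined enables, i.e. RFBM = (XCR0 | IA32_XSS) & EDX:EAX. The snippet
below only illustrates that calculation with made-up values; it does not read
real hardware state.)

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t xcr0 = 0x7;		/* x87, SSE, AVX enabled via XSETBV */
	uint64_t ia32_xss = 1ull << 8;	/* PT state, the bit the patches mention */
	uint64_t insn_mask = ~0ull;	/* EDX:EAX operand: request everything */

	/* Components XSAVES would actually write to the save area. */
	uint64_t rfbm = (xcr0 | ia32_xss) & insn_mask;

	printf("XSAVES requested-feature bitmap: 0x%llx\n",
	       (unsigned long long)rfbm);
	return 0;
}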
Reviewed-by: Jim Mattson
Signed-off-by: Aaron Lewis
---
 arch/x86/kvm/svm.c     |  2 +-
 arch/x86/kvm/vmx/vmx.c | 20 --------------------
 arch/x86/kvm/x86.c     | 16 ++++++++++++++++
 3 files changed, 17 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index e2d7a7738c76..65223827c675 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5962,7 +5962,7 @@ static bool svm_mpx_supported(void)
 
 static bool svm_xsaves_supported(void)
 {
-	return false;
+	return boot_cpu_has(X86_FEATURE_XSAVES);
 }
 
 static bool svm_umip_emulated(void)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ff5ba28abecb..bd4ce33bd52f 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1818,13 +1818,6 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 		return vmx_get_vmx_msr(&vmx->nested.msrs, msr_info->index,
 				       &msr_info->data);
-	case MSR_IA32_XSS:
-		if (!vmx_xsaves_supported() ||
-		    (!msr_info->host_initiated &&
-		     !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES)))
-			return 1;
-		msr_info->data = vcpu->arch.ia32_xss;
-		break;
 	case MSR_IA32_RTIT_CTL:
 		if (pt_mode != PT_MODE_HOST_GUEST)
 			return 1;
@@ -2060,19 +2053,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (!nested_vmx_allowed(vcpu))
 			return 1;
 		return vmx_set_vmx_msr(vcpu, msr_index, data);
-	case MSR_IA32_XSS:
-		if (!vmx_xsaves_supported() ||
-		    (!msr_info->host_initiated &&
-		     !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES)))
-			return 1;
-		/*
-		 * The only supported bit as of Skylake is bit 8, but
-		 * it is not supported on KVM.
-		 */
-		if (data != 0)
-			return 1;
-		vcpu->arch.ia32_xss = data;
-		break;
 	case MSR_IA32_RTIT_CTL:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
 			vmx_rtit_ctl_check(vcpu, data) ||
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e90e658fd8a9..77f2e8c05047 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2702,6 +2702,15 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_TSC:
 		kvm_write_tsc(vcpu, msr_info);
 		break;
+	case MSR_IA32_XSS:
+		if (!kvm_x86_ops->xsaves_supported() ||
+		    (!msr_info->host_initiated &&
+		     !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES)))
+			return 1;
+		if (data != 0)
+			return 1;
+		vcpu->arch.ia32_xss = data;
+		break;
 	case MSR_SMI_COUNT:
 		if (!msr_info->host_initiated)
 			return 1;
@@ -3032,6 +3041,13 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_MC0_CTL ... MSR_IA32_MCx_CTL(KVM_MAX_MCE_BANKS) - 1:
 		return get_msr_mce(vcpu, msr_info->index, &msr_info->data,
 				   msr_info->host_initiated);
+	case MSR_IA32_XSS:
+		if (!kvm_x86_ops->xsaves_supported() ||
+		    (!msr_info->host_initiated &&
+		     !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES)))
+			return 1;
+		msr_info->data = vcpu->arch.ia32_xss;
+		break;
 	case MSR_K7_CLK_CTL:
 		/*
 		 * Provide expected ramp-up count for K7. All other
From patchwork Wed Oct 9 00:41:40 2019
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 11180203
Date: Tue, 8 Oct 2019 17:41:40 -0700
In-Reply-To: <20191009004142.225377-1-aaronlewis@google.com>
Message-Id: <20191009004142.225377-4-aaronlewis@google.com>
References: <20191009004142.225377-1-aaronlewis@google.com>
Subject: [Patch 4/6] kvm: svm: Enumerate XSAVES in guest CPUID when it is available to the guest
From: Aaron Lewis
To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm@vger.kernel.org
Cc: Paolo Bonzini, Aaron Lewis, Jim Mattson

Add the function guest_cpuid_set() to allow a bit in the guest CPUID to
be set. This is complementary to the existing guest_cpuid_clear()
function. Also, set the XSAVES bit in the guest CPUID if the host has
that bit set and the guest has the XSAVE bit set.
This ensures that XSAVES is enumerated in the guest CPUID whenever
XSAVES can actually be used in the guest.

Reviewed-by: Jim Mattson
Signed-off-by: Aaron Lewis
---
 arch/x86/kvm/cpuid.h | 9 +++++++++
 arch/x86/kvm/svm.c   | 4 ++++
 2 files changed, 13 insertions(+)

diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index d78a61408243..420ceea02fd1 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -113,6 +113,15 @@ static __always_inline void guest_cpuid_clear(struct kvm_vcpu *vcpu, unsigned x8
 	*reg &= ~bit(x86_feature);
 }
 
+static __always_inline void guest_cpuid_set(struct kvm_vcpu *vcpu, unsigned x86_feature)
+{
+	int *reg;
+
+	reg = guest_cpuid_get_register(vcpu, x86_feature);
+	if (reg)
+		*reg |= bit(x86_feature);
+}
+
 static inline bool guest_cpuid_is_amd(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpuid_entry2 *best;
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 65223827c675..2522a467bbc0 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5887,6 +5887,10 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
+	if (guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
+	    boot_cpu_has(X86_FEATURE_XSAVES))
+		guest_cpuid_set(vcpu, X86_FEATURE_XSAVES);
+
 	/* Update nrips enabled cache */
 	svm->nrips_enabled = !!guest_cpuid_has(&svm->vcpu, X86_FEATURE_NRIPS);
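(For readers unfamiliar with the helpers used in guest_cpuid_set() and
guest_cpuid_clear(): an X86_FEATURE_* constant encodes a 32-bit register "word"
index plus a bit position, and bit() builds the mask applied to the CPUID
register returned by guest_cpuid_get_register(). The standalone sketch below
mirrors that decomposition in simplified form; the real definitions live in the
kernel's cpufeatures.h and KVM's cpuid helpers.)

#include <stdint.h>
#include <stdio.h>

/* XSAVES is word 10 (CPUID.(EAX=0DH, ECX=1):EAX), bit 3. */
#define X86_FEATURE_XSAVES (10 * 32 + 3)

static inline int feature_word(int x86_feature)
{
	return x86_feature / 32;
}

static inline uint32_t feature_bit(int x86_feature)
{
	return 1u << (x86_feature & 31);
}

int main(void)
{
	printf("XSAVES: word %d, mask 0x%08x\n",
	       feature_word(X86_FEATURE_XSAVES),
	       feature_bit(X86_FEATURE_XSAVES));
	return 0;
}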
From patchwork Wed Oct 9 00:41:41 2019
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 11180205
Date: Tue, 8 Oct 2019 17:41:41 -0700
In-Reply-To: <20191009004142.225377-1-aaronlewis@google.com>
Message-Id: <20191009004142.225377-5-aaronlewis@google.com>
References: <20191009004142.225377-1-aaronlewis@google.com>
Subject: [Patch 5/6] kvm: x86: Add IA32_XSS to the emulated_msrs list
From: Aaron Lewis
To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm@vger.kernel.org
Cc: Paolo Bonzini, Aaron Lewis, Jim Mattson

Add IA32_XSS to the list of emulated MSRs if it is supported in the guest.

Reviewed-by: Jim Mattson
Signed-off-by: Aaron Lewis
---
 arch/x86/kvm/svm.c     | 12 +++++++-----
 arch/x86/kvm/vmx/vmx.c |  2 ++
 arch/x86/kvm/x86.c     |  1 +
 3 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 2522a467bbc0..8de6705ac30d 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -498,6 +498,11 @@ static inline bool avic_vcpu_is_running(struct kvm_vcpu *vcpu)
 	return (READ_ONCE(*entry) & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK);
 }
 
+static bool svm_xsaves_supported(void)
+{
+	return boot_cpu_has(X86_FEATURE_XSAVES);
+}
+
 static void recalc_intercepts(struct vcpu_svm *svm)
 {
 	struct vmcb_control_area *c, *h;
@@ -5871,6 +5876,8 @@ static bool svm_has_emulated_msr(int index)
 	case MSR_IA32_MCG_EXT_CTL:
 	case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
 		return false;
+	case MSR_IA32_XSS:
+		return svm_xsaves_supported();
 	default:
 		break;
 	}
@@ -5964,11 +5971,6 @@ static bool svm_mpx_supported(void)
 	return false;
 }
 
-static bool svm_xsaves_supported(void)
-{
-	return boot_cpu_has(X86_FEATURE_XSAVES);
-}
-
 static bool svm_umip_emulated(void)
 {
 	return false;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index bd4ce33bd52f..c28461385c2b 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6270,6 +6270,8 @@ static bool vmx_has_emulated_msr(int index)
 	case MSR_AMD64_VIRT_SPEC_CTRL:
 		/* This is AMD only. */
 		return false;
+	case MSR_IA32_XSS:
+		return vmx_xsaves_supported();
 	default:
 		return true;
 	}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 77f2e8c05047..243c6df12d81 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1229,6 +1229,7 @@ static u32 emulated_msrs[] = {
 	MSR_MISC_FEATURES_ENABLES,
 	MSR_AMD64_VIRT_SPEC_CTRL,
 	MSR_IA32_POWER_CTL,
+	MSR_IA32_XSS,
 
 	/*
 	 * The following list leaves out MSRs whose values are determined
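(With IA32_XSS in emulated_msrs, userspace, for example a live-migration tool,
can save and restore it through the usual MSR ioctls. Below is a minimal sketch
of reading it back with KVM_GET_MSRS; vcpu_fd is assumed to be an existing vCPU
file descriptor and error handling is kept to the bare minimum.)

#include <linux/kvm.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

#ifndef MSR_IA32_XSS
#define MSR_IA32_XSS 0xda0
#endif

/* Read IA32_XSS from a vCPU created with KVM_CREATE_VCPU. */
static int read_guest_xss(int vcpu_fd, uint64_t *value)
{
	struct {
		struct kvm_msrs header;
		struct kvm_msr_entry entry;
	} buffer;

	memset(&buffer, 0, sizeof(buffer));
	buffer.header.nmsrs = 1;
	buffer.entry.index = MSR_IA32_XSS;

	/* KVM_GET_MSRS returns the number of MSRs successfully read. */
	if (ioctl(vcpu_fd, KVM_GET_MSRS, &buffer.header) != 1)
		return -1;

	*value = buffer.entry.data;
	return 0;
}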
From patchwork Wed Oct 9 00:41:42 2019
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 11180207
Date: Tue, 8 Oct 2019 17:41:42 -0700
In-Reply-To: <20191009004142.225377-1-aaronlewis@google.com>
Message-Id: <20191009004142.225377-6-aaronlewis@google.com>
References: <20191009004142.225377-1-aaronlewis@google.com>
Subject: [Patch 6/6] kvm: tests: Add test to verify MSR_IA32_XSS
From: Aaron Lewis
To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm@vger.kernel.org
Cc: Paolo Bonzini, Aaron Lewis, Jim Mattson

Verify that calling get and set for MSR_IA32_XSS returns expected results.

Reviewed-by: Jim Mattson
Signed-off-by: Aaron Lewis
---
 tools/testing/selftests/kvm/.gitignore        |  1 +
 tools/testing/selftests/kvm/Makefile          |  1 +
 .../selftests/kvm/include/x86_64/processor.h  |  7 +-
 .../selftests/kvm/lib/x86_64/processor.c      | 69 ++++++++++++++++---
 .../selftests/kvm/x86_64/xss_msr_test.c       | 65 +++++++++++++++++
 5 files changed, 134 insertions(+), 9 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/x86_64/xss_msr_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index b35da375530a..6e9ec34f8124 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -11,6 +11,7 @@
 /x86_64/vmx_close_while_nested_test
 /x86_64/vmx_set_nested_state_test
 /x86_64/vmx_tsc_adjust_test
+/x86_64/xss_msr_test
 /clear_dirty_log_test
 /dirty_log_test
 /kvm_create_max_vcpus
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index c5ec868fa1e5..3138a916574a 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -25,6 +25,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/vmx_close_while_nested_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_dirty_log_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_set_nested_state_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_tsc_adjust_test
+TEST_GEN_PROGS_x86_64 += x86_64/xss_msr_test
 TEST_GEN_PROGS_x86_64 += clear_dirty_log_test
 TEST_GEN_PROGS_x86_64 += dirty_log_test
 TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index ff234018219c..1dc55eea756a 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -308,6 +308,8 @@ struct kvm_x86_state *vcpu_save_state(struct kvm_vm *vm, uint32_t vcpuid);
 void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid,
 		     struct kvm_x86_state *state);
 
+struct kvm_msr_list *kvm_get_msr_index_list(void);
+
 struct kvm_cpuid2 *kvm_get_supported_cpuid(void);
 void vcpu_set_cpuid(struct kvm_vm *vm, uint32_t vcpuid,
 		    struct kvm_cpuid2 *cpuid);
@@ -324,8 +326,11 @@ kvm_get_supported_cpuid_entry(uint32_t function)
 uint64_t vcpu_get_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index);
 void vcpu_set_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index,
 	uint64_t msr_value);
+void vcpu_set_msr_expect_result(struct kvm_vm *vm, uint32_t vcpuid,
+				uint64_t msr_index, uint64_t msr_value, int result);
 
-uint32_t kvm_get_cpuid_max(void);
+uint32_t kvm_get_cpuid_max_basic(void);
+uint32_t kvm_get_cpuid_max_extended(void);
 void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits);
 
 /*
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 6698cb741e10..425262e15afa 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -869,13 +869,14 @@ uint64_t vcpu_get_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index)
 	return buffer.entry.data;
 }
 
-/* VCPU Set MSR
+/* VCPU Set MSR Expect Result
  *
  * Input Args:
  *   vm - Virtual Machine
  *   vcpuid - VCPU ID
  *   msr_index - Index of MSR
  *   msr_value - New value of MSR
+ *   result - The expected result of KVM_SET_MSRS
  *
  * Output Args: None
 *
@@ -883,8 +884,9 @@ uint64_t vcpu_get_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index)
 *
 * Set value of MSR for VCPU.
 */
-void vcpu_set_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index,
-	uint64_t msr_value)
+void vcpu_set_msr_expect_result(struct kvm_vm *vm, uint32_t vcpuid,
+				uint64_t msr_index, uint64_t msr_value,
+				int result)
 {
 	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
 	struct {
@@ -899,10 +901,30 @@ void vcpu_set_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index,
 	buffer.entry.index = msr_index;
 	buffer.entry.data = msr_value;
 	r = ioctl(vcpu->fd, KVM_SET_MSRS, &buffer.header);
-	TEST_ASSERT(r == 1, "KVM_SET_MSRS IOCTL failed,\n"
+	TEST_ASSERT(r == result, "KVM_SET_MSRS IOCTL failed,\n"
 		"  rc: %i errno: %i", r, errno);
 }
 
+/* VCPU Set MSR
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   vcpuid - VCPU ID
+ *   msr_index - Index of MSR
+ *   msr_value - New value of MSR
+ *
+ * Output Args: None
+ *
+ * Return: On success, nothing. On failure a TEST_ASSERT is produced.
+ *
+ * Set value of MSR for VCPU.
+ */
+void vcpu_set_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index,
+	uint64_t msr_value)
+{
+	vcpu_set_msr_expect_result(vm, vcpuid, msr_index, msr_value, 1);
+}
+
 /* VM VCPU Args Set
  *
  * Input Args:
@@ -1000,19 +1022,45 @@ struct kvm_x86_state {
 	struct kvm_msrs msrs;
 };
 
-static int kvm_get_num_msrs(struct kvm_vm *vm)
+static int kvm_get_num_msrs_fd(int kvm_fd)
 {
 	struct kvm_msr_list nmsrs;
 	int r;
 
 	nmsrs.nmsrs = 0;
-	r = ioctl(vm->kvm_fd, KVM_GET_MSR_INDEX_LIST, &nmsrs);
+	r = ioctl(kvm_fd, KVM_GET_MSR_INDEX_LIST, &nmsrs);
 	TEST_ASSERT(r == -1 && errno == E2BIG,
 		"Unexpected result from KVM_GET_MSR_INDEX_LIST probe, r: %i",
 		r);
 
 	return nmsrs.nmsrs;
 }
 
+static int kvm_get_num_msrs(struct kvm_vm *vm)
+{
+	return kvm_get_num_msrs_fd(vm->kvm_fd);
+}
+
+struct kvm_msr_list *kvm_get_msr_index_list(void)
+{
+	struct kvm_msr_list *list;
+	int nmsrs, r, kvm_fd;
+
+	kvm_fd = open(KVM_DEV_PATH, O_RDONLY);
+	if (kvm_fd < 0)
+		exit(KSFT_SKIP);
+
+	nmsrs = kvm_get_num_msrs_fd(kvm_fd);
+	list = malloc(sizeof(*list) + nmsrs * sizeof(list->indices[0]));
+	list->nmsrs = nmsrs;
+	r = ioctl(kvm_fd, KVM_GET_MSR_INDEX_LIST, list);
+	close(kvm_fd);
+
+	TEST_ASSERT(r == 0, "Unexpected result from KVM_GET_MSR_INDEX_LIST, r: %i",
+		r);
+
+	return list;
+}
+
 struct kvm_x86_state *vcpu_save_state(struct kvm_vm *vm, uint32_t vcpuid)
 {
 	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
@@ -1158,7 +1206,12 @@ bool is_intel_cpu(void)
 	return (ebx == chunk[0] && edx == chunk[1] && ecx == chunk[2]);
 }
 
-uint32_t kvm_get_cpuid_max(void)
+uint32_t kvm_get_cpuid_max_basic(void)
+{
+	return kvm_get_supported_cpuid_entry(0)->eax;
+}
+
+uint32_t kvm_get_cpuid_max_extended(void)
 {
 	return kvm_get_supported_cpuid_entry(0x80000000)->eax;
 }
@@ -1169,7 +1222,7 @@ void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits)
 	bool pae;
 
 	/* SDM 4.1.4 */
-	if (kvm_get_cpuid_max() < 0x80000008) {
+	if (kvm_get_cpuid_max_extended() < 0x80000008) {
 		pae = kvm_get_supported_cpuid_entry(1)->edx & (1 << 6);
 		*pa_bits = pae ? 36 : 32;
 		*va_bits = 32;
diff --git a/tools/testing/selftests/kvm/x86_64/xss_msr_test.c b/tools/testing/selftests/kvm/x86_64/xss_msr_test.c
new file mode 100644
index 000000000000..47060eff06ce
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/xss_msr_test.c
@@ -0,0 +1,65 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019, Google LLC.
+ *
+ * Tests for the IA32_XSS MSR.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <sys/ioctl.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "vmx.h"
+
+#define VCPU_ID	1
+
+#define X86_FEATURE_XSAVES	(1<<3)
+
+int main(int argc, char *argv[])
+{
+	struct kvm_cpuid_entry2 *entry;
+	struct kvm_msr_list *list;
+	bool found_xss = false;
+	struct kvm_vm *vm;
+	uint64_t xss_val;
+	int i;
+
+	/* Create VM */
+	vm = vm_create_default(VCPU_ID, 0, 0);
+
+	list = kvm_get_msr_index_list();
+	for (i = 0; i < list->nmsrs; ++i) {
+		if (list->indices[i] == MSR_IA32_XSS) {
+			found_xss = true;
+			break;
+		}
+	}
+
+	if (kvm_get_cpuid_max_basic() < 0xd) {
+		printf("XSAVES is not supported by the CPU.\n");
+		exit(KSFT_SKIP);
+	}
+
+	entry = kvm_get_supported_cpuid_index(0xd, 1);
+	TEST_ASSERT(found_xss == !!(entry->eax & X86_FEATURE_XSAVES),
+		    "Support for IA32_XSS and support for XSAVES do not match.\n");
+
+	if (!found_xss) {
+		printf("IA32_XSS and XSAVES are not supported. Skipping test.\n");
+		exit(KSFT_SKIP);
+	}
+
+	xss_val = vcpu_get_msr(vm, VCPU_ID, MSR_IA32_XSS);
+	TEST_ASSERT(xss_val == 0,
+		    "MSR_IA32_XSS should always be zero\n");
+
+	vcpu_set_msr(vm, VCPU_ID, MSR_IA32_XSS, xss_val);
+	/*
+	 * At present, KVM only supports a guest IA32_XSS value of 0. Verify
+	 * that trying to set the guest IA32_XSS to an unsupported value fails.
+	 */
+	vcpu_set_msr_expect_result(vm, VCPU_ID, MSR_IA32_XSS, ~0ull, 0);
+
+	free(list);
+	kvm_vm_free(vm);
+}