From patchwork Mon Mar 4 19:40:29 2013
From: Jan Kiszka
Date: Mon, 04 Mar 2013 20:40:29 +0100
Message-ID: <5134F8AD.8030607@web.de>
To: Gleb Natapov, Marcelo Tosatti
CC: kvm, Nadav Har'El
Subject: [PATCH v2] KVM: nVMX: Fix setting of CR0 and CR4 in guest mode

The logic for calculating the value with which we call kvm_set_cr0/4
was broken (will definitely be visible with nested unrestricted guest
mode support). Also, we performed the check regarding CR0_ALWAYSON too
early when in guest mode.

What really needs to be done on both CR0 and CR4 is to mask out
L1-owned bits and merge them in from GUEST_CR0/4. In contrast,
arch.cr0/4 and arch.cr0/4_guest_owned_bits contain the mangled L0+L1
state and, thus, are not suited as input. For both CRs, we can then
apply the check against VMXON_CRx_ALWAYSON and refuse the update if it
fails. To be fully consistent, we implement this check now also for
CR4. Finally, we have to set the shadow to the value L2 wanted to
write originally.

Signed-off-by: Jan Kiszka
---

Changes in v2:
 - keep the non-misleading part of the comment in handle_set_cr0

 arch/x86/kvm/vmx.c | 46 +++++++++++++++++++++++++++++++---------------
 1 files changed, 31 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 7cc566b..832b7b4 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -4605,37 +4605,53 @@ vmx_patch_hypercall(struct kvm_vcpu *vcpu, unsigned char *hypercall)
 /* called to set cr0 as appropriate for a mov-to-cr0 exit. */
 static int handle_set_cr0(struct kvm_vcpu *vcpu, unsigned long val)
 {
-	if (to_vmx(vcpu)->nested.vmxon &&
-	    ((val & VMXON_CR0_ALWAYSON) != VMXON_CR0_ALWAYSON))
-		return 1;
-
 	if (is_guest_mode(vcpu)) {
+		struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+		unsigned long orig_val = val;
+
 		/*
 		 * We get here when L2 changed cr0 in a way that did not change
 		 * any of L1's shadowed bits (see nested_vmx_exit_handled_cr),
-		 * but did change L0 shadowed bits. This can currently happen
-		 * with the TS bit: L0 may want to leave TS on (for lazy fpu
-		 * loading) while pretending to allow the guest to change it.
+		 * but did change L0 shadowed bits.
 		 */
-		if (kvm_set_cr0(vcpu, (val & vcpu->arch.cr0_guest_owned_bits) |
-			 (vcpu->arch.cr0 & ~vcpu->arch.cr0_guest_owned_bits)))
+		val = (val & ~vmcs12->cr0_guest_host_mask) |
+			(vmcs_read64(GUEST_CR0) & vmcs12->cr0_guest_host_mask);
+		if ((val & VMXON_CR0_ALWAYSON) != VMXON_CR0_ALWAYSON)
 			return 1;
-		vmcs_writel(CR0_READ_SHADOW, val);
+
+		if (kvm_set_cr0(vcpu, val))
+			return 1;
+		vmcs_writel(CR0_READ_SHADOW, orig_val);
 		return 0;
-	} else
+	} else {
+		if (to_vmx(vcpu)->nested.vmxon &&
+		    ((val & VMXON_CR0_ALWAYSON) != VMXON_CR0_ALWAYSON))
+			return 1;
 		return kvm_set_cr0(vcpu, val);
+	}
 }
 
 static int handle_set_cr4(struct kvm_vcpu *vcpu, unsigned long val)
 {
 	if (is_guest_mode(vcpu)) {
-		if (kvm_set_cr4(vcpu, (val & vcpu->arch.cr4_guest_owned_bits) |
-			 (vcpu->arch.cr4 & ~vcpu->arch.cr4_guest_owned_bits)))
+		struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+		unsigned long orig_val = val;
+
+		val = (val & ~vmcs12->cr4_guest_host_mask) |
+			(vmcs_readl(GUEST_CR4) & vmcs12->cr4_guest_host_mask);
+		if ((val & VMXON_CR4_ALWAYSON) != VMXON_CR4_ALWAYSON)
+			return 1;
+
+		if (kvm_set_cr4(vcpu, val))
 			return 1;
-		vmcs_writel(CR4_READ_SHADOW, val);
+		vmcs_writel(CR4_READ_SHADOW, orig_val);
 		return 0;
-	} else
+	} else {
+		if (to_vmx(vcpu)->nested.vmxon &&
+		    ((val & VMXON_CR4_ALWAYSON) != VMXON_CR4_ALWAYSON))
+			return 1;
 		return kvm_set_cr4(vcpu, val);
+	}
 }
 
 /* called to set cr0 as approriate for clts instruction exit. */
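
For readers less familiar with the guest/host-mask semantics, here is a
minimal, self-contained sketch (plain C, compilable on its own) of the merge
and ALWAYSON check the patch performs before calling kvm_set_cr0/4. All names
in it (merge_and_check_cr, guest_host_mask, always_on, ...) are illustrative
stand-ins for vmcs12->cr0/4_guest_host_mask, GUEST_CR0/4 and
VMXON_CRx_ALWAYSON; this is not the kernel code itself.

/*
 * Sketch only: merge a mov-to-CR value written by L2 with the bits
 * owned by L1, then verify the mandatory bits.
 *
 * A set bit in guest_host_mask means the bit is owned by the (L1)
 * hypervisor, so its effective value comes from the current guest CR
 * (GUEST_CR0/4 in the real code), not from what L2 wrote.
 */
#include <stdbool.h>
#include <stdio.h>

static bool merge_and_check_cr(unsigned long written_val,    /* value L2 tried to write */
			       unsigned long guest_cr,        /* current guest CR value */
			       unsigned long guest_host_mask, /* L1-owned bits */
			       unsigned long always_on,       /* bits that must stay set */
			       unsigned long *effective_val)
{
	/* Keep L2's value only where L2 owns the bit; take L1-owned
	 * bits from the current guest CR value. */
	unsigned long val = (written_val & ~guest_host_mask) |
			    (guest_cr & guest_host_mask);

	/* Refuse the update if a mandatory bit would end up cleared. */
	if ((val & always_on) != always_on)
		return false;

	*effective_val = val;
	return true;
}

int main(void)
{
	unsigned long effective;

	/* Example: L2 tries to clear bit 0 while L1 owns that bit and
	 * the current guest CR has it set; the merge keeps it set. */
	if (merge_and_check_cr(0x30 /* written */, 0x31 /* guest CR */,
			       0x1 /* mask */, 0x1 /* always on */, &effective))
		printf("effective CR value: %#lx\n", effective);
	return 0;
}

The point of the merge is that the value actually passed to kvm_set_cr0/4
takes L1-owned bits from GUEST_CR0/4 rather than from what L2 wrote, while the
read shadow is later written with the unmodified value (orig_val in the patch)
so that L2 reads back exactly what it wrote.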