From patchwork Thu Sep 5 12:21:54 2013
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 2854080
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, Gleb Natapov
Subject: [PATCH] KVM: x86: prevent setting unsupported XSAVE states
Date: Thu, 5 Sep 2013 14:21:54 +0200
Message-Id: <1378383714-9723-2-git-send-email-pbonzini@redhat.com>
X-Mailer: git-send-email 1.8.3.1
List-ID: <kvm.vger.kernel.org>

A guest can still attempt to save and restore XSAVE states even if they
have been masked in CPUID leaf 0Dh.  This is usually not visible to the
guest, but is still wrong: "Any attempt to set a reserved bit (as
determined by the contents of EAX and EDX after executing CPUID with
EAX=0DH, ECX=0H) in XCR0 for a given processor will result in a #GP
exception".

The patch also performs the same checks as __kvm_set_xcr in
KVM_SET_XSAVE.  This catches migration from a newer to an older
kernel/processor before the guest starts running.
Cc: kvm@vger.kernel.org
Cc: Gleb Natapov
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/cpuid.c |  2 +-
 arch/x86/kvm/x86.c   | 10 ++++++++--
 arch/x86/kvm/x86.h   |  1 +
 3 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index a20ecb5..d7c465d 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -182,7 +182,7 @@ static bool supported_xcr0_bit(unsigned bit)
 {
 	u64 mask = ((u64)1 << bit);
 
-	return mask & (XSTATE_FP | XSTATE_SSE | XSTATE_YMM) & host_xcr0;
+	return mask & KVM_SUPPORTED_XCR0 & host_xcr0;
 }
 
 #define F(x) bit(X86_FEATURE_##x)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3625798..801a882 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -586,6 +586,8 @@ int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
 		return 1;
 	if ((xcr0 & XSTATE_YMM) && !(xcr0 & XSTATE_SSE))
 		return 1;
+	if (xcr0 & ~KVM_SUPPORTED_XCR0)
+		return 1;
 	if (xcr0 & ~host_xcr0)
 		return 1;
 	kvm_put_guest_xcr0(vcpu);
@@ -2980,10 +2982,14 @@ static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
 	u64 xstate_bv =
 		*(u64 *)&guest_xsave->region[XSAVE_HDR_OFFSET / sizeof(u32)];
 
-	if (cpu_has_xsave)
+	if (cpu_has_xsave) {
+		if (xstate_bv & ~KVM_SUPPORTED_XCR0)
+			return -EINVAL;
+		if (xstate_bv & ~host_xcr0)
+			return -EINVAL;
 		memcpy(&vcpu->arch.guest_fpu.state->xsave,
 			guest_xsave->region, xstate_size);
-	else {
+	} else {
 		if (xstate_bv & ~XSTATE_FPSSE)
 			return -EINVAL;
 		memcpy(&vcpu->arch.guest_fpu.state->fxsave,
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index e224f7a..587fb9e 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -122,6 +122,7 @@ int kvm_write_guest_virt_system(struct x86_emulate_ctxt *ctxt,
 	gva_t addr, void *val, unsigned int bytes,
 	struct x86_exception *exception);
 
+#define KVM_SUPPORTED_XCR0	(XSTATE_FP | XSTATE_SSE | XSTATE_YMM)
 extern u64 host_xcr0;
 extern struct static_key kvm_no_apic_vcpu;