From patchwork Mon Feb 15 09:45:42 2010
X-Patchwork-Submitter: Jan Kiszka
X-Patchwork-Id: 79368
From: Jan Kiszka
To: Avi Kivity, Marcelo Tosatti
Cc: kvm
Subject: [PATCH 2/3] KVM: x86: Save&restore interrupt shadow mask
Date: Mon, 15 Feb 2010 10:45:42 +0100
Message-Id: <0bc4d8c46c0868cff70568bb5ee6df25162ddab6.1266227138.git.jan.kiszka@siemens.com>

diff --git a/Documentation/kvm/api.txt b/Documentation/kvm/api.txt
index c6416a3..8770b67 100644
--- a/Documentation/kvm/api.txt
+++ b/Documentation/kvm/api.txt
@@ -656,6 +656,7 @@ struct kvm_clock_data {
 4.29 KVM_GET_VCPU_EVENTS
 
 Capability: KVM_CAP_VCPU_EVENTS
+Extended by: KVM_CAP_INTR_SHADOW
 Architectures: x86
 Type: vm ioctl
 Parameters: struct kvm_vcpu_event (out)
@@ -676,7 +677,7 @@ struct kvm_vcpu_events {
 		__u8 injected;
 		__u8 nr;
 		__u8 soft;
-		__u8 pad;
+		__u8 shadow;
 	} interrupt;
 	struct {
 		__u8 injected;
@@ -688,9 +689,13 @@ struct kvm_vcpu_events {
 	__u32 flags;
 };
 
+KVM_VCPUEVENT_VALID_SHADOW may be set in the flags field to signal that
+interrupt.shadow contains a valid state. Otherwise, this field is undefined.
+
 4.30 KVM_SET_VCPU_EVENTS
 
 Capability: KVM_CAP_VCPU_EVENTS
+Extended by: KVM_CAP_INTR_SHADOW
 Architectures: x86
 Type: vm ioctl
 Parameters: struct kvm_vcpu_event (in)
@@ -709,6 +714,10 @@ current in-kernel state. The bits are:
 KVM_VCPUEVENT_VALID_NMI_PENDING - transfer nmi.pending to the kernel
 KVM_VCPUEVENT_VALID_SIPI_VECTOR - transfer sipi_vector
 
+If KVM_CAP_INTR_SHADOW is available, KVM_VCPUEVENT_VALID_SHADOW can be set in
+the flags field to signal that interrupt.shadow contains a valid state and
+shall be written into the VCPU.
+
 5. The kvm_run structure
diff --git a/arch/x86/include/asm/kvm.h b/arch/x86/include/asm/kvm.h
index f46b79f..dc6cd24 100644
--- a/arch/x86/include/asm/kvm.h
+++ b/arch/x86/include/asm/kvm.h
@@ -257,6 +257,7 @@ struct kvm_reinject_control {
 /* When set in flags, include corresponding fields on KVM_SET_VCPU_EVENTS */
 #define KVM_VCPUEVENT_VALID_NMI_PENDING 0x00000001
 #define KVM_VCPUEVENT_VALID_SIPI_VECTOR 0x00000002
+#define KVM_VCPUEVENT_VALID_SHADOW 0x00000004
 
 /* for KVM_GET/SET_VCPU_EVENTS */
 struct kvm_vcpu_events {
@@ -271,7 +272,7 @@ struct kvm_vcpu_events {
 		__u8 injected;
 		__u8 nr;
 		__u8 soft;
-		__u8 pad;
+		__u8 shadow;
 	} interrupt;
 	struct {
 		__u8 injected;
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index f82b072..0fa74d0 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -854,7 +854,7 @@ static void vmx_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask)
 	if (mask & X86_SHADOW_INT_MOV_SS)
 		interruptibility |= GUEST_INTR_STATE_MOV_SS;
-	if (mask & X86_SHADOW_INT_STI)
+	else if (mask & X86_SHADOW_INT_STI)
 		interruptibility |= GUEST_INTR_STATE_STI;
 
 	if ((interruptibility != interruptibility_old))
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 50d1d2a..60e6341 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2132,6 +2132,9 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
 		vcpu->arch.interrupt.pending && !vcpu->arch.interrupt.soft;
 	events->interrupt.nr = vcpu->arch.interrupt.nr;
 	events->interrupt.soft = 0;
+	events->interrupt.shadow =
+		!!kvm_x86_ops->get_interrupt_shadow(vcpu,
+			X86_SHADOW_INT_MOV_SS | X86_SHADOW_INT_STI);
 
 	events->nmi.injected = vcpu->arch.nmi_injected;
 	events->nmi.pending = vcpu->arch.nmi_pending;
@@ -2140,7 +2143,8 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
 	events->sipi_vector = vcpu->arch.sipi_vector;
 
 	events->flags = (KVM_VCPUEVENT_VALID_NMI_PENDING
-			 | KVM_VCPUEVENT_VALID_SIPI_VECTOR);
+			 | KVM_VCPUEVENT_VALID_SIPI_VECTOR
+			 | KVM_VCPUEVENT_VALID_SHADOW);
 
 	vcpu_put(vcpu);
 }
@@ -2149,7 +2153,8 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 					      struct kvm_vcpu_events *events)
 {
 	if (events->flags & ~(KVM_VCPUEVENT_VALID_NMI_PENDING
-			      | KVM_VCPUEVENT_VALID_SIPI_VECTOR))
+			      | KVM_VCPUEVENT_VALID_SIPI_VECTOR
+			      | KVM_VCPUEVENT_VALID_SHADOW))
 		return -EINVAL;
 
 	vcpu_load(vcpu);
@@ -2164,6 +2169,9 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 	vcpu->arch.interrupt.soft = events->interrupt.soft;
 	if (vcpu->arch.interrupt.pending && irqchip_in_kernel(vcpu->kvm))
 		kvm_pic_clear_isr_ack(vcpu->kvm);
+	if (events->flags & KVM_VCPUEVENT_VALID_SHADOW)
+		kvm_x86_ops->set_interrupt_shadow(vcpu,
+			events->interrupt.shadow ? X86_SHADOW_INT_MOV_SS : 0);
 
 	vcpu->arch.nmi_injected = events->nmi.injected;
 	if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING)
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index dfa54be..46fb860 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -501,6 +501,7 @@ struct kvm_ioeventfd {
 #define KVM_CAP_HYPERV_VAPIC 45
 #define KVM_CAP_HYPERV_SPIN 46
 #define KVM_CAP_PCI_SEGMENT 47
+#define KVM_CAP_INTR_SHADOW 48
 
 #ifdef KVM_CAP_IRQ_ROUTING
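
For reference, a rough sketch of how userspace could exercise the new flag
when saving and restoring vcpu state (e.g. across migration) follows. This is
not part of the patch; the helper name and the kvm_fd/vcpu_fd handles are made
up for illustration and assumed to come from the usual /dev/kvm and
KVM_CREATE_VCPU setup. Only the ioctls, struct fields, and flag added above
are relied on.

/*
 * Hypothetical userspace sketch: save the interrupt shadow on the source
 * and write it back on the destination via KVM_GET/SET_VCPU_EVENTS.
 */
#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

static int migrate_intr_shadow(int kvm_fd, int vcpu_fd)
{
        struct kvm_vcpu_events events;
        __u8 saved_shadow;

        /* interrupt.shadow is only meaningful with KVM_CAP_INTR_SHADOW. */
        if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_INTR_SHADOW) <= 0)
                return -1;

        memset(&events, 0, sizeof(events));
        if (ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, &events) < 0)
                return -1;

        /* The kernel marks the shadow field valid via the flags word. */
        saved_shadow = (events.flags & KVM_VCPUEVENT_VALID_SHADOW) ?
                       events.interrupt.shadow : 0;

        /* ... later, on the resume/destination path ... */
        events.interrupt.shadow = saved_shadow;
        events.flags |= KVM_VCPUEVENT_VALID_SHADOW;
        return ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &events);
}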