From patchwork Fri May 15 02:52:04 2015
X-Patchwork-Submitter: Steve Rutherford
X-Patchwork-Id: 6410761
From: Steve Rutherford
To: kvm@vger.kernel.org
Subject: [RFC PATCH v2 3/4] KVM: x86: Add EOI exit bitmap inference
Date: Thu, 14 May 2015 19:52:04 -0700
Message-Id: <1431658325-22856-3-git-send-email-srutherford@google.com>
In-Reply-To: <1431658325-22856-1-git-send-email-srutherford@google.com>
References: <1431658325-22856-1-git-send-email-srutherford@google.com>
X-Mailing-List: kvm@vger.kernel.org

In order to support a userspace IOAPIC interacting with an in-kernel
APIC, the EOI exit bitmaps need to be configurable. If the IOAPIC is in
userspace (i.e. the irqchip has been split), the EOI exit bitmaps will
be set whenever the GSI routes are configured. In particular, the low
MSI routes are reservable for userspace IOAPICs. For these MSI routes,
the EOI exit bit corresponding to the destination vector of the route
will be set for the destination VCPU.

The intention is for the userspace IOAPICs to use the reservable MSI
routes to inject interrupts into the guest. This is a slight abuse of
the notion of an MSI route, given that MSIs classically bypass the
IOAPIC. It might be worthwhile to add an additional route type to
improve clarity.

Compile tested for Intel x86.

Signed-off-by: Steve Rutherford
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/ioapic.c           | 16 ++++++++++++++++
 arch/x86/kvm/ioapic.h           |  2 ++
 arch/x86/kvm/lapic.c            |  3 +--
 arch/x86/kvm/x86.c              | 30 ++++++++++++++++++++++--------
 include/linux/kvm_host.h        |  9 +++++++++
 virt/kvm/irqchip.c              | 37 +++++++++++++++++++++++++++++++++++++
 7 files changed, 88 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b1978f1..16a6187 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -643,6 +643,7 @@ struct kvm_arch {
 	u64 disabled_quirks;
 
 	bool irqchip_split;
+	u8 nr_reserved_ioapic_pins;
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/ioapic.c b/arch/x86/kvm/ioapic.c
index 856f791..fb5281b 100644
--- a/arch/x86/kvm/ioapic.c
+++ b/arch/x86/kvm/ioapic.c
@@ -672,3 +672,19 @@ int kvm_set_ioapic(struct kvm *kvm, struct kvm_ioapic_state *state)
 	spin_unlock(&ioapic->lock);
 	return 0;
 }
+
+void kvm_arch_irq_routing_update(struct kvm *kvm)
+{
+	struct kvm_ioapic *ioapic = kvm->arch.vioapic;
+
+	if (ioapic)
+		return;
+	if (!lapic_in_kernel(kvm))
+		return;
+	kvm_make_scan_ioapic_request(kvm);
+}
+
+u8 kvm_arch_nr_userspace_ioapic_pins(struct kvm *kvm)
+{
+	return kvm->arch.nr_reserved_ioapic_pins;
+}
diff --git a/arch/x86/kvm/ioapic.h b/arch/x86/kvm/ioapic.h
index ca0b0b4..3af349c 100644
--- a/arch/x86/kvm/ioapic.h
+++ b/arch/x86/kvm/ioapic.h
@@ -9,6 +9,7 @@ struct kvm;
 struct kvm_vcpu;
 
 #define IOAPIC_NUM_PINS  KVM_IOAPIC_NUM_PINS
+#define MAX_NR_RESERVED_IOAPIC_PINS 48
 #define IOAPIC_VERSION_ID 0x11	/* IOAPIC version */
 #define IOAPIC_EDGE_TRIG  0
 #define IOAPIC_LEVEL_TRIG 1
@@ -123,4 +124,5 @@
 int kvm_set_ioapic(struct kvm *kvm, struct kvm_ioapic_state *state);
 void kvm_ioapic_scan_entry(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap,
 			u32 *tmr);
+void kvm_scan_ioapic_routes(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
 #endif
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 42fada6f..befd54a 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -209,8 +209,7 @@ out:
 	if (old)
 		kfree_rcu(old, rcu);
 
-	if (!irqchip_split(kvm))
-		kvm_vcpu_request_scan_ioapic(kvm);
+	kvm_make_scan_ioapic_request(kvm);
 }
 
 static inline void apic_set_spiv(struct kvm_lapic *apic, u32 val)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e7377df..405d0d3 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3864,15 +3864,20 @@ static int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 	case KVM_CAP_SPLIT_IRQCHIP: {
 		mutex_lock(&kvm->lock);
 		r = -EEXIST;
-		if (lapic_in_kernel(kvm))
+		if (irqchip_in_kernel(kvm))
 			goto split_irqchip_unlock;
 		r = -EINVAL;
-		if (atomic_read(&kvm->online_vcpus))
-			goto split_irqchip_unlock;
-		r = kvm_setup_empty_irq_routing(kvm);
-		if (r)
+		if (cap->args[0] > MAX_NR_RESERVED_IOAPIC_PINS)
 			goto split_irqchip_unlock;
-		kvm->arch.irqchip_split = true;
+		if (!irqchip_split(kvm)) {
+			if (atomic_read(&kvm->online_vcpus))
+				goto split_irqchip_unlock;
+			r = kvm_setup_empty_irq_routing(kvm);
+			if (r)
+				goto split_irqchip_unlock;
+			kvm->arch.irqchip_split = true;
+		}
+		kvm->arch.nr_reserved_ioapic_pins = cap->args[0];
 		r = 0;
 split_irqchip_unlock:
 		mutex_unlock(&kvm->lock);
@@ -6335,8 +6340,17 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 			goto out;
 		}
 	}
-	if (kvm_check_request(KVM_REQ_SCAN_IOAPIC, vcpu))
-		vcpu_scan_ioapic(vcpu);
+	if (kvm_check_request(KVM_REQ_SCAN_IOAPIC, vcpu)) {
+		if (irqchip_split(vcpu->kvm)) {
+			memset(vcpu->arch.eoi_exit_bitmaps, 0, 32);
+			kvm_scan_ioapic_routes(
+				vcpu, vcpu->arch.eoi_exit_bitmaps);
+			kvm_x86_ops->load_eoi_exitmap(
+				vcpu, vcpu->arch.eoi_exit_bitmaps);
+
+		} else
+			vcpu_scan_ioapic(vcpu);
+	}
 	if (kvm_check_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu))
 		kvm_vcpu_reload_apic_access_page(vcpu);
 	}
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index cef20ad..93bd490 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -438,10 +438,19 @@ void vcpu_put(struct kvm_vcpu *vcpu);
 
 #ifdef __KVM_HAVE_IOAPIC
 void kvm_vcpu_request_scan_ioapic(struct kvm *kvm);
+void kvm_arch_irq_routing_update(struct kvm *kvm);
+u8 kvm_arch_nr_userspace_ioapic_pins(struct kvm *kvm);
 #else
 static inline void kvm_vcpu_request_scan_ioapic(struct kvm *kvm)
 {
 }
+static inline void kvm_arch_irq_routing_update(struct kvm *kvm)
+{
+}
+static inline u8 kvm_arch_nr_userspace_ioapic_pins(struct kvm *kvm)
+{
+	return 0;
+}
 #endif
 
 #ifdef CONFIG_HAVE_KVM_IRQFD
diff --git a/virt/kvm/irqchip.c b/virt/kvm/irqchip.c
index 8aaceed..208fdd3 100644
--- a/virt/kvm/irqchip.c
+++ b/virt/kvm/irqchip.c
@@ -203,6 +203,8 @@ int kvm_set_irq_routing(struct kvm *kvm,
 	kvm_irq_routing_update(kvm);
 	mutex_unlock(&kvm->irq_lock);
 
+	kvm_arch_irq_routing_update(kvm);
+
 	synchronize_srcu_expedited(&kvm->irq_srcu);
 
 	new = old;
@@ -212,3 +214,38 @@ out:
 	kfree(new);
 	return r;
 }
+
+void kvm_scan_ioapic_routes(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_kernel_irq_routing_entry *entry;
+	struct kvm_irq_routing_table *table;
+	u32 i, nr_ioapic_pins;
+	int idx;
+
+	/* kvm->irq_routing must be read after clearing
+	 * KVM_SCAN_IOAPIC. */
+	smp_mb();
+	idx = srcu_read_lock(&kvm->irq_srcu);
+	table = kvm->irq_routing;
+	nr_ioapic_pins = min_t(u32, table->nr_rt_entries,
+			       kvm_arch_nr_userspace_ioapic_pins(kvm));
+	for (i = 0; i < nr_ioapic_pins; ++i) {
+		hlist_for_each_entry(entry, &table->map[i], link) {
+			u32 dest_id, dest_mode;
+
+			if (entry->type != KVM_IRQ_ROUTING_MSI)
+				continue;
+			dest_id = (entry->msi.address_lo >> 12) & 0xff;
+			dest_mode = (entry->msi.address_lo >> 2) & 0x1;
+			if (kvm_apic_match_dest(vcpu, NULL, 0, dest_id,
+						dest_mode)) {
+				u32 vector = entry->msi.data & 0xff;
+
+				__set_bit(vector,
+					  (unsigned long *) eoi_exit_bitmap);
+			}
+		}
+	}
+	srcu_read_unlock(&kvm->irq_srcu, idx);
+}