From patchwork Wed Aug  9 20:34:15 2017
X-Patchwork-Submitter: "Lan, Tianyu" <tianyu.lan@intel.com>
X-Patchwork-Id: 9892635
From: Lan Tianyu <tianyu.lan@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 9 Aug 2017 16:34:15 -0400
Message-Id: <1502310866-10450-15-git-send-email-tianyu.lan@intel.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1502310866-10450-1-git-send-email-tianyu.lan@intel.com>
References: <1502310866-10450-1-git-send-email-tianyu.lan@intel.com>
Cc: Lan Tianyu, kevin.tian@intel.com, wei.liu2@citrix.com, andrew.cooper3@citrix.com,
 ian.jackson@eu.citrix.com, julien.grall@arm.com, jbeulich@suse.com, Chao Gao
Subject: [Xen-devel] [PATCH V2 14/25] x86/vvtd: Process interrupt remapping request
List-Id: Xen developer discussion

From: Chao Gao

When a remapping interrupt request arrives, remapping hardware computes the
interrupt_index per the algorithm described in the VT-d spec section
"Interrupt Remapping Table", interprets the IRTE and generates a remapped
interrupt request.

This patch introduces viommu_handle_irq_request() to emulate how remapping
hardware handles a remapping interrupt request.

Signed-off-by: Chao Gao
Signed-off-by: Lan Tianyu
---
 xen/drivers/passthrough/vtd/iommu.h |  21 +++
 xen/drivers/passthrough/vtd/vtd.h   |   6 +
 xen/drivers/passthrough/vtd/vvtd.c  | 276 +++++++++++++++++++++++++++++++++++-
 3 files changed, 302 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
index 102b4f3..70e64cf 100644
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -244,6 +244,21 @@
 #define dma_frcd_source_id(c) (c & 0xffff)
 #define dma_frcd_page_addr(d) (d & (((u64)-1) << 12)) /* low 64 bit */
 
+enum VTD_FAULT_TYPE
+{
+    /* Interrupt remapping transition faults */
+    VTD_FR_IR_REQ_RSVD   = 0x20, /* One or more IR request reserved
+                                  * fields set */
+    VTD_FR_IR_INDEX_OVER = 0x21, /* Index value greater than max */
+    VTD_FR_IR_ENTRY_P    = 0x22, /* Present (P) not set in IRTE */
+    VTD_FR_IR_ROOT_INVAL = 0x23, /* IR Root table invalid */
+    VTD_FR_IR_IRTE_RSVD  = 0x24, /* IRTE Rsvd field non-zero with
+                                  * Present flag set */
+    VTD_FR_IR_REQ_COMPAT = 0x25, /* Encountered compatible IR
+                                  * request while disabled */
+    VTD_FR_IR_SID_ERR    = 0x26, /* Invalid Source-ID */
+};
+
 /*
  * 0: Present
  * 1-11: Reserved
@@ -384,6 +399,12 @@ struct iremap_entry {
 };
 
 /*
+ * When VT-d doesn't enable Extended Interrupt Mode, hardware interprets
+ * only 8 bits ([15:8]) of the Destination-ID field in the IRTEs.
+ */
+#define IRTE_xAPIC_DEST_MASK 0xff00
+
+/*
  * Posted-interrupt descriptor address is 64 bits with 64-byte aligned, only
  * the upper 26 bits of lest significiant 32 bits is available.
  */
diff --git a/xen/drivers/passthrough/vtd/vtd.h b/xen/drivers/passthrough/vtd/vtd.h
index bb8889f..1032b46 100644
--- a/xen/drivers/passthrough/vtd/vtd.h
+++ b/xen/drivers/passthrough/vtd/vtd.h
@@ -47,6 +47,8 @@ struct IO_APIC_route_remap_entry {
     };
 };
 
+#define IOAPIC_REMAP_ENTRY_INDEX(x) ((x.index_15 << 15) + x.index_0_14)
+
 struct msi_msg_remap_entry {
     union {
         u32 val;
@@ -65,4 +67,8 @@ struct msi_msg_remap_entry {
     u32 data;            /* msi message data */
 };
 
+#define MSI_REMAP_ENTRY_INDEX(x) ((x.address_lo.index_15 << 15) + \
+                                  x.address_lo.index_0_14 + \
+                                  (x.address_lo.SHV ? (uint16_t)x.data : 0))
+
 #endif // _VTD_H_
diff --git a/xen/drivers/passthrough/vtd/vvtd.c b/xen/drivers/passthrough/vtd/vvtd.c
index 8e8dbe6..2bee352 100644
--- a/xen/drivers/passthrough/vtd/vvtd.c
+++ b/xen/drivers/passthrough/vtd/vvtd.c
@@ -23,11 +23,16 @@
 #include
 #include
 #include
+#include
 #include
+#include
 #include
+#include
 #include
+#include
 #include "iommu.h"
+#include "vtd.h"
 
 struct hvm_hw_vvtd_regs {
     uint8_t data[1024];
@@ -38,6 +43,9 @@ struct hvm_hw_vvtd_regs {
 #define VIOMMU_STATUS_IRQ_REMAPPING_ENABLED (1 << 0)
 #define VIOMMU_STATUS_DMA_REMAPPING_ENABLED (1 << 1)
 
+#define vvtd_irq_remapping_enabled(vvtd) \
+    (vvtd->status & VIOMMU_STATUS_IRQ_REMAPPING_ENABLED)
+
 struct vvtd {
     /* VIOMMU_STATUS_XXX */
     int status;
@@ -120,6 +128,140 @@ static inline uint8_t vvtd_get_reg_byte(struct vvtd *vtd, uint32_t reg)
         vvtd_set_reg(vvtd, (reg) + 4, (val) >> 32); \
     } while(0)
 
+static int map_guest_page(struct domain *d, uint64_t gfn, void **virt)
+{
+    struct page_info *p;
+
+    p = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
+    if ( !p )
+        return -EINVAL;
+
+    if ( !get_page_type(p, PGT_writable_page) )
+    {
+        put_page(p);
+        return -EINVAL;
+    }
+
+    *virt = __map_domain_page_global(p);
+    if ( !*virt )
+    {
+        put_page_and_type(p);
+        return -ENOMEM;
+    }
+    return 0;
+}
+
+static void unmap_guest_page(void *virt)
+{
+    struct page_info *page;
+
+    if ( !virt )
+        return;
+
+    virt = (void *)((unsigned long)virt & PAGE_MASK);
+    page = mfn_to_page(domain_page_map_to_mfn(virt));
+
+    unmap_domain_page_global(virt);
+    put_page_and_type(page);
+}
+
+static void vvtd_inj_irq(
+    struct vlapic *target,
+    uint8_t vector,
+    uint8_t trig_mode,
+    uint8_t delivery_mode)
+{
+    VVTD_DEBUG(VVTD_DBG_INFO, "dest=v%d, delivery_mode=%x vector=%d "
+               "trig_mode=%d.",
+               vlapic_vcpu(target)->vcpu_id, delivery_mode,
+               vector, trig_mode);
+
+    ASSERT((delivery_mode == dest_Fixed) ||
+           (delivery_mode == dest_LowestPrio));
+
+    vlapic_set_irq(target, vector, trig_mode);
+}
+
+static int vvtd_delivery(
+    struct domain *d, int vector,
+    uint32_t dest, uint8_t dest_mode,
+    uint8_t delivery_mode, uint8_t trig_mode)
+{
+    struct vlapic *target;
+    struct vcpu *v;
+
+    switch ( delivery_mode )
+    {
+    case dest_LowestPrio:
+        target = vlapic_lowest_prio(d, NULL, 0, dest, dest_mode);
+        if ( target != NULL )
+        {
+            vvtd_inj_irq(target, vector, trig_mode, delivery_mode);
+            break;
+        }
+        VVTD_DEBUG(VVTD_DBG_INFO, "null round robin: vector=%02x\n", vector);
+        break;
+
+    case dest_Fixed:
+        for_each_vcpu ( d, v )
+            if ( vlapic_match_dest(vcpu_vlapic(v), NULL, 0, dest,
+                                   dest_mode) )
+                vvtd_inj_irq(vcpu_vlapic(v), vector,
+                             trig_mode, delivery_mode);
+        break;
+
+    case dest_NMI:
+        for_each_vcpu ( d, v )
+            if ( vlapic_match_dest(vcpu_vlapic(v), NULL, 0, dest, dest_mode)
+                 && !test_and_set_bool(v->nmi_pending) )
+                vcpu_kick(v);
+        break;
+
+    default:
+        printk(XENLOG_G_WARNING
+               "%pv: Unsupported VTD delivery mode %d for Dom%d\n",
+               current, delivery_mode, d->domain_id);
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+static uint32_t irq_remapping_request_index(struct irq_remapping_request *irq)
+{
+    if ( irq->type == VIOMMU_REQUEST_IRQ_MSI )
+    {
+        struct msi_msg_remap_entry msi_msg = { { irq->msg.msi.addr }, 0,
+                                               irq->msg.msi.data };
+
+        return MSI_REMAP_ENTRY_INDEX(msi_msg);
+    }
+    else if ( irq->type == VIOMMU_REQUEST_IRQ_APIC )
+    {
+        struct IO_APIC_route_remap_entry remap_rte = { { irq->msg.rte } };
+
+        return IOAPIC_REMAP_ENTRY_INDEX(remap_rte);
+    }
+    BUG();
+    return 0;
+}
+
+static inline uint32_t irte_dest(struct vvtd *vvtd, uint32_t dest)
+{
+    uint64_t irta;
+
+    vvtd_get_reg_quad(vvtd, DMAR_IRTA_REG, irta);
+    /* In xAPIC mode, only 8 bits ([15:8]) are valid */
+    return DMA_IRTA_EIME(irta) ? dest
+                               : MASK_EXTR(dest, IRTE_xAPIC_DEST_MASK);
+}
+
+static int vvtd_record_fault(struct vvtd *vvtd,
+                             struct irq_remapping_request *irq,
+                             int reason)
+{
+    return 0;
+}
+
 static int vvtd_handle_gcmd_sirtp(struct vvtd *vvtd, uint32_t val)
 {
     uint64_t irta;
@@ -259,6 +401,137 @@ static const struct hvm_mmio_ops vvtd_mmio_ops = {
     .write = vvtd_write
 };
 
+static bool ir_sid_valid(struct iremap_entry *irte, uint32_t source_id)
+{
+    return true;
+}
+
+/*
+ * 'record_fault' is a flag to indicate whether we need to record a fault
+ * and notify the guest when a fault happens during fetching the vIRTE.
+ */
+static int vvtd_get_entry(struct vvtd *vvtd,
+                          struct irq_remapping_request *irq,
+                          struct iremap_entry *dest,
+                          bool record_fault)
+{
+    int ret;
+    uint32_t entry = irq_remapping_request_index(irq);
+    struct iremap_entry *irte, *irt_page;
+
+    VVTD_DEBUG(VVTD_DBG_TRANS, "interpret a request with index %x", entry);
+
+    if ( entry > vvtd->irt_max_entry )
+    {
+        ret = VTD_FR_IR_INDEX_OVER;
+        goto handle_fault;
+    }
+
+    ret = map_guest_page(vvtd->domain, vvtd->irt + (entry >> IREMAP_ENTRY_ORDER),
+                         (void**)&irt_page);
+    if ( ret )
+    {
+        ret = VTD_FR_IR_ROOT_INVAL;
+        goto handle_fault;
+    }
+
+    irte = irt_page + (entry % (1 << IREMAP_ENTRY_ORDER));
+    dest->val = irte->val;
+    if ( !qinval_present(*irte) )
+    {
+        ret = VTD_FR_IR_ENTRY_P;
+        goto unmap_handle_fault;
+    }
+
+    /* Check reserved bits */
+    if ( (irte->remap.res_1 || irte->remap.res_2 || irte->remap.res_3 ||
+          irte->remap.res_4) )
+    {
+        ret = VTD_FR_IR_IRTE_RSVD;
+        goto unmap_handle_fault;
+    }
+
+    if ( !ir_sid_valid(irte, irq->source_id) )
+    {
+        ret = VTD_FR_IR_SID_ERR;
+        goto unmap_handle_fault;
+    }
+    unmap_guest_page(irt_page);
+    return 0;
+
+ unmap_handle_fault:
+    unmap_guest_page(irt_page);
+ handle_fault:
+    if ( !record_fault )
+        return ret;
+
+    switch ( ret )
+    {
+    case VTD_FR_IR_SID_ERR:
+    case VTD_FR_IR_IRTE_RSVD:
+    case VTD_FR_IR_ENTRY_P:
+        if ( qinval_fault_disable(*irte) )
+            break;
+        /* fall through */
+    case VTD_FR_IR_INDEX_OVER:
+    case VTD_FR_IR_ROOT_INVAL:
+        vvtd_record_fault(vvtd, irq, ret);
+        break;
+
+    default:
+        gdprintk(XENLOG_G_INFO, "Can't handle VT-d fault %x\n", ret);
+    }
+    return ret;
+}
+
+static int vvtd_irq_request_sanity_check(struct vvtd *vvtd,
+                                         struct irq_remapping_request *irq)
+{
+    if ( irq->type == VIOMMU_REQUEST_IRQ_APIC )
+    {
+        struct IO_APIC_route_remap_entry rte = { { irq->msg.rte } };
+
+        ASSERT(rte.format);
+        return (!rte.reserved) ? 0 : VTD_FR_IR_REQ_RSVD;
+    }
+    else if ( irq->type == VIOMMU_REQUEST_IRQ_MSI )
+    {
+        struct msi_msg_remap_entry msi_msg = { { irq->msg.msi.addr } };
+
+        ASSERT(msi_msg.address_lo.format);
+        return 0;
+    }
+    BUG();
+    return 0;
+}
+
+static int vvtd_handle_irq_request(struct domain *d,
+                                   struct irq_remapping_request *irq)
+{
+    struct iremap_entry irte;
+    int ret;
+    struct vvtd *vvtd = domain_vvtd(d);
+
+    if ( !vvtd || !vvtd_irq_remapping_enabled(vvtd) )
+        return -EINVAL;
+
+    ret = vvtd_irq_request_sanity_check(vvtd, irq);
+    if ( ret )
+    {
+        vvtd_record_fault(vvtd, irq, ret);
+        return ret;
+    }
+
+    if ( !vvtd_get_entry(vvtd, irq, &irte, true) )
+    {
+        vvtd_delivery(vvtd->domain, irte.remap.vector,
+                      irte_dest(vvtd, irte.remap.dst), irte.remap.dm,
+                      irte.remap.dlm, irte.remap.tm);
+        return 0;
+    }
+    return -EFAULT;
+}
+
 static void vvtd_reset(struct vvtd *vvtd, uint64_t capability)
 {
     uint64_t cap = DMA_CAP_NFR | DMA_CAP_SLLPS | DMA_CAP_FRO |
@@ -334,7 +607,8 @@ static int vvtd_destroy(struct viommu *viommu)
 struct viommu_ops vvtd_hvm_vmx_ops = {
     .query_caps = vvtd_query_caps,
     .create = vvtd_create,
-    .destroy = vvtd_destroy
+    .destroy = vvtd_destroy,
+    .handle_irq_request = vvtd_handle_irq_request
 };
 
 static int vvtd_register(void)