From patchwork Wed Aug 9 20:34:22 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "lan,Tianyu"
X-Patchwork-Id: 9892637
From: Lan Tianyu
To: xen-devel@lists.xen.org
Date: Wed, 9 Aug 2017 16:34:22 -0400
Message-Id: <1502310866-10450-22-git-send-email-tianyu.lan@intel.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1502310866-10450-1-git-send-email-tianyu.lan@intel.com>
References: <1502310866-10450-1-git-send-email-tianyu.lan@intel.com>
Cc: Lan Tianyu, kevin.tian@intel.com, wei.liu2@citrix.com, andrew.cooper3@citrix.com, ian.jackson@eu.citrix.com, julien.grall@arm.com, jbeulich@suse.com,
    Chao Gao
Subject: [Xen-devel] [PATCH V2 21/25] tools/libxc: Add a new interface to bind remapping format msi with pirq
List-Id: Xen developer discussion <xen-devel.lists.xen.org>

From: Chao Gao

Introduce a new binding relationship between a remapping-format MSI and a
pirq, and provide a new interface to manage that relationship.

Signed-off-by: Chao Gao
Signed-off-by: Lan Tianyu
---
 tools/libxc/include/xenctrl.h |  17 ++++++
 tools/libxc/xc_domain.c       |  53 +++++++++++++++++
 xen/drivers/passthrough/io.c  | 135 +++++++++++++++++++++++++++++++++++-------
 xen/include/public/domctl.h   |   7 +++
 xen/include/xen/hvm/irq.h     |   7 +++
 5 files changed, 198 insertions(+), 21 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index dfaa9d5..b0a9437 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1720,6 +1720,15 @@ int xc_domain_ioport_mapping(xc_interface *xch,
                              uint32_t nr_ports,
                              uint32_t add_mapping);
 
+int xc_domain_update_msi_irq_remapping(
+    xc_interface *xch,
+    uint32_t domid,
+    uint32_t pirq,
+    uint32_t source_id,
+    uint32_t data,
+    uint64_t addr,
+    uint64_t gtable);
+
 int xc_domain_update_msi_irq(
     xc_interface *xch,
     uint32_t domid,
@@ -1734,6 +1743,14 @@ int xc_domain_unbind_msi_irq(xc_interface *xch,
                              uint32_t pirq,
                              uint32_t gflags);
 
+int xc_domain_unbind_msi_irq_remapping(
+    xc_interface *xch,
+    uint32_t domid,
+    uint32_t pirq,
+    uint32_t source_id,
+    uint32_t data,
+    uint64_t addr);
+
 int xc_domain_bind_pt_irq(xc_interface *xch,
                           uint32_t domid,
                           uint8_t machine_irq,
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 3bab4e8..4b6a510 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1702,8 +1702,34 @@ int xc_deassign_dt_device(
     return rc;
 }
 
+int xc_domain_update_msi_irq_remapping(
+    xc_interface *xch,
+    uint32_t domid,
+    uint32_t pirq,
+    uint32_t source_id,
+    uint32_t data,
+    uint64_t addr,
+    uint64_t gtable)
+{
+    int rc;
+    xen_domctl_bind_pt_irq_t *bind;
+
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_bind_pt_irq;
+    domctl.domain = (domid_t)domid;
+
+    bind = &(domctl.u.bind_pt_irq);
+    bind->irq_type = PT_IRQ_TYPE_MSI_IR;
+    bind->machine_irq = pirq;
+    bind->u.msi_ir.source_id = source_id;
+    bind->u.msi_ir.data = data;
+    bind->u.msi_ir.addr = addr;
+    bind->u.msi_ir.gtable = gtable;
+
+    rc = do_domctl(xch, &domctl);
+    return rc;
+}
 
 int xc_domain_update_msi_irq(
     xc_interface *xch,
@@ -1732,6 +1758,33 @@ int xc_domain_update_msi_irq(
     return rc;
 }
 
+int xc_domain_unbind_msi_irq_remapping(
+    xc_interface *xch,
+    uint32_t domid,
+    uint32_t pirq,
+    uint32_t source_id,
+    uint32_t data,
+    uint64_t addr)
+{
+    int rc;
+    xen_domctl_bind_pt_irq_t *bind;
+
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_unbind_pt_irq;
+    domctl.domain = (domid_t)domid;
+
+    bind = &(domctl.u.bind_pt_irq);
+    bind->irq_type = PT_IRQ_TYPE_MSI_IR;
+    bind->machine_irq = pirq;
+    bind->u.msi_ir.source_id = source_id;
+    bind->u.msi_ir.data = data;
+    bind->u.msi_ir.addr = addr;
+
+    rc = do_domctl(xch, &domctl);
+    return rc;
+}
+
 int xc_domain_unbind_msi_irq(
     xc_interface *xch,
     uint32_t domid,
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index 4d457f6..0510887 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -276,6 +276,92 @@ static struct vcpu *vector_hashing_dest(const struct domain *d,
     return dest;
 }
 
+static inline void set_hvm_gmsi_info(struct hvm_gmsi_info *msi,
+                                     xen_domctl_bind_pt_irq_t *pt_irq_bind)
+{
+    if ( pt_irq_bind->irq_type == PT_IRQ_TYPE_MSI )
+    {
+        msi->legacy.gvec = pt_irq_bind->u.msi.gvec;
+        msi->legacy.gflags = pt_irq_bind->u.msi.gflags;
+    }
+    else if ( pt_irq_bind->irq_type == PT_IRQ_TYPE_MSI_IR )
+    {
+        msi->intremap.source_id = pt_irq_bind->u.msi_ir.source_id;
+        msi->intremap.data = pt_irq_bind->u.msi_ir.data;
+        msi->intremap.addr = pt_irq_bind->u.msi_ir.addr;
+    }
+    else
+        BUG();
+}
+
+static inline void clear_hvm_gmsi_info(struct hvm_gmsi_info *msi, int irq_type)
+{
+    if ( irq_type == PT_IRQ_TYPE_MSI )
+    {
+        msi->legacy.gvec = 0;
+        msi->legacy.gflags = 0;
+    }
+    else if ( irq_type == PT_IRQ_TYPE_MSI_IR )
+    {
+        msi->intremap.source_id = 0;
+        msi->intremap.data = 0;
+        msi->intremap.addr = 0;
+    }
+    else
+        BUG();
+}
+
+static inline bool hvm_gmsi_info_need_update(struct hvm_gmsi_info *msi,
+                                         xen_domctl_bind_pt_irq_t *pt_irq_bind)
+{
+    if ( pt_irq_bind->irq_type == PT_IRQ_TYPE_MSI )
+        return ((msi->legacy.gvec != pt_irq_bind->u.msi.gvec) ||
+                (msi->legacy.gflags != pt_irq_bind->u.msi.gflags));
+    else if ( pt_irq_bind->irq_type == PT_IRQ_TYPE_MSI_IR )
+        return ((msi->intremap.source_id != pt_irq_bind->u.msi_ir.source_id) ||
+                (msi->intremap.data != pt_irq_bind->u.msi_ir.data) ||
+                (msi->intremap.addr != pt_irq_bind->u.msi_ir.addr));
+    BUG();
+    return 0;
+}
+
+static int pirq_dpci_2_msi_attr(struct domain *d,
+                                struct hvm_pirq_dpci *pirq_dpci, uint8_t *gvec,
+                                uint8_t *dest, uint8_t *dm, uint8_t *dlm)
+{
+    int rc = 0;
+
+    if ( pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI )
+    {
+        *gvec = pirq_dpci->gmsi.legacy.gvec;
+        *dest = pirq_dpci->gmsi.legacy.gflags & VMSI_DEST_ID_MASK;
+        *dm = !!(pirq_dpci->gmsi.legacy.gflags & VMSI_DM_MASK);
+        *dlm = (pirq_dpci->gmsi.legacy.gflags & VMSI_DELIV_MASK) >>
+                GFLAGS_SHIFT_DELIV_MODE;
+    }
+    else if ( pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI_IR )
+    {
+        struct irq_remapping_request request;
+        struct irq_remapping_info irq_info;
+
+        irq_request_msi_fill(&request, pirq_dpci->gmsi.intremap.source_id,
+                             pirq_dpci->gmsi.intremap.addr,
+                             pirq_dpci->gmsi.intremap.data);
+        /* Currently, only viommu 0 is supported */
+        rc = viommu_get_irq_info(d, 0, &request, &irq_info);
+        if ( !rc )
+        {
+            *gvec = irq_info.vector;
+            *dest = irq_info.dest;
+            *dm = irq_info.dest_mode;
+            *dlm = irq_info.delivery_mode;
+        }
+    }
+    else
+        BUG();
+
+    return rc;
+}
+
 int pt_irq_create_bind(
     struct domain *d, xen_domctl_bind_pt_irq_t *pt_irq_bind)
 {
@@ -339,17 +425,21 @@ int pt_irq_create_bind(
     switch ( pt_irq_bind->irq_type )
     {
     case PT_IRQ_TYPE_MSI:
+    case PT_IRQ_TYPE_MSI_IR:
     {
-        uint8_t dest, dest_mode, delivery_mode;
+        uint8_t dest = 0, dest_mode = 0, delivery_mode = 0, gvec;
         int dest_vcpu_id;
         const struct vcpu *vcpu;
+        bool ir = (pt_irq_bind->irq_type == PT_IRQ_TYPE_MSI_IR);
+        uint64_t gtable = ir ? pt_irq_bind->u.msi_ir.gtable :
+                          pt_irq_bind->u.msi.gtable;
 
         if ( !(pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) )
         {
             pirq_dpci->flags = HVM_IRQ_DPCI_MAPPED | HVM_IRQ_DPCI_MACH_MSI |
-                               HVM_IRQ_DPCI_GUEST_MSI;
-            pirq_dpci->gmsi.legacy.gvec = pt_irq_bind->u.msi.gvec;
-            pirq_dpci->gmsi.legacy.gflags = pt_irq_bind->u.msi.gflags;
+                               (ir ? HVM_IRQ_DPCI_GUEST_MSI_IR :
+                                HVM_IRQ_DPCI_GUEST_MSI);
+            set_hvm_gmsi_info(&pirq_dpci->gmsi, pt_irq_bind);
             /*
              * 'pt_irq_create_bind' can be called after 'pt_irq_destroy_bind'.
              * The 'pirq_cleanup_check' which would free the structure is only
@@ -364,9 +454,9 @@ int pt_irq_create_bind(
             pirq_dpci->dom = d;
             /* bind after hvm_irq_dpci is setup to avoid race with irq handler*/
             rc = pirq_guest_bind(d->vcpu[0], info, 0);
-            if ( rc == 0 && pt_irq_bind->u.msi.gtable )
+            if ( rc == 0 && gtable )
             {
-                rc = msixtbl_pt_register(d, info, pt_irq_bind->u.msi.gtable);
+                rc = msixtbl_pt_register(d, info, gtable);
                 if ( unlikely(rc) )
                 {
                     pirq_guest_unbind(d, info);
@@ -381,8 +471,7 @@ int pt_irq_create_bind(
             }
             if ( unlikely(rc) )
             {
-                pirq_dpci->gmsi.legacy.gflags = 0;
-                pirq_dpci->gmsi.legacy.gvec = 0;
+                clear_hvm_gmsi_info(&pirq_dpci->gmsi, pt_irq_bind->irq_type);
                 pirq_dpci->dom = NULL;
                 pirq_dpci->flags = 0;
                 pirq_cleanup_check(info, d);
@@ -392,7 +481,8 @@ int pt_irq_create_bind(
         }
         else
         {
-            uint32_t mask = HVM_IRQ_DPCI_MACH_MSI | HVM_IRQ_DPCI_GUEST_MSI;
+            uint32_t mask = HVM_IRQ_DPCI_MACH_MSI |
+                 (ir ? HVM_IRQ_DPCI_GUEST_MSI_IR : HVM_IRQ_DPCI_GUEST_MSI);
 
             if ( (pirq_dpci->flags & mask) != mask )
             {
@@ -401,29 +491,31 @@ int pt_irq_create_bind(
             }
 
             /* If pirq is already mapped as vmsi, update guest data/addr. */
-            if ( pirq_dpci->gmsi.legacy.gvec != pt_irq_bind->u.msi.gvec ||
-                 pirq_dpci->gmsi.legacy.gflags != pt_irq_bind->u.msi.gflags )
+            if ( hvm_gmsi_info_need_update(&pirq_dpci->gmsi, pt_irq_bind) )
             {
                 /* Directly clear pending EOIs before enabling new MSI info. */
                 pirq_guest_eoi(info);
 
-                pirq_dpci->gmsi.legacy.gvec = pt_irq_bind->u.msi.gvec;
-                pirq_dpci->gmsi.legacy.gflags = pt_irq_bind->u.msi.gflags;
+                set_hvm_gmsi_info(&pirq_dpci->gmsi, pt_irq_bind);
             }
         }
         /* Calculate dest_vcpu_id for MSI-type pirq migration. */
-        dest = pirq_dpci->gmsi.legacy.gflags & VMSI_DEST_ID_MASK;
-        dest_mode = !!(pirq_dpci->gmsi.legacy.gflags & VMSI_DM_MASK);
-        delivery_mode = (pirq_dpci->gmsi.legacy.gflags & VMSI_DELIV_MASK) >>
-                         GFLAGS_SHIFT_DELIV_MODE;
-
-        dest_vcpu_id = hvm_girq_dest_2_vcpu_id(d, dest, dest_mode);
+        rc = pirq_dpci_2_msi_attr(d, pirq_dpci, &gvec, &dest, &dest_mode,
+                                  &delivery_mode);
+        if ( unlikely(rc) )
+        {
+            spin_unlock(&d->event_lock);
+            return -EFAULT;
+        }
+        else
+            dest_vcpu_id = hvm_girq_dest_2_vcpu_id(d, dest, dest_mode);
         pirq_dpci->gmsi.dest_vcpu_id = dest_vcpu_id;
         spin_unlock(&d->event_lock);
 
         pirq_dpci->gmsi.posted = false;
         vcpu = (dest_vcpu_id >= 0) ? d->vcpu[dest_vcpu_id] : NULL;
-        if ( iommu_intpost )
+        /* Currently, don't use interrupt posting for guest's remapping MSIs */
+        if ( iommu_intpost && !ir )
         {
             if ( delivery_mode == dest_LowestPrio )
                 vcpu = vector_hashing_dest(d, dest, dest_mode,
@@ -435,7 +527,7 @@ int pt_irq_create_bind(
             hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);
 
         /* Use interrupt posting if it is supported. */
-        if ( iommu_intpost )
+        if ( iommu_intpost && !ir )
             pi_update_irte(vcpu ? &vcpu->arch.hvm_vmx.pi_desc : NULL,
                            info, pirq_dpci->gmsi.legacy.gvec);
 
@@ -627,6 +719,7 @@ int pt_irq_destroy_bind(
         }
         break;
     case PT_IRQ_TYPE_MSI:
+    case PT_IRQ_TYPE_MSI_IR:
         break;
     default:
         return -EOPNOTSUPP;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 4b10f26..1adf032 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -555,6 +555,7 @@ typedef enum pt_irq_type_e {
     PT_IRQ_TYPE_MSI,
     PT_IRQ_TYPE_MSI_TRANSLATE,
     PT_IRQ_TYPE_SPI,    /* ARM: valid range 32-1019 */
+    PT_IRQ_TYPE_MSI_IR,
 } pt_irq_type_t;
 struct xen_domctl_bind_pt_irq {
     uint32_t machine_irq;
@@ -575,6 +576,12 @@ struct xen_domctl_bind_pt_irq {
             uint64_aligned_t gtable;
         } msi;
         struct {
+            uint32_t source_id;
+            uint32_t data;
+            uint64_t addr;
+            uint64_aligned_t gtable;
+        } msi_ir;
+        struct {
             uint16_t spi;
         } spi;
     } u;
diff --git a/xen/include/xen/hvm/irq.h b/xen/include/xen/hvm/irq.h
index 5e736f8..884e092 100644
--- a/xen/include/xen/hvm/irq.h
+++ b/xen/include/xen/hvm/irq.h
@@ -41,6 +41,7 @@ struct dev_intx_gsi_link {
 #define _HVM_IRQ_DPCI_GUEST_PCI_SHIFT        4
 #define _HVM_IRQ_DPCI_GUEST_MSI_SHIFT        5
 #define _HVM_IRQ_DPCI_IDENTITY_GSI_SHIFT     6
+#define _HVM_IRQ_DPCI_GUEST_MSI_IR_SHIFT     7
 #define _HVM_IRQ_DPCI_TRANSLATE_SHIFT       15
 #define HVM_IRQ_DPCI_MACH_PCI        (1 << _HVM_IRQ_DPCI_MACH_PCI_SHIFT)
 #define HVM_IRQ_DPCI_MACH_MSI        (1 << _HVM_IRQ_DPCI_MACH_MSI_SHIFT)
@@ -49,6 +50,7 @@ struct dev_intx_gsi_link {
 #define HVM_IRQ_DPCI_GUEST_PCI       (1 << _HVM_IRQ_DPCI_GUEST_PCI_SHIFT)
 #define HVM_IRQ_DPCI_GUEST_MSI       (1 << _HVM_IRQ_DPCI_GUEST_MSI_SHIFT)
 #define HVM_IRQ_DPCI_IDENTITY_GSI    (1 << _HVM_IRQ_DPCI_IDENTITY_GSI_SHIFT)
+#define HVM_IRQ_DPCI_GUEST_MSI_IR    (1 << _HVM_IRQ_DPCI_GUEST_MSI_IR_SHIFT)
 #define HVM_IRQ_DPCI_TRANSLATE       (1 << _HVM_IRQ_DPCI_TRANSLATE_SHIFT)
 
 #define VMSI_DEST_ID_MASK 0xff
@@ -67,6 +69,11 @@ struct hvm_gmsi_info {
         uint32_t gvec;
         uint32_t gflags;
     } legacy;
+    struct {
+        uint32_t source_id;
+        uint32_t data;
+        uint64_t addr;
+    } intremap;
     };
     int dest_vcpu_id; /* -1 :multi-dest, non-negative: dest_vcpu_id */
     bool posted; /* directly deliver to guest via VT-d PI? */