From patchwork Mon May  2 12:55:35 2016
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 8992281
Message-Id: <57276A6702000078000E7A92@prv-mh.provo.novell.com>
Date: Mon, 02 May 2016 06:55:35 -0600
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper, Ian Jackson,
 Tim Deegan, Julien Grall
Subject: [Xen-devel] [PATCH] IOMMU/x86: per-domain control structure is not
 HVM-specific
List-Id: Xen developer discussion
... and hence should not live in the HVM part of the PV/HVM union. In
fact it's not even architecture specific (there already is a per-arch
extension type to it), so it gets moved out right to common struct
domain.

Signed-off-by: Jan Beulich <JBeulich@suse.com>
Reviewed-by: Andrew Cooper
Acked-by: Julien Grall
Reviewed-by: Wei Liu
---
Of course this is quite a large patch for 4.7, but I don't think we
should try to make this brokenness slightly more safe by e.g. adding a
suitable BUILD_BUG_ON() guaranteeing that the shared structure sits
above the end of the PV part of the union. That's at best something to
be considered for the stable trees, imo.
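[For context on the remark above: the guard being declined would look
roughly like the sketch below. It is a minimal, self-contained
illustration only -- the `pv_part`/`hvm_part`/`iommu_part` names and
sizes are made up, not Xen's real fields, and the local BUILD_BUG_ON()
stand-in is defined here just so the snippet compiles on its own.]

    #include <stddef.h>

    /* Illustrative stand-ins -- not the real Xen structures. */
    struct pv_part    { long pv_state[8]; };
    struct iommu_part { void *pgd; };              /* shared by PV and HVM... */
    struct hvm_part   { long hvm_state[16]; struct iommu_part iommu; };

    struct arch_domain {
        union {
            struct pv_part  pv;
            struct hvm_part hvm;                   /* ...but living here */
        } u;
    };

    /* Local stand-in for Xen's BUILD_BUG_ON(): compile error if cond holds. */
    #define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

    static inline void check_layout(void)
    {
        /* Only safe if the shared data sits wholly above the PV part, so
         * that PV code writing u.pv can never clobber it. */
        BUILD_BUG_ON(offsetof(struct arch_domain, u.hvm.iommu) <
                     offsetof(struct arch_domain, u.pv) +
                     sizeof(struct pv_part));
    }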
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -647,7 +647,7 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_ioport_mapping:
     {
-        struct hvm_iommu *hd;
+        struct domain_iommu *hd;
         unsigned int fgp = domctl->u.ioport_mapping.first_gport;
         unsigned int fmp = domctl->u.ioport_mapping.first_mport;
         unsigned int np = domctl->u.ioport_mapping.nr_ports;
@@ -673,7 +673,7 @@ long arch_do_domctl(
         if ( ret )
             break;
 
-        hd = domain_hvm_iommu(d);
+        hd = dom_iommu(d);
         if ( add )
         {
             printk(XENLOG_G_INFO
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -174,12 +174,12 @@ static bool_t dpci_portio_accept(const s
                                  const ioreq_t *p)
 {
     struct vcpu *curr = current;
-    struct hvm_iommu *hd = domain_hvm_iommu(curr->domain);
+    const struct domain_iommu *dio = dom_iommu(curr->domain);
     struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
     struct g2m_ioport *g2m_ioport;
     unsigned int start, end;
 
-    list_for_each_entry( g2m_ioport, &hd->arch.g2m_ioport_list, list )
+    list_for_each_entry( g2m_ioport, &dio->arch.g2m_ioport_list, list )
     {
         start = g2m_ioport->gport;
         end = start + g2m_ioport->np;
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -229,9 +229,10 @@ static void tboot_gen_domain_integrity(c
 
         if ( !is_idle_domain(d) )
         {
-            struct hvm_iommu *hd = domain_hvm_iommu(d);
-            update_iommu_mac(&ctx, hd->arch.pgd_maddr,
-                             agaw_to_level(hd->arch.agaw));
+            const struct domain_iommu *dio = dom_iommu(d);
+
+            update_iommu_mac(&ctx, dio->arch.pgd_maddr,
+                             agaw_to_level(dio->arch.agaw));
         }
     }
--- a/xen/drivers/passthrough/amd/iommu_cmd.c
+++ b/xen/drivers/passthrough/amd/iommu_cmd.c
@@ -18,7 +18,6 @@
  */
 
 #include
-#include <xen/hvm/iommu.h>
 #include
 #include "../ats.h"
--- a/xen/drivers/passthrough/amd/iommu_guest.c
+++ b/xen/drivers/passthrough/amd/iommu_guest.c
@@ -18,7 +18,6 @@
 
 #include
 #include
-#include <xen/hvm/iommu.h>
 #include
 #include
 
@@ -59,12 +58,12 @@ static uint16_t guest_bdf(struct domain
 
 static inline struct guest_iommu *domain_iommu(struct domain *d)
 {
-    return domain_hvm_iommu(d)->arch.g_iommu;
+    return dom_iommu(d)->arch.g_iommu;
 }
 
 static inline struct guest_iommu *vcpu_iommu(struct vcpu *v)
 {
-    return domain_hvm_iommu(v->domain)->arch.g_iommu;
+    return dom_iommu(v->domain)->arch.g_iommu;
 }
 
 static void guest_iommu_enable(struct guest_iommu *iommu)
@@ -885,7 +884,7 @@ static const struct hvm_mmio_ops iommu_m
 int guest_iommu_init(struct domain* d)
 {
     struct guest_iommu *iommu;
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct domain_iommu *hd = dom_iommu(d);
 
     if ( !is_hvm_domain(d) || !iommu_enabled || !iommuv2_enabled ||
          !has_viommu(d) )
@@ -924,5 +923,5 @@ void guest_iommu_destroy(struct domain *
     tasklet_kill(&iommu->cmd_buffer_tasklet);
     xfree(iommu);
 
-    domain_hvm_iommu(d)->arch.g_iommu = NULL;
+    dom_iommu(d)->arch.g_iommu = NULL;
 }
--- a/xen/drivers/passthrough/amd/iommu_intr.c
+++ b/xen/drivers/passthrough/amd/iommu_intr.c
@@ -18,7 +18,6 @@
 
 #include
 #include
-#include <xen/hvm/iommu.h>
 #include
 #include
 #include
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -21,7 +21,6 @@
 #include
 #include
 #include
-#include <xen/hvm/iommu.h>
 #include
 #include
 #include "../ats.h"
@@ -340,7 +339,7 @@ static int iommu_update_pde_count(struct
     unsigned long first_mfn;
     u64 *table, *pde, *ntable;
     u64 ntable_maddr, mask;
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct domain_iommu *hd = dom_iommu(d);
     bool_t ok = 0;
 
     ASSERT( spin_is_locked(&hd->arch.mapping_lock) && pt_mfn );
@@ -395,7 +394,7 @@ static int iommu_merge_pages(struct doma
     u64 *table, *pde, *ntable;
     u64 ntable_mfn;
     unsigned long first_mfn;
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct domain_iommu *hd = dom_iommu(d);
 
     ASSERT( spin_is_locked(&hd->arch.mapping_lock) && pt_mfn );
 
@@ -445,7 +444,7 @@ static int iommu_pde_from_gfn(struct dom
     unsigned long next_table_mfn;
     unsigned int level;
     struct page_info *table;
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    const struct domain_iommu *hd = dom_iommu(d);
 
     table = hd->arch.root_table;
     level = hd->arch.paging_mode;
@@ -554,7 +553,7 @@ static int update_paging_mode(struct dom
     struct page_info *old_root = NULL;
     void *new_root_vaddr;
     unsigned long old_root_mfn;
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct domain_iommu *hd = dom_iommu(d);
 
     if ( gfn == INVALID_MFN )
         return -EADDRNOTAVAIL;
@@ -637,7 +636,7 @@ int amd_iommu_map_page(struct domain *d,
                        unsigned int flags)
 {
     bool_t need_flush = 0;
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct domain_iommu *hd = dom_iommu(d);
     unsigned long pt_mfn[7];
     unsigned int merge_level;
 
@@ -717,7 +716,7 @@ out:
 int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
 {
     unsigned long pt_mfn[7];
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct domain_iommu *hd = dom_iommu(d);
 
     BUG_ON( !hd->arch.root_table );
 
@@ -787,7 +786,7 @@ int amd_iommu_reserve_domain_unity_map(s
 /* Share p2m table with iommu. */
 void amd_iommu_share_p2m(struct domain *d)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct domain_iommu *hd = dom_iommu(d);
     struct page_info *p2m_table;
     mfn_t pgd_mfn;
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -23,7 +23,6 @@
 #include
 #include
 #include
-#include <xen/hvm/iommu.h>
 #include
 #include
 #include "../ats.h"
@@ -117,8 +116,7 @@ static void amd_iommu_setup_domain_devic
     int req_id, valid = 1;
     int dte_i = 0;
     u8 bus = pdev->bus;
-
-    struct hvm_iommu *hd = domain_hvm_iommu(domain);
+    const struct domain_iommu *hd = dom_iommu(domain);
 
     BUG_ON( !hd->arch.root_table || !hd->arch.paging_mode ||
             !iommu->dev_table.buffer );
@@ -224,7 +222,7 @@ int __init amd_iov_detect(void)
     return scan_pci_devices();
 }
 
-static int allocate_domain_resources(struct hvm_iommu *hd)
+static int allocate_domain_resources(struct domain_iommu *hd)
 {
     /* allocate root table */
     spin_lock(&hd->arch.mapping_lock);
@@ -259,7 +257,7 @@ static int get_paging_mode(unsigned long
 
 static int amd_iommu_domain_init(struct domain *d)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct domain_iommu *hd = dom_iommu(d);
 
     /* allocate page directroy */
     if ( allocate_domain_resources(hd) != 0 )
@@ -341,7 +339,7 @@ void amd_iommu_disable_domain_device(str
         AMD_IOMMU_DEBUG("Disable: device id = %#x, "
                         "domain = %d, paging mode = %d\n",
                         req_id, domain->domain_id,
-                        domain_hvm_iommu(domain)->arch.paging_mode);
+                        dom_iommu(domain)->arch.paging_mode);
     }
     spin_unlock_irqrestore(&iommu->lock, flags);
 
@@ -358,7 +356,7 @@ static int reassign_device(struct domain
 {
     struct amd_iommu *iommu;
     int bdf;
-    struct hvm_iommu *t = domain_hvm_iommu(target);
+    struct domain_iommu *t = dom_iommu(target);
 
     bdf = PCI_BDF2(pdev->bus, pdev->devfn);
     iommu = find_iommu_for_device(pdev->seg, bdf);
@@ -459,7 +457,7 @@ static void deallocate_page_table(struct
 
 static void deallocate_iommu_page_tables(struct domain *d)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct domain_iommu *hd = dom_iommu(d);
 
     if ( iommu_use_hap_pt(d) )
         return;
@@ -599,7 +597,7 @@ static void amd_dump_p2m_table_level(str
 
 static void amd_dump_p2m_table(struct domain *d)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    const struct domain_iommu *hd = dom_iommu(d);
 
     if ( !hd->arch.root_table )
         return;
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2542,7 +2542,7 @@ static u32 platform_features = ARM_SMMU_
 
 static void arm_smmu_iotlb_flush_all(struct domain *d)
 {
-        struct arm_smmu_xen_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+        struct arm_smmu_xen_domain *smmu_domain = dom_iommu(d)->arch.priv;
         struct iommu_domain *cfg;
 
         spin_lock(&smmu_domain->lock);
@@ -2573,7 +2573,7 @@ static struct iommu_domain *arm_smmu_get
         struct arm_smmu_xen_domain *xen_domain;
         struct arm_smmu_device *smmu;
 
-        xen_domain = domain_hvm_iommu(d)->arch.priv;
+        xen_domain = dom_iommu(d)->arch.priv;
 
         smmu = find_smmu_for_device(dev);
         if (!smmu)
@@ -2606,7 +2606,7 @@ static int arm_smmu_assign_dev(struct do
         struct arm_smmu_xen_domain *xen_domain;
         int ret = 0;
 
-        xen_domain = domain_hvm_iommu(d)->arch.priv;
+        xen_domain = dom_iommu(d)->arch.priv;
 
         if (!dev->archdata.iommu) {
                 dev->archdata.iommu = xzalloc(struct arm_smmu_xen_device);
@@ -2667,7 +2667,7 @@ static int arm_smmu_deassign_dev(struct
         struct iommu_domain *domain = dev_iommu_domain(dev);
         struct arm_smmu_xen_domain *xen_domain;
 
-        xen_domain = domain_hvm_iommu(d)->arch.priv;
+        xen_domain = dom_iommu(d)->arch.priv;
 
         if (!domain || domain->priv->cfg.domain != d) {
                 dev_err(dev, " not attached to domain %d\n", d->domain_id);
@@ -2724,7 +2724,7 @@ static int arm_smmu_iommu_domain_init(st
         spin_lock_init(&xen_domain->lock);
         INIT_LIST_HEAD(&xen_domain->contexts);
 
-        domain_hvm_iommu(d)->arch.priv = xen_domain;
+        dom_iommu(d)->arch.priv = xen_domain;
 
         /* Coherent walk can be enabled only when all SMMUs support it. */
         if (platform_features & ARM_SMMU_FEAT_COHERENT_WALK)
@@ -2739,7 +2739,7 @@ static void __hwdom_init arm_smmu_iommu_
 
 static void arm_smmu_iommu_domain_teardown(struct domain *d)
 {
-        struct arm_smmu_xen_domain *xen_domain = domain_hvm_iommu(d)->arch.priv;
+        struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
 
         ASSERT(list_empty(&xen_domain->contexts));
         xfree(xen_domain);
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -27,7 +27,7 @@ static spinlock_t dtdevs_lock = SPIN_LOC
 int iommu_assign_dt_device(struct domain *d, struct dt_device_node *dev)
 {
     int rc = -EBUSY;
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct domain_iommu *hd = dom_iommu(d);
 
     if ( !iommu_enabled || !hd->platform_ops )
         return -EINVAL;
@@ -69,7 +69,7 @@ fail:
 
 int iommu_deassign_dt_device(struct domain *d, struct dt_device_node *dev)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    const struct domain_iommu *hd = dom_iommu(d);
     int rc;
 
     if ( !iommu_enabled || !hd->platform_ops )
@@ -109,16 +109,14 @@ static bool_t iommu_dt_device_is_assigne
 
 int iommu_dt_domain_init(struct domain *d)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-
-    INIT_LIST_HEAD(&hd->dt_devices);
+    INIT_LIST_HEAD(&dom_iommu(d)->dt_devices);
 
     return 0;
 }
 
 int iommu_release_dt_devices(struct domain *d)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    const struct domain_iommu *hd = dom_iommu(d);
     struct dt_device_node *dev, *_dev;
     int rc;
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -22,7 +22,6 @@
 #include
 #include
 #include
-#include <xen/hvm/iommu.h>
 #include
 #include
 #include
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -14,7 +14,6 @@
 
 #include
 #include
-#include <xen/hvm/iommu.h>
 #include
 #include
 #include
@@ -129,7 +128,7 @@ static void __init parse_iommu_param(cha
 
 int iommu_domain_init(struct domain *d)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct domain_iommu *hd = dom_iommu(d);
     int ret = 0;
 
     ret = arch_iommu_domain_init(d);
@@ -159,7 +158,7 @@ static void __hwdom_init check_hwdom_req
 
 void __hwdom_init iommu_hwdom_init(struct domain *d)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    const struct domain_iommu *hd = dom_iommu(d);
 
     check_hwdom_reqs(d);
 
@@ -193,7 +192,7 @@ void __hwdom_init iommu_hwdom_init(struc
 
 void iommu_teardown(struct domain *d)
 {
-    const struct hvm_iommu *hd = domain_hvm_iommu(d);
+    const struct domain_iommu *hd = dom_iommu(d);
 
     d->need_iommu = 0;
     hd->platform_ops->teardown(d);
@@ -228,9 +227,7 @@ int iommu_construct(struct domain *d)
 
 void iommu_domain_destroy(struct domain *d)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-
-    if ( !iommu_enabled || !hd->platform_ops )
+    if ( !iommu_enabled || !dom_iommu(d)->platform_ops )
         return;
 
     if ( need_iommu(d) )
@@ -242,7 +239,7 @@ void iommu_domain_destroy(struct domain
 int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                    unsigned int flags)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    const struct domain_iommu *hd = dom_iommu(d);
 
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
@@ -252,7 +249,7 @@ int iommu_map_page(struct domain *d, uns
 
 int iommu_unmap_page(struct domain *d, unsigned long gfn)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    const struct domain_iommu *hd = dom_iommu(d);
 
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
@@ -279,7 +276,7 @@ static void iommu_free_pagetables(unsign
 void iommu_iotlb_flush(struct domain *d, unsigned long gfn,
                        unsigned int page_count)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    const struct domain_iommu *hd = dom_iommu(d);
 
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
         return;
@@ -289,7 +286,7 @@ void iommu_iotlb_flush(struct domain *d,
 
 void iommu_iotlb_flush_all(struct domain *d)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    const struct domain_iommu *hd = dom_iommu(d);
 
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush_all )
         return;
@@ -403,12 +400,10 @@ int iommu_get_reserved_device_memory(iom
 
 bool_t iommu_has_feature(struct domain *d, enum iommu_feature feature)
 {
-    const struct hvm_iommu *hd = domain_hvm_iommu(d);
-
     if ( !iommu_enabled )
         return 0;
 
-    return test_bit(feature, hd->features);
+    return test_bit(feature, dom_iommu(d)->features);
 }
 
 static void iommu_dump_p2m_table(unsigned char key)
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -21,7 +21,6 @@
 #include
 #include
 #include
-#include <xen/hvm/iommu.h>
 #include
 #include
 #include
@@ -1256,7 +1255,7 @@ void iommu_read_msi_from_ire(
 
 int iommu_add_device(struct pci_dev *pdev)
 {
-    struct hvm_iommu *hd;
+    const struct domain_iommu *hd;
     int rc;
     u8 devfn;
 
@@ -1265,7 +1264,7 @@ int iommu_add_device(struct pci_dev *pde
 
     ASSERT(pcidevs_locked());
 
-    hd = domain_hvm_iommu(pdev->domain);
+    hd = dom_iommu(pdev->domain);
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
@@ -1287,14 +1286,14 @@ int iommu_add_device(struct pci_dev *pde
 
 int iommu_enable_device(struct pci_dev *pdev)
 {
-    struct hvm_iommu *hd;
+    const struct domain_iommu *hd;
 
     if ( !pdev->domain )
         return -EINVAL;
 
     ASSERT(pcidevs_locked());
 
-    hd = domain_hvm_iommu(pdev->domain);
+    hd = dom_iommu(pdev->domain);
     if ( !iommu_enabled || !hd->platform_ops ||
          !hd->platform_ops->enable_device )
         return 0;
@@ -1304,13 +1303,13 @@ int iommu_enable_device(struct pci_dev *
 
 int iommu_remove_device(struct pci_dev *pdev)
 {
-    struct hvm_iommu *hd;
+    const struct domain_iommu *hd;
     u8 devfn;
 
     if ( !pdev->domain )
         return -EINVAL;
 
-    hd = domain_hvm_iommu(pdev->domain);
+    hd = dom_iommu(pdev->domain);
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
@@ -1350,7 +1349,7 @@ static int device_assigned(u16 seg, u8 b
 static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    const struct domain_iommu *hd = dom_iommu(d);
     struct pci_dev *pdev;
     int rc = 0;
 
@@ -1410,7 +1409,7 @@ static int assign_device(struct domain *
 /* caller should hold the pcidevs_lock */
 int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    const struct domain_iommu *hd = dom_iommu(d);
     struct pci_dev *pdev = NULL;
     int ret = 0;
 
@@ -1460,7 +1459,7 @@ static int iommu_get_device_group(
     struct domain *d, u16 seg, u8 bus, u8 devfn, XEN_GUEST_HANDLE_64(uint32) buf,
     int max_sdevs)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    const struct domain_iommu *hd = dom_iommu(d);
     struct pci_dev *pdev;
     int group_id, sdev_id;
     u32 bdf;
--- a/xen/drivers/passthrough/vtd/intremap.c
+++ b/xen/drivers/passthrough/vtd/intremap.c
@@ -20,7 +20,6 @@
 #include
 #include
 #include
-#include <xen/hvm/iommu.h>
 #include
 #include
 #include
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -24,7 +24,6 @@
 #include
 #include
 #include
-#include <xen/hvm/iommu.h>
 #include
 #include
 #include
@@ -253,7 +252,7 @@ static u64 addr_to_dma_page_maddr(struct
 {
     struct acpi_drhd_unit *drhd;
     struct pci_dev *pdev;
-    struct hvm_iommu *hd = domain_hvm_iommu(domain);
+    struct domain_iommu *hd = dom_iommu(domain);
     int addr_width = agaw_to_width(hd->arch.agaw);
     struct dma_pte *parent, *pte = NULL;
     int level = agaw_to_level(hd->arch.agaw);
@@ -561,7 +560,7 @@ static void iommu_flush_all(void)
 static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
         int dma_old_pte_present, unsigned int page_count)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct domain_iommu *hd = dom_iommu(d);
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
     int flush_dev_iotlb;
@@ -612,7 +611,7 @@ static void intel_iommu_iotlb_flush_all(
 /* clear one page's page table */
 static void dma_pte_clear_one(struct domain *domain, u64 addr)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(domain);
+    struct domain_iommu *hd = dom_iommu(domain);
     struct dma_pte *page = NULL, *pte = NULL;
     u64 pg_maddr;
 
@@ -1240,9 +1239,7 @@ void __init iommu_free(struct acpi_drhd_
 
 static int intel_iommu_domain_init(struct domain *d)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-
-    hd->arch.agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
+    dom_iommu(d)->arch.agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
 
     return 0;
 }
@@ -1276,7 +1273,7 @@ int domain_context_mapping_one(
     struct iommu *iommu,
     u8 bus, u8 devfn, const struct pci_dev *pdev)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(domain);
+    struct domain_iommu *hd = dom_iommu(domain);
     struct context_entry *context, *context_entries;
     u64 maddr, pgd_maddr;
     u16 seg = iommu->intel->drhd->segment;
@@ -1646,10 +1643,9 @@ static int domain_context_unmap(
 
     if ( found == 0 )
     {
-        struct hvm_iommu *hd = domain_hvm_iommu(domain);
        int iommu_domid;
 
-        clear_bit(iommu->index, &hd->arch.iommu_bitmap);
+        clear_bit(iommu->index, &dom_iommu(domain)->arch.iommu_bitmap);
 
        iommu_domid = domain_iommu_domid(domain, iommu);
        if ( iommu_domid == -1 )
@@ -1668,7 +1664,7 @@ out:
 
 static void iommu_domain_teardown(struct domain *d)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct domain_iommu *hd = dom_iommu(d);
     struct mapped_rmrr *mrmrr, *tmp;
 
     if ( list_empty(&acpi_drhd_units) )
@@ -1693,7 +1689,7 @@ static int intel_iommu_map_page(
     struct domain *d, unsigned long gfn, unsigned long mfn,
     unsigned int flags)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct domain_iommu *hd = dom_iommu(d);
     struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
     u64 pg_maddr;
 
@@ -1759,7 +1755,7 @@ void iommu_pte_flush(struct domain *d, u
 {
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu = NULL;
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct domain_iommu *hd = dom_iommu(d);
     int flush_dev_iotlb;
     int iommu_domid;
 
@@ -1800,11 +1796,11 @@ static int __init vtd_ept_page_compatibl
  */
 static void iommu_set_pgd(struct domain *d)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
     mfn_t pgd_mfn;
 
     pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-    hd->arch.pgd_maddr = pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
+    dom_iommu(d)->arch.pgd_maddr =
+        pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
 }
 
 static int rmrr_identity_mapping(struct domain *d, bool_t map,
@@ -1814,7 +1810,7 @@ static int rmrr_identity_mapping(struct
     unsigned long base_pfn = rmrr->base_address >> PAGE_SHIFT_4K;
     unsigned long end_pfn = PAGE_ALIGN_4K(rmrr->end_address) >> PAGE_SHIFT_4K;
     struct mapped_rmrr *mrmrr;
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct domain_iommu *hd = dom_iommu(d);
 
     ASSERT(pcidevs_locked());
     ASSERT(rmrr->base_address < rmrr->end_address);
@@ -2525,12 +2521,12 @@ static void vtd_dump_p2m_table_level(pad
 
 static void vtd_dump_p2m_table(struct domain *d)
 {
-    struct hvm_iommu *hd;
+    const struct domain_iommu *hd;
 
     if ( list_empty(&acpi_drhd_units) )
         return;
 
-    hd = domain_hvm_iommu(d);
+    hd = dom_iommu(d);
     printk("p2m table has %d levels\n", agaw_to_level(hd->arch.agaw));
     vtd_dump_p2m_table_level(hd->arch.pgd_maddr, agaw_to_level(hd->arch.agaw), 0, 0);
 }
--- a/xen/drivers/passthrough/vtd/quirks.c
+++ b/xen/drivers/passthrough/vtd/quirks.c
@@ -21,7 +21,6 @@
 #include
 #include
 #include
-#include <xen/hvm/iommu.h>
 #include
 #include
 #include
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -41,7 +41,7 @@ int __init iommu_setup_hpet_msi(struct m
 
 int arch_iommu_populate_page_table(struct domain *d)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    const struct domain_iommu *hd = dom_iommu(d);
     struct page_info *page;
     int rc = 0, n = 0;
 
@@ -119,7 +119,7 @@ void __hwdom_init arch_iommu_check_autot
 
 int arch_iommu_domain_init(struct domain *d)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct domain_iommu *hd = dom_iommu(d);
 
     spin_lock_init(&hd->arch.mapping_lock);
     INIT_LIST_HEAD(&hd->arch.g2m_ioport_list);
@@ -130,7 +130,7 @@ int arch_iommu_domain_init(struct domain
 
 void arch_iommu_domain_destroy(struct domain *d)
 {
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    const struct domain_iommu *hd = dom_iommu(d);
     struct list_head *ioport_list, *tmp;
     struct g2m_ioport *ioport;
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -11,12 +11,10 @@
 #include
 #include
 #include
-#include <xen/hvm/iommu.h>
 
 struct hvm_domain
 {
     uint64_t params[HVM_NR_PARAMS];
-    struct hvm_iommu iommu;
 }  __cacheline_aligned;
 
 #ifdef CONFIG_ARM_64
--- a/xen/include/asm-arm/hvm/iommu.h
+++ /dev/null
@@ -1,10 +0,0 @@
-#ifndef __ASM_ARM_HVM_IOMMU_H_
-#define __ASM_ARM_HVM_IOMMU_H_
-
-struct arch_hvm_iommu
-{
-    /* Private information for the IOMMU drivers */
-    void *priv;
-};
-
-#endif /* __ASM_ARM_HVM_IOMMU_H_ */
--- a/xen/include/asm-arm/iommu.h
+++ b/xen/include/asm-arm/iommu.h
@@ -14,9 +14,14 @@
 #ifndef __ARCH_ARM_IOMMU_H__
 #define __ARCH_ARM_IOMMU_H__
 
+struct arch_iommu
+{
+    /* Private information for the IOMMU drivers */
+    void *priv;
+};
+
 /* Always share P2M Table between the CPU and the IOMMU */
 #define iommu_use_hap_pt(d) (1)
-#define domain_hvm_iommu(d) (&d->arch.hvm_domain.iommu)
 
 const struct iommu_ops *iommu_get_ops(void);
 void __init iommu_set_ops(const struct iommu_ops *ops);
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -26,7 +26,6 @@
 #include
 #include
 #include
-#include <xen/hvm/iommu.h>
 #include
 #include
 #include
@@ -123,9 +122,6 @@ struct hvm_domain {
     spinlock_t             uc_lock;
     bool_t                 is_in_uc_mode;
 
-    /* Pass-through */
-    struct hvm_iommu       hvm_iommu;
-
     /* hypervisor intercepted msix table */
     struct list_head       msixtbl_list;
     spinlock_t             msixtbl_list_lock;
--- a/xen/include/asm-x86/hvm/iommu.h
+++ b/xen/include/asm-x86/hvm/iommu.h
@@ -48,7 +48,7 @@ struct g2m_ioport {
 
 #define DEFAULT_DOMAIN_ADDRESS_WIDTH 48
 
-struct arch_hvm_iommu
+struct arch_iommu
 {
     u64 pgd_maddr;                 /* io page directory machine address */
     spinlock_t mapping_lock;       /* io page table lock */
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -14,11 +14,12 @@
 #ifndef __ARCH_X86_IOMMU_H__
 #define __ARCH_X86_IOMMU_H__
 
+#include <asm/hvm/iommu.h> /* For now - should really be merged here. */
+
 #define MAX_IOMMUS 32
 
 /* Does this domain have a P2M table we can use as its IOMMU pagetable? */
 #define iommu_use_hap_pt(d) (hap_enabled(d) && iommu_hap_pt_share)
-#define domain_hvm_iommu(d) (&d->arch.hvm_domain.hvm_iommu)
 
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
 unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg);
--- a/xen/include/xen/hvm/iommu.h
+++ /dev/null
@@ -1,44 +0,0 @@
-/*
- * Copyright (c) 2006, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; If not, see <http://www.gnu.org/licenses/>.
- *
- * Copyright (C) Allen Kay
- */
-
-#ifndef __XEN_HVM_IOMMU_H__
-#define __XEN_HVM_IOMMU_H__
-
-#include
-#include
-#include
-
-struct hvm_iommu {
-    struct arch_hvm_iommu arch;
-
-    /* iommu_ops */
-    const struct iommu_ops *platform_ops;
-
-#ifdef CONFIG_HAS_DEVICE_TREE
-    /* List of DT devices assigned to this domain */
-    struct list_head dt_devices;
-#endif
-
-    /* Features supported by the IOMMU */
-    DECLARE_BITMAP(features, IOMMU_FEAT_count);
-};
-
-#define iommu_set_feature(d, f)   set_bit((f), domain_hvm_iommu(d)->features)
-#define iommu_clear_feature(d, f) clear_bit((f), domain_hvm_iommu(d)->features)
-
-#endif /* __XEN_HVM_IOMMU_H__ */
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -86,6 +86,24 @@ enum iommu_feature
 
 bool_t iommu_has_feature(struct domain *d, enum iommu_feature feature);
 
+struct domain_iommu {
+    struct arch_iommu arch;
+
+    /* iommu_ops */
+    const struct iommu_ops *platform_ops;
+
+#ifdef CONFIG_HAS_DEVICE_TREE
+    /* List of DT devices assigned to this domain */
+    struct list_head dt_devices;
+#endif
+
+    /* Features supported by the IOMMU */
+    DECLARE_BITMAP(features, IOMMU_FEAT_count);
+};
+
+#define dom_iommu(d)              (&(d)->iommu)
+#define iommu_set_feature(d, f)   set_bit(f, dom_iommu(d)->features)
+#define iommu_clear_feature(d, f) clear_bit(f, dom_iommu(d)->features)
+
 #ifdef CONFIG_HAS_PCI
 void pt_pci_init(void);
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include <xen/iommu.h>
 #include
 #include
 #include
@@ -368,6 +369,8 @@ struct domain
     int64_t          time_offset_seconds;
 
 #ifdef CONFIG_HAS_PASSTHROUGH
+    struct domain_iommu iommu;
+
     /* Does this guest need iommu mappings (-1 meaning "being set up")? */
     s8               need_iommu;
 #endif
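[For reference, here is the access pattern the patch converges on,
condensed into a self-contained sketch. The reduced structure bodies
and the `driver_example()` caller are illustrative only; the real
definitions are in xen/iommu.h, asm-*/iommu.h and xen/sched.h as per
the hunks above.]

    #include <stddef.h>

    /* Condensed from the patch: the per-domain IOMMU state now hangs off
     * struct domain itself, not off the HVM side of the PV/HVM arch union. */
    struct arch_iommu { void *priv; };         /* per-arch extension type */
    struct iommu_ops;                          /* driver vtable, opaque here */

    struct domain_iommu {
        struct arch_iommu arch;
        const struct iommu_ops *platform_ops;
    };

    struct domain {
        struct domain_iommu iommu;             /* common to PV and HVM alike */
    };

    #define dom_iommu(d) (&(d)->iommu)

    /* Illustrative caller, mirroring the pattern used throughout the diff. */
    static int driver_example(struct domain *d)
    {
        const struct domain_iommu *hd = dom_iommu(d);

        /* e.g. "does this domain have an IOMMU driver bound?" */
        return hd->platform_ops != NULL;
    }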