From patchwork Thu Jan 5 19:04:29 2017
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 9499469
From: Eric Auger
To: eric.auger@redhat.com, eric.auger.pro@gmail.com,
	christoffer.dall@linaro.org, marc.zyngier@arm.com,
	robin.murphy@arm.com, alex.williamson@redhat.com,
	will.deacon@arm.com, joro@8bytes.org, tglx@linutronix.de,
	jason@lakedaemon.net, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v6 01/18] iommu/dma: Allow MSI-only cookies
Date: Thu, 5 Jan 2017 19:04:29 +0000
Message-Id: <1483643086-2883-2-git-send-email-eric.auger@redhat.com>
In-Reply-To: <1483643086-2883-1-git-send-email-eric.auger@redhat.com>
References: <1483643086-2883-1-git-send-email-eric.auger@redhat.com>
Cc: drjones@redhat.com, kvm@vger.kernel.org, punit.agrawal@arm.com,
	linux-kernel@vger.kernel.org, geethasowjanya.akula@gmail.com,
	diana.craciun@nxp.com, iommu@lists.linux-foundation.org,
	pranav.sawargaonkar@gmail.com, bharat.bhushan@nxp.com,
	shankerd@codeaurora.org, gpkulkarni@gmail.com
"linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP IOMMU domain users such as VFIO face a similar problem to DMA API ops with regard to mapping MSI messages in systems where the MSI write is subject to IOMMU translation. With the relevant infrastructure now in place for managed DMA domains, it's actually really simple for other users to piggyback off that and reap the benefits without giving up their own IOVA management, and without having to reinvent their own wheel in the MSI layer. Allow such users to opt into automatic MSI remapping by dedicating a region of their IOVA space to a managed cookie, and extend the mapping routine to implement a trivial linear allocator in such cases, to avoid the needless overhead of a full-blown IOVA domain. Signed-off-by: Robin Murphy Signed-off-by: Eric Auger --- [Eric] fixed 2 issues on MSI path --- drivers/iommu/dma-iommu.c | 116 +++++++++++++++++++++++++++++++++++++--------- include/linux/dma-iommu.h | 7 +++ 2 files changed, 101 insertions(+), 22 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 2db0d64..f2c47ad 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -37,10 +37,19 @@ struct iommu_dma_msi_page { phys_addr_t phys; }; +enum iommu_dma_cookie_type { + IOMMU_DMA_IOVA_COOKIE, + IOMMU_DMA_MSI_COOKIE, +}; + struct iommu_dma_cookie { - struct iova_domain iovad; - struct list_head msi_page_list; - spinlock_t msi_lock; + union { + struct iova_domain iovad; + dma_addr_t msi_iova; + }; + struct list_head msi_page_list; + spinlock_t msi_lock; + enum iommu_dma_cookie_type type; }; static inline struct iova_domain *cookie_iovad(struct iommu_domain *domain) @@ -53,6 +62,19 @@ int iommu_dma_init(void) return iova_cache_get(); } +static struct iommu_dma_cookie *__cookie_alloc(enum iommu_dma_cookie_type type) +{ + struct iommu_dma_cookie *cookie; + + cookie = kzalloc(sizeof(*cookie), GFP_KERNEL); + if (cookie) { + spin_lock_init(&cookie->msi_lock); + INIT_LIST_HEAD(&cookie->msi_page_list); + cookie->type = type; + } + return cookie; +} + /** * iommu_get_dma_cookie - Acquire DMA-API resources for a domain * @domain: IOMMU domain to prepare for DMA-API usage @@ -62,25 +84,53 @@ int iommu_dma_init(void) */ int iommu_get_dma_cookie(struct iommu_domain *domain) { + if (domain->iova_cookie) + return -EEXIST; + + domain->iova_cookie = __cookie_alloc(IOMMU_DMA_IOVA_COOKIE); + if (!domain->iova_cookie) + return -ENOMEM; + + return 0; +} +EXPORT_SYMBOL(iommu_get_dma_cookie); + +/** + * iommu_get_msi_cookie - Acquire just MSI remapping resources + * @domain: IOMMU domain to prepare + * @base: Start address of IOVA region for MSI mappings + * + * Users who manage their own IOVA allocation and do not want DMA API support, + * but would still like to take advantage of automatic MSI remapping, can use + * this to initialise their own domain appropriately. Users should reserve a + * contiguous IOVA region, starting at @base, large enough to accommodate the + * number of PAGE_SIZE mappings necessary to cover every MSI doorbell address + * used by the devices attached to @domain. 
+ */
+int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base)
+{
 	struct iommu_dma_cookie *cookie;
 
+	if (domain->type != IOMMU_DOMAIN_UNMANAGED)
+		return -EINVAL;
+
 	if (domain->iova_cookie)
 		return -EEXIST;
 
-	cookie = kzalloc(sizeof(*cookie), GFP_KERNEL);
+	cookie = __cookie_alloc(IOMMU_DMA_MSI_COOKIE);
 	if (!cookie)
 		return -ENOMEM;
 
-	spin_lock_init(&cookie->msi_lock);
-	INIT_LIST_HEAD(&cookie->msi_page_list);
+	cookie->msi_iova = base;
 	domain->iova_cookie = cookie;
 	return 0;
 }
-EXPORT_SYMBOL(iommu_get_dma_cookie);
+EXPORT_SYMBOL(iommu_get_msi_cookie);
 
 /**
  * iommu_put_dma_cookie - Release a domain's DMA mapping resources
- * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
+ * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie() or
+ *          iommu_get_msi_cookie()
  *
  * IOMMU drivers should normally call this from their domain_free callback.
  */
@@ -92,7 +142,7 @@ void iommu_put_dma_cookie(struct iommu_domain *domain)
 	if (!cookie)
 		return;
 
-	if (cookie->iovad.granule)
+	if (cookie->type == IOMMU_DMA_IOVA_COOKIE && cookie->iovad.granule)
 		put_iova_domain(&cookie->iovad);
 
 	list_for_each_entry_safe(msi, tmp, &cookie->msi_page_list, list) {
@@ -137,11 +187,12 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
 int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 		u64 size, struct device *dev)
 {
-	struct iova_domain *iovad = cookie_iovad(domain);
+	struct iommu_dma_cookie *cookie = domain->iova_cookie;
+	struct iova_domain *iovad = &cookie->iovad;
 	unsigned long order, base_pfn, end_pfn;
 
-	if (!iovad)
-		return -ENODEV;
+	if (!cookie || cookie->type != IOMMU_DMA_IOVA_COOKIE)
+		return -EINVAL;
 
 	/* Use the smallest supported page size for IOVA granularity */
 	order = __ffs(domain->pgsize_bitmap);
@@ -662,11 +713,21 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
 {
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iommu_dma_msi_page *msi_page;
-	struct iova_domain *iovad = &cookie->iovad;
+	struct iova_domain *iovad;
 	struct iova *iova;
 	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+	size_t size;
+
+	if (cookie->type == IOMMU_DMA_IOVA_COOKIE) {
+		iovad = &cookie->iovad;
+		size = iovad->granule;
+	} else {
+		iovad = NULL;
+		size = PAGE_SIZE;
+	}
+
+	msi_addr &= ~(phys_addr_t)(size - 1);
 
-	msi_addr &= ~(phys_addr_t)iova_mask(iovad);
 	list_for_each_entry(msi_page, &cookie->msi_page_list, list)
 		if (msi_page->phys == msi_addr)
 			return msi_page;
@@ -675,13 +736,18 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
 	if (!msi_page)
 		return NULL;
 
-	iova = __alloc_iova(domain, iovad->granule, dma_get_mask(dev));
-	if (!iova)
-		goto out_free_page;
-
 	msi_page->phys = msi_addr;
-	msi_page->iova = iova_dma_addr(iovad, iova);
-	if (iommu_map(domain, msi_page->iova, msi_addr, iovad->granule, prot))
+	if (iovad) {
+		iova = __alloc_iova(domain, size, dma_get_mask(dev));
+		if (!iova)
+			goto out_free_page;
+		msi_page->iova = iova_dma_addr(iovad, iova);
+	} else {
+		msi_page->iova = cookie->msi_iova;
+		cookie->msi_iova += size;
+	}
+
+	if (iommu_map(domain, msi_page->iova, msi_addr, size, prot))
 		goto out_free_iova;
 
 	INIT_LIST_HEAD(&msi_page->list);
@@ -689,7 +755,10 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
 	return msi_page;
 
 out_free_iova:
-	__free_iova(iovad, iova);
+	if (iovad)
+		__free_iova(iovad, iova);
+	else
+		cookie->msi_iova -= size;
 out_free_page:
 	kfree(msi_page);
 	return NULL;
 }
@@ -730,7 +799,10 @@ void iommu_dma_map_msi_msg(int irq, struct msi_msg *msg)
 		msg->data = ~0U;
 	} else {
 		msg->address_hi = upper_32_bits(msi_page->iova);
-		msg->address_lo &= iova_mask(&cookie->iovad);
+		if (cookie->type == IOMMU_DMA_IOVA_COOKIE)
+			msg->address_lo &= iova_mask(&cookie->iovad);
+		else
+			msg->address_lo &= (PAGE_SIZE - 1);
 		msg->address_lo += lower_32_bits(msi_page->iova);
 	}
 }
diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h
index 7f7e9a7..c843461 100644
--- a/include/linux/dma-iommu.h
+++ b/include/linux/dma-iommu.h
@@ -27,6 +27,7 @@
 
 /* Domain management interface for IOMMU drivers */
 int iommu_get_dma_cookie(struct iommu_domain *domain);
+int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base);
 void iommu_put_dma_cookie(struct iommu_domain *domain);
 
 /* Setup call for arch DMA mapping code */
@@ -86,6 +87,12 @@ static inline int iommu_get_dma_cookie(struct iommu_domain *domain)
 	return -ENODEV;
 }
 
+static inline int iommu_get_msi_cookie(struct iommu_domain *domain,
+				       dma_addr_t base)
+{
+	return -ENODEV;
+}
+
 static inline void iommu_put_dma_cookie(struct iommu_domain *domain)
 {
 }
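
For reference (not part of the patch): a minimal sketch of how a user that
manages its own IOVA space, such as a VFIO-style unmanaged domain, might opt
in to the new interface. The constants MSI_IOVA_BASE and MSI_IOVA_LENGTH and
the function name below are made up for illustration; the only new API used
is iommu_get_msi_cookie() added here, together with the existing
IOMMU_DOMAIN_UNMANAGED domain type.

	#include <linux/iommu.h>
	#include <linux/dma-iommu.h>

	/* Hypothetical IOVA window set aside for MSI doorbell mappings */
	#define MSI_IOVA_BASE	0x8000000
	#define MSI_IOVA_LENGTH	0x100000

	static int example_enable_msi_remapping(struct iommu_domain *domain)
	{
		/* MSI-only cookies are restricted to unmanaged domains */
		if (domain->type != IOMMU_DOMAIN_UNMANAGED)
			return -EINVAL;

		/*
		 * Dedicate [MSI_IOVA_BASE, MSI_IOVA_BASE + MSI_IOVA_LENGTH)
		 * to the cookie; iommu_dma_map_msi_msg() will then hand out
		 * PAGE_SIZE doorbell mappings linearly from this region.
		 */
		return iommu_get_msi_cookie(domain, MSI_IOVA_BASE);
	}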