From patchwork Sun Sep 27 06:34:33 2020
X-Patchwork-Submitter: Baolu Lu
X-Patchwork-Id: 11801875
From: Lu Baolu
To: Joerg Roedel, Tom Murphy, David Woodhouse, Christoph Hellwig
Cc: Ashok Raj, Intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    iommu@lists.linux-foundation.org, Lu Baolu
Date: Sun, 27 Sep 2020 14:34:33 +0800
Message-Id: <20200927063437.13988-4-baolu.lu@linux.intel.com>
In-Reply-To: <20200927063437.13988-1-baolu.lu@linux.intel.com>
References: <20200927063437.13988-1-baolu.lu@linux.intel.com>
X-Mailer: git-send-email 2.17.1
Subject: [Intel-gfx] [PATCH v4 3/7] iommu: Allow the dma-iommu api to use bounce buffers

From: Tom Murphy

Allow the dma-iommu api to use bounce buffers for untrusted devices.
This is a copy of the intel bounce buffer code.
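[Editorial note, not part of the patch: the idea the diff below implements is that, for an untrusted PCI device, any mapping whose start address or size is not aligned to the IOVA granule is first bounced through a zero-padded, granule-aligned swiotlb buffer, so the device can never see unrelated kernel data that shares a granule with the real buffer. As a rough illustration only, here is a minimal, self-contained userspace sketch of that decision; every name in it (GRANULE, needs_bounce, fake_map_buffer) is made up for the example and is not kernel API.]

/*
 * Illustrative model of the bounce-buffer decision made by the new
 * __iommu_dma_map_swiotlb() helper: bounce only when the device is
 * untrusted and the buffer is not granule aligned.  Simplified to a
 * DMA_TO_DEVICE-style copy; none of this is kernel code.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define GRANULE 4096UL                  /* stand-in for the IOVA granule */

/* Round a size up to the next granule boundary. */
static size_t granule_align(size_t size)
{
        return (size + GRANULE - 1) & ~(GRANULE - 1);
}

/* Bounce if the device is untrusted and start or size is misaligned. */
static bool needs_bounce(bool untrusted, uintptr_t phys, size_t size)
{
        return untrusted && ((phys | size) & (GRANULE - 1));
}

/*
 * "Map" buf for DMA: for an untrusted device with a misaligned buffer,
 * copy it into a freshly allocated, granule-aligned bounce buffer,
 * zero the trailing padding, and hand out the bounce buffer instead.
 */
static void *fake_map_buffer(bool untrusted, void *buf, size_t size)
{
        if (!needs_bounce(untrusted, (uintptr_t)buf, size))
                return buf;                     /* direct mapping is fine */

        size_t aligned = granule_align(size);
        void *bounce = aligned_alloc(GRANULE, aligned);

        if (!bounce)
                return NULL;
        memcpy(bounce, buf, size);                        /* device data   */
        memset((char *)bounce + size, 0, aligned - size); /* clear padding */
        return bounce;
}

int main(void)
{
        char data[100] = "payload";
        void *mapped = fake_map_buffer(true, data, sizeof(data));

        printf("bounced: %s\n", mapped == data ? "no" : "yes");
        if (mapped && mapped != data)
                free(mapped);
        return 0;
}

[End of editorial note.]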
Signed-off-by: Tom Murphy
Co-developed-by: Lu Baolu
Signed-off-by: Lu Baolu
---
 drivers/iommu/dma-iommu.c | 163 +++++++++++++++++++++++++++++++++++---
 1 file changed, 150 insertions(+), 13 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index fb84cfa83703..3526db774611 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -21,9 +21,11 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
+#include

 struct iommu_dma_msi_page {
        struct list_head list;
@@ -500,6 +502,31 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
        iommu_dma_free_iova(cookie, dma_addr, size, iotlb_gather.freelist);
 }

+static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
+               size_t size, enum dma_data_direction dir,
+               unsigned long attrs)
+{
+       struct iommu_domain *domain = iommu_get_dma_domain(dev);
+       struct iommu_dma_cookie *cookie = domain->iova_cookie;
+       struct iova_domain *iovad = &cookie->iovad;
+       phys_addr_t phys;
+
+       phys = iommu_iova_to_phys(domain, dma_addr);
+       if (WARN_ON(!phys))
+               return;
+
+       __iommu_dma_unmap(dev, dma_addr, size);
+
+       if (unlikely(is_swiotlb_buffer(phys)))
+               swiotlb_tbl_unmap_single(dev, phys, size,
+                               iova_align(iovad, size), dir, attrs);
+}
+
+static bool dev_is_untrusted(struct device *dev)
+{
+       return dev_is_pci(dev) && to_pci_dev(dev)->untrusted;
+}
+
 static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
                size_t size, int prot, u64 dma_mask)
 {
@@ -525,6 +552,55 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
        return iova + iova_off;
 }

+static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
+               size_t org_size, dma_addr_t dma_mask, bool coherent,
+               enum dma_data_direction dir, unsigned long attrs)
+{
+       int prot = dma_info_to_prot(dir, coherent, attrs);
+       struct iommu_domain *domain = iommu_get_dma_domain(dev);
+       struct iommu_dma_cookie *cookie = domain->iova_cookie;
+       struct iova_domain *iovad = &cookie->iovad;
+       size_t aligned_size = org_size;
+       void *padding_start;
+       size_t padding_size;
+       dma_addr_t iova;
+
+       /*
+        * If both the physical buffer start address and size are
+        * page aligned, we don't need to use a bounce page.
+        */
+       if (IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev) &&
+           iova_offset(iovad, phys | org_size)) {
+               aligned_size = iova_align(iovad, org_size);
+               phys = swiotlb_tbl_map_single(dev,
+                               __phys_to_dma(dev, io_tlb_start),
+                               phys, org_size, aligned_size, dir, attrs);
+
+               if (phys == DMA_MAPPING_ERROR)
+                       return DMA_MAPPING_ERROR;
+
+               /* Cleanup the padding area. */
+               padding_start = phys_to_virt(phys);
+               padding_size = aligned_size;
+
+               if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+                   (dir == DMA_TO_DEVICE ||
+                    dir == DMA_BIDIRECTIONAL)) {
+                       padding_start += org_size;
+                       padding_size -= org_size;
+               }
+
+               memset(padding_start, 0, padding_size);
+       }
+
+       iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
+       if ((iova == DMA_MAPPING_ERROR) && is_swiotlb_buffer(phys))
+               swiotlb_tbl_unmap_single(dev, phys, org_size,
+                               aligned_size, dir, attrs);
+
+       return iova;
+}
+
 static void __iommu_dma_free_pages(struct page **pages, int count)
 {
        while (count--)
@@ -697,11 +773,15 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
 {
        phys_addr_t phys;

-       if (dev_is_dma_coherent(dev))
+       if (dev_is_dma_coherent(dev) && !dev_is_untrusted(dev))
                return;

        phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-       arch_sync_dma_for_cpu(phys, size, dir);
+       if (!dev_is_dma_coherent(dev))
+               arch_sync_dma_for_cpu(phys, size, dir);
+
+       if (is_swiotlb_buffer(phys))
+               swiotlb_tbl_sync_single(dev, phys, size, dir, SYNC_FOR_CPU);
 }

 static void iommu_dma_sync_single_for_device(struct device *dev,
@@ -709,11 +789,15 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
 {
        phys_addr_t phys;

-       if (dev_is_dma_coherent(dev))
+       if (dev_is_dma_coherent(dev) && !dev_is_untrusted(dev))
                return;

        phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-       arch_sync_dma_for_device(phys, size, dir);
+       if (is_swiotlb_buffer(phys))
+               swiotlb_tbl_sync_single(dev, phys, size, dir, SYNC_FOR_DEVICE);
+
+       if (!dev_is_dma_coherent(dev))
+               arch_sync_dma_for_device(phys, size, dir);
 }

 static void iommu_dma_sync_sg_for_cpu(struct device *dev,
@@ -723,11 +807,17 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
        struct scatterlist *sg;
        int i;

-       if (dev_is_dma_coherent(dev))
+       if (dev_is_dma_coherent(dev) && !dev_is_untrusted(dev))
                return;

-       for_each_sg(sgl, sg, nelems, i)
-               arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
+       for_each_sg(sgl, sg, nelems, i) {
+               if (!dev_is_dma_coherent(dev))
+                       arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
+
+               if (is_swiotlb_buffer(sg_phys(sg)))
+                       swiotlb_tbl_sync_single(dev, sg_phys(sg), sg->length,
+                                               dir, SYNC_FOR_CPU);
+       }
 }

 static void iommu_dma_sync_sg_for_device(struct device *dev,
@@ -737,11 +827,17 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
        struct scatterlist *sg;
        int i;

-       if (dev_is_dma_coherent(dev))
+       if (dev_is_dma_coherent(dev) && !dev_is_untrusted(dev))
                return;

-       for_each_sg(sgl, sg, nelems, i)
-               arch_sync_dma_for_device(sg_phys(sg), sg->length, dir);
+       for_each_sg(sgl, sg, nelems, i) {
+               if (is_swiotlb_buffer(sg_phys(sg)))
+                       swiotlb_tbl_sync_single(dev, sg_phys(sg), sg->length,
+                                               dir, SYNC_FOR_DEVICE);
+
+               if (!dev_is_dma_coherent(dev))
+                       arch_sync_dma_for_device(sg_phys(sg), sg->length, dir);
+       }
 }

 static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
@@ -750,10 +846,10 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 {
        phys_addr_t phys = page_to_phys(page) + offset;
        bool coherent = dev_is_dma_coherent(dev);
-       int prot = dma_info_to_prot(dir, coherent, attrs);
        dma_addr_t dma_handle;

-       dma_handle = __iommu_dma_map(dev, phys, size, prot, dma_get_mask(dev));
+       dma_handle = __iommu_dma_map_swiotlb(dev, phys, size, dma_get_mask(dev),
+                       coherent, dir, attrs);
        if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
            dma_handle != DMA_MAPPING_ERROR)
                arch_sync_dma_for_device(phys, size, dir);
@@ -765,7 +861,7 @@ static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
 {
        if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
                iommu_dma_sync_single_for_cpu(dev, dma_handle, size, dir);
-       __iommu_dma_unmap(dev, dma_handle, size);
+       __iommu_dma_unmap_swiotlb(dev, dma_handle, size, dir, attrs);
 }

 /*
@@ -843,6 +939,39 @@ static void __invalidate_sg(struct scatterlist *sg, int nents)
        }
 }

+static void iommu_dma_unmap_sg_swiotlb(struct device *dev, struct scatterlist *sg,
+               int nents, enum dma_data_direction dir, unsigned long attrs)
+{
+       struct scatterlist *s;
+       int i;
+
+       for_each_sg(sg, s, nents, i)
+               __iommu_dma_unmap_swiotlb(dev, sg_dma_address(s),
+                               sg_dma_len(s), dir, attrs);
+}
+
+static int iommu_dma_map_sg_swiotlb(struct device *dev, struct scatterlist *sg,
+               int nents, enum dma_data_direction dir, unsigned long attrs)
+{
+       struct scatterlist *s;
+       int i;
+
+       for_each_sg(sg, s, nents, i) {
+               sg_dma_address(s) = __iommu_dma_map_swiotlb(dev, sg_phys(s),
+                               s->length, dma_get_mask(dev),
+                               dev_is_dma_coherent(dev), dir, attrs);
+               if (sg_dma_address(s) == DMA_MAPPING_ERROR)
+                       goto out_unmap;
+               sg_dma_len(s) = s->length;
+       }
+
+       return nents;
+
+out_unmap:
+       iommu_dma_unmap_sg_swiotlb(dev, sg, i, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC);
+       return 0;
+}
+
 /*
  * The DMA API client is passing in a scatterlist which could describe
  * any old buffer layout, but the IOMMU API requires everything to be
@@ -869,6 +998,9 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
        if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
                iommu_dma_sync_sg_for_device(dev, sg, nents, dir);

+       if (dev_is_untrusted(dev))
+               return iommu_dma_map_sg_swiotlb(dev, sg, nents, dir, attrs);
+
        /*
         * Work out how much IOVA space we need, and align the segments to
         * IOVA granules for the IOMMU driver to handle. With some clever
@@ -938,6 +1070,11 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
        if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
                iommu_dma_sync_sg_for_cpu(dev, sg, nents, dir);

+       if (dev_is_untrusted(dev)) {
+               iommu_dma_unmap_sg_swiotlb(dev, sg, nents, dir, attrs);
+               return;
+       }
+
        /*
         * The scatterlist segments are mapped into a single
         * contiguous IOVA allocation, so this is incredibly easy.