From patchwork Mon May 2 16:43:30 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12834463
From: Christoph Hellwig
To: Russell King, Arnd Bergmann
Cc: Linus Walleij, Andre Przywara, Andrew Lunn, Gregory Clement,
 Sebastian Hesselbarth, Greg Kroah-Hartman, Alan Stern, Laurentiu Tudor,
 Marek Szyprowski, Robin Murphy, iommu@lists.linux-foundation.org,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-usb@vger.kernel.org
Subject: [PATCH 10/10] ARM/dma-mapping: merge IOMMU ops
Date: Mon, 2 May 2022 09:43:30 -0700
Message-Id: <20220502164330.229685-11-hch@lst.de>
In-Reply-To: <20220502164330.229685-1-hch@lst.de>
References: <20220502164330.229685-1-hch@lst.de>

From: Robin Murphy

The dma_sync_* operations are now the only difference between the
coherent and non-coherent IOMMU ops. With some minor tweaks to make
those safe for coherent devices at minimal overhead, we can condense
them down to a single set of DMA ops.
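As an aside for readers following along outside the tree: the pattern at
work is a single ops table whose sync callbacks bail out early for
coherent devices, instead of two tables selected once at setup time. A
minimal standalone C sketch of that idea (all names here are simplified
stand-ins, not the kernel's own types):

	#include <stdbool.h>
	#include <stdio.h>

	struct device {
		bool dma_coherent;	/* mirrors dev->dma_coherent */
	};

	/*
	 * Sync callbacks made safe for coherent devices: they return
	 * early instead of doing (pointless) cache maintenance.
	 */
	static void sync_for_cpu(struct device *dev)
	{
		if (dev->dma_coherent)
			return;
		puts("invalidate caches before the CPU reads the buffer");
	}

	static void sync_for_device(struct device *dev)
	{
		if (dev->dma_coherent)
			return;
		puts("clean caches before the device reads the buffer");
	}

	/* One table now serves coherent and non-coherent devices ... */
	struct dma_ops_sketch {
		void (*sync_for_cpu)(struct device *dev);
		void (*sync_for_device)(struct device *dev);
	};

	static const struct dma_ops_sketch iommu_ops_sketch = {
		.sync_for_cpu		= sync_for_cpu,
		.sync_for_device	= sync_for_device,
	};

	int main(void)
	{
		struct device nc = { .dma_coherent = false };
		struct device c  = { .dma_coherent = true };

		/* ... so setup no longer picks between two tables. */
		iommu_ops_sketch.sync_for_device(&nc);	/* does the work */
		iommu_ops_sketch.sync_for_device(&c);	/* early return */
		return 0;
	}

The runtime flag test costs one well-predicted branch per sync call,
which is what lets the separate coherent ops table go away.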
Signed-off-by: Robin Murphy
Signed-off-by: Christoph Hellwig
---
 arch/arm/mm/dma-mapping.c | 37 +++++++++++++------------------------
 1 file changed, 13 insertions(+), 24 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 10e5e5800d784..dd46cce61579a 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1341,6 +1341,9 @@ static void arm_iommu_sync_sg_for_cpu(struct device *dev,
 	struct scatterlist *s;
 	int i;
 
+	if (dev->dma_coherent)
+		return;
+
 	for_each_sg(sg, s, nents, i)
 		__dma_page_dev_to_cpu(sg_page(s), s->offset, s->length, dir);
 
@@ -1360,6 +1363,9 @@ static void arm_iommu_sync_sg_for_device(struct device *dev,
 	struct scatterlist *s;
 	int i;
 
+	if (dev->dma_coherent)
+		return;
+
 	for_each_sg(sg, s, nents, i)
 		__dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);
 }
 
@@ -1493,12 +1499,13 @@ static void arm_iommu_sync_single_for_cpu(struct device *dev,
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
 	dma_addr_t iova = handle & PAGE_MASK;
-	struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
+	struct page *page;
 	unsigned int offset = handle & ~PAGE_MASK;
 
-	if (!iova)
+	if (dev->dma_coherent || !iova)
 		return;
 
+	page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
 	__dma_page_dev_to_cpu(page, offset, size, dir);
 }
 
@@ -1507,12 +1514,13 @@ static void arm_iommu_sync_single_for_device(struct device *dev,
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
 	dma_addr_t iova = handle & PAGE_MASK;
-	struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
+	struct page *page;
 	unsigned int offset = handle & ~PAGE_MASK;
 
-	if (!iova)
+	if (dev->dma_coherent || !iova)
 		return;
 
+	page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
 	__dma_page_cpu_to_dev(page, offset, size, dir);
 }
 
@@ -1536,22 +1544,6 @@ static const struct dma_map_ops iommu_ops = {
 	.unmap_resource	= arm_iommu_unmap_resource,
 };
 
-static const struct dma_map_ops iommu_coherent_ops = {
-	.alloc		= arm_iommu_alloc_attrs,
-	.free		= arm_iommu_free_attrs,
-	.mmap		= arm_iommu_mmap_attrs,
-	.get_sgtable	= arm_iommu_get_sgtable,
-
-	.map_page	= arm_iommu_map_page,
-	.unmap_page	= arm_iommu_unmap_page,
-
-	.map_sg		= arm_iommu_map_sg,
-	.unmap_sg	= arm_iommu_unmap_sg,
-
-	.map_resource	= arm_iommu_map_resource,
-	.unmap_resource	= arm_iommu_unmap_resource,
-};
-
 /**
  * arm_iommu_create_mapping
  * @bus: pointer to the bus holding the client device (for IOMMU calls)
@@ -1750,10 +1742,7 @@ static void arm_setup_iommu_dma_ops(struct device *dev, u64 dma_base, u64 size,
 		return;
 	}
 
-	if (coherent)
-		set_dma_ops(dev, &iommu_coherent_ops);
-	else
-		set_dma_ops(dev, &iommu_ops);
+	set_dma_ops(dev, &iommu_ops);
 }
 
 static void arm_teardown_iommu_dma_ops(struct device *dev)