From patchwork Tue Jun 14 09:20:47 2022
X-Patchwork-Id: 12880844
From: Christoph Hellwig
To: Russell King, Arnd Bergmann
Cc: Linus Walleij, Andre Przywara, Andrew Lunn, Gregory Clement,
    Sebastian Hesselbarth, Greg Kroah-Hartman, Alan Stern, Laurentiu Tudor,
    Marek Szyprowski, Robin Murphy, iommu@lists.linux-foundation.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-usb@vger.kernel.org, Marc Zyngier
Subject: [PATCH 10/10] ARM/dma-mapping: merge IOMMU ops
Date: Tue, 14 Jun 2022 11:20:47 +0200
Message-Id: <20220614092047.572235-11-hch@lst.de>
In-Reply-To: <20220614092047.572235-1-hch@lst.de>
References: <20220614092047.572235-1-hch@lst.de>

From: Robin Murphy

The dma_sync_* operations are now the only difference between the
coherent and non-coherent IOMMU ops. Some minor tweaks to make those
safe for coherent devices with minimal overhead, and we can condense
down to a single set of DMA ops.
Signed-off-by: Robin Murphy
Signed-off-by: Christoph Hellwig
Tested-by: Marc Zyngier
---
 arch/arm/mm/dma-mapping.c | 37 +++++++++++++------------------------
 1 file changed, 13 insertions(+), 24 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index e7ccf7c82e025..e68d1d2ac4be0 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1341,6 +1341,9 @@ static void arm_iommu_sync_sg_for_cpu(struct device *dev,
 	struct scatterlist *s;
 	int i;
 
+	if (dev->dma_coherent)
+		return;
+
 	for_each_sg(sg, s, nents, i)
 		__dma_page_dev_to_cpu(sg_page(s), s->offset, s->length, dir);
 
@@ -1360,6 +1363,9 @@ static void arm_iommu_sync_sg_for_device(struct device *dev,
 	struct scatterlist *s;
 	int i;
 
+	if (dev->dma_coherent)
+		return;
+
 	for_each_sg(sg, s, nents, i)
 		__dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);
 }
@@ -1493,12 +1499,13 @@ static void arm_iommu_sync_single_for_cpu(struct device *dev,
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
 	dma_addr_t iova = handle & PAGE_MASK;
-	struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
+	struct page *page;
 	unsigned int offset = handle & ~PAGE_MASK;
 
-	if (!iova)
+	if (dev->dma_coherent || !iova)
 		return;
 
+	page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
 	__dma_page_dev_to_cpu(page, offset, size, dir);
 }
 
@@ -1507,12 +1514,13 @@ static void arm_iommu_sync_single_for_device(struct device *dev,
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
 	dma_addr_t iova = handle & PAGE_MASK;
-	struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
+	struct page *page;
 	unsigned int offset = handle & ~PAGE_MASK;
 
-	if (!iova)
+	if (dev->dma_coherent || !iova)
 		return;
 
+	page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
 	__dma_page_cpu_to_dev(page, offset, size, dir);
 }
 
@@ -1536,22 +1544,6 @@ static const struct dma_map_ops iommu_ops = {
 	.unmap_resource		= arm_iommu_unmap_resource,
 };
 
-static const struct dma_map_ops iommu_coherent_ops = {
-	.alloc		= arm_iommu_alloc_attrs,
-	.free		= arm_iommu_free_attrs,
-	.mmap		= arm_iommu_mmap_attrs,
-	.get_sgtable	= arm_iommu_get_sgtable,
-
-	.map_page	= arm_iommu_map_page,
-	.unmap_page	= arm_iommu_unmap_page,
-
-	.map_sg		= arm_iommu_map_sg,
-	.unmap_sg	= arm_iommu_unmap_sg,
-
-	.map_resource	= arm_iommu_map_resource,
-	.unmap_resource	= arm_iommu_unmap_resource,
-};
-
 /**
  * arm_iommu_create_mapping
  * @bus: pointer to the bus holding the client device (for IOMMU calls)
@@ -1750,10 +1742,7 @@ static void arm_setup_iommu_dma_ops(struct device *dev, u64 dma_base, u64 size,
 		return;
 	}
 
-	if (coherent)
-		set_dma_ops(dev, &iommu_coherent_ops);
-	else
-		set_dma_ops(dev, &iommu_ops);
+	set_dma_ops(dev, &iommu_ops);
 }
 
 static void arm_teardown_iommu_dma_ops(struct device *dev)
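
For illustration only (not part of the patch): a minimal, self-contained
userspace sketch of the pattern the merged ops rely on. The sync callbacks
check a per-device coherency flag and return early, so one callback table
can serve coherent and non-coherent devices alike. The struct dma_ops,
sync_for_cpu and sync_for_device names below are made up for this sketch;
only the dma_coherent flag mirrors the dev->dma_coherent field used above.

/* Userspace model of "one ops table, early return for coherent devices". */
#include <stdbool.h>
#include <stdio.h>

struct device {
	const char *name;
	bool dma_coherent;		/* mirrors dev->dma_coherent */
};

struct dma_ops {
	void (*sync_for_cpu)(struct device *dev);
	void (*sync_for_device)(struct device *dev);
};

static void sync_for_cpu(struct device *dev)
{
	if (dev->dma_coherent)
		return;			/* coherent: no cache maintenance needed */
	printf("%s: invalidate CPU caches\n", dev->name);
}

static void sync_for_device(struct device *dev)
{
	if (dev->dma_coherent)
		return;			/* coherent: no cache maintenance needed */
	printf("%s: clean CPU caches\n", dev->name);
}

/* A single table instead of separate coherent and non-coherent variants. */
static const struct dma_ops ops = {
	.sync_for_cpu		= sync_for_cpu,
	.sync_for_device	= sync_for_device,
};

int main(void)
{
	struct device coherent = { .name = "coherent-dev", .dma_coherent = true };
	struct device noncoherent = { .name = "noncoherent-dev", .dma_coherent = false };

	ops.sync_for_device(&noncoherent);	/* performs maintenance */
	ops.sync_for_device(&coherent);		/* early return, no work done */
	ops.sync_for_cpu(&noncoherent);
	return 0;
}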