From patchwork Tue Jan 3 23:13:16 2017
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 9495895
X-Patchwork-Delegate: bhelgaas@google.com
From: Arnd Bergmann
To: linux-arm-kernel@lists.infradead.org
Cc: Will Deacon, Nikita Yushchenko, Catalin Marinas,
    linux-kernel@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
    Simon Horman, linux-pci@vger.kernel.org, Bjorn Helgaas,
    artemi.ivanov@cogentembedded.com
Subject: Re: [PATCH 1/2] arm64: dma_mapping: allow PCI host driver to limit DMA mask
Date: Wed, 04 Jan 2017 00:13:16 +0100
Message-ID: <5224989.KFLmAz9Gqk@wuerfel>
In-Reply-To: <20170103184444.GP6986@arm.com>
References: <1483044304-2085-1-git-send-email-nikita.yoush@cogentembedded.com>
 <20170103184444.GP6986@arm.com>

On Tuesday, January 3, 2017 6:44:44 PM CET Will Deacon wrote:
> > @@ -347,6 +348,16 @@ static int __swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
> >
> >  static int __swiotlb_dma_supported(struct device *hwdev, u64 mask)
> >  {
> > +#ifdef CONFIG_PCI
> > +	if (dev_is_pci(hwdev)) {
> > +		struct pci_dev *pdev = to_pci_dev(hwdev);
> > +		struct pci_host_bridge *br = pci_find_host_bridge(pdev->bus);
> > +
> > +		if (br->dev.dma_mask && (*br->dev.dma_mask) &&
> > +		    (mask & (*br->dev.dma_mask)) != mask)
> > +			return 0;
> > +	}
> > +#endif
>
> Hmm, but this makes it look like the problem is both arm64 and swiotlb
> specific, when in reality it's not. Perhaps another hack you could try
> would be to register a PCI bus notifier in the host bridge looking for
> BUS_NOTIFY_BIND_DRIVER, then you could proxy the DMA ops for each child
> device before the driver has probed, but adding a dma_set_mask callback
> to limit the mask to what you need?
>
> I agree that it would be better if dma_set_mask handled all of this
> transparently, but it's all based on the underlying ops rather than the
> bus type.

This is what I prototyped a long time ago when this first came up. I
still think this needs to be solved properly for all of arm64, not with
a PCI specific hack, and in particular not using notifiers.

	Arnd
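
For comparison, the bus notifier hack described above would look roughly
like the untested sketch below (all names are made up, and a real version
would also need to restrict itself to children of the bridge in question);
the prototype I mentioned then follows.

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/notifier.h>
#include <linux/pci.h>

/* hypothetical per-bridge limit, e.g. taken from a dma-ranges property */
static u64 foo_bridge_dma_limit = DMA_BIT_MASK(32);

/* proxied copy of the real DMA ops, shared by all children of the bridge */
static struct dma_map_ops foo_dma_ops;

static int foo_set_dma_mask(struct device *dev, u64 mask)
{
	/* never allow a mask wider than what the host bridge can address */
	if (mask > foo_bridge_dma_limit)
		mask = foo_bridge_dma_limit;

	if (!dev->dma_mask || !dma_supported(dev, mask))
		return -EIO;

	*dev->dma_mask = mask;
	return 0;
}

static int foo_pci_notify(struct notifier_block *nb,
			  unsigned long action, void *data)
{
	struct device *dev = data;

	/* a real version would also check that dev sits below this bridge */
	if (action != BUS_NOTIFY_BIND_DRIVER)
		return NOTIFY_DONE;

	/* proxy the DMA ops, overriding only set_dma_mask */
	foo_dma_ops = *get_dma_ops(dev);
	foo_dma_ops.set_dma_mask = foo_set_dma_mask;
	dev->archdata.dma_ops = &foo_dma_ops;

	return NOTIFY_OK;
}

static struct notifier_block foo_pci_nb = {
	.notifier_call = foo_pci_notify,
};

/* called once from the host bridge driver's probe function */
static int foo_register_dma_limit(void)
{
	return bus_register_notifier(&pci_bus_type, &foo_pci_nb);
}
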
commit 9a57d58d116800a535510053136c6dd7a9c26e25
Author: Arnd Bergmann
Date:   Tue Nov 17 14:06:55 2015 +0100

    [EXPERIMENTAL] ARM64: check implement dma_set_mask

    Needs work for coherent mask

    Signed-off-by: Arnd Bergmann
---
diff --git a/arch/arm64/include/asm/device.h b/arch/arm64/include/asm/device.h
index 243ef256b8c9..a57e7bb10e71 100644
--- a/arch/arm64/include/asm/device.h
+++ b/arch/arm64/include/asm/device.h
@@ -22,6 +22,7 @@ struct dev_archdata {
 	void *iommu;			/* private IOMMU data */
 #endif
 	bool dma_coherent;
+	u64 parent_dma_mask;
 };
 
 struct pdev_archdata {
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 290a84f3351f..aa65875c611b 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -352,6 +352,31 @@ static int __swiotlb_dma_supported(struct device *hwdev, u64 mask)
 	return 1;
 }
 
+static int __swiotlb_set_dma_mask(struct device *dev, u64 mask)
+{
+	/* device is not DMA capable */
+	if (!dev->dma_mask)
+		return -EIO;
+
+	/* mask is below swiotlb bounce buffer, so fail */
+	if (!swiotlb_dma_supported(dev, mask))
+		return -EIO;
+
+	/*
+	 * because of the swiotlb, we can return success for
+	 * larger masks, but need to ensure that bounce buffers
+	 * are used above parent_dma_mask, so set that as
+	 * the effective mask.
+	 */
+	if (mask > dev->archdata.parent_dma_mask)
+		mask = dev->archdata.parent_dma_mask;
+
+
+	*dev->dma_mask = mask;
+
+	return 0;
+}
+
 static struct dma_map_ops swiotlb_dma_ops = {
 	.alloc = __dma_alloc,
 	.free = __dma_free,
@@ -367,6 +392,7 @@ static struct dma_map_ops swiotlb_dma_ops = {
 	.sync_sg_for_device = __swiotlb_sync_sg_for_device,
 	.dma_supported = __swiotlb_dma_supported,
 	.mapping_error = swiotlb_dma_mapping_error,
+	.set_dma_mask = __swiotlb_set_dma_mask,
 };
 
 static int __init atomic_pool_init(void)
@@ -957,6 +983,18 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 	if (!dev->archdata.dma_ops)
 		dev->archdata.dma_ops = &swiotlb_dma_ops;
 
+	/*
+	 * we don't yet support buses that have a non-zero mapping.
+	 * Let's hope we won't need it
+	 */
+	WARN_ON(dma_base != 0);
+
+	/*
+	 * Whatever the parent bus can set. A device must not set
+	 * a DMA mask larger than this.
+	 */
+	dev->archdata.parent_dma_mask = size;
+
 	dev->archdata.dma_coherent = coherent;
 	__iommu_setup_dma_ops(dev, dma_base, size, iommu);
 }
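
To show the intended effect on a driver: a device behind a bridge that can
only address, say, 32 bits could still ask for a 64-bit mask. The probe
function below is a purely hypothetical illustration, not taken from any
real driver.

#include <linux/dma-mapping.h>
#include <linux/pci.h>

/* hypothetical PCI endpoint driver probe, for illustration only */
static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	/*
	 * With the prototype above, this call goes through the new
	 * .set_dma_mask hook: it succeeds, but *dev->dma_mask is clamped
	 * to archdata.parent_dma_mask (the bridge limit), so streaming
	 * mappings above that limit get bounced through the swiotlb
	 * buffers.
	 */
	if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)))
		return -EIO;

	return 0;
}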