From patchwork Tue Jan 10 14:00:44 2017
X-Patchwork-Submitter: Nikita Yushchenko
X-Patchwork-Id: 9507543
X-Patchwork-Delegate: geert@linux-m68k.org
From: Nikita Yushchenko
To: Robin Murphy, Will Deacon, Arnd Bergmann
Cc: linux-arm-kernel@lists.infradead.org, linux-renesas-soc@vger.kernel.org,
    Simon Horman, Bjorn Helgaas, fkan@apm.com, Nikita Yushchenko
Subject: [PATCH] arm64: avoid increasing DMA masks above what hardware supports
Date: Tue, 10 Jan 2017 17:00:44 +0300
Message-Id: <1484056844-9567-1-git-send-email-nikita.yoush@cogentembedded.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <11daacde-5399-039f-80a3-01d7bd13e9e8@arm.com>
References: <11daacde-5399-039f-80a3-01d7bd13e9e8@arm.com>
X-Mailing-List: linux-renesas-soc@vger.kernel.org

There are cases when a device supports DMA addresses wider than its
connection to the bus allows. In such a case the driver sets the DMA
mask based on its knowledge of the device's capabilities, and that call
must succeed for the driver to initialize. However, swiotlb and the
IOMMU code still need to know the addressing limits that actually
apply. To avoid breakage, the mask that is actually stored must not be
set wider than what the device's connection allows.

Signed-off-by: Nikita Yushchenko
CC: Arnd Bergmann
CC: Robin Murphy
CC: Will Deacon
---
 arch/arm64/Kconfig                   |  3 +++
 arch/arm64/include/asm/device.h      |  1 +
 arch/arm64/include/asm/dma-mapping.h |  3 +++
 arch/arm64/mm/dma-mapping.c          | 43 ++++++++++++++++++++++++++++++++++++
 4 files changed, 50 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1117421..afb2c08 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -216,6 +216,9 @@ config NEED_DMA_MAP_STATE
 config NEED_SG_DMA_LENGTH
 	def_bool y
 
+config ARCH_HAS_DMA_SET_COHERENT_MASK
+	def_bool y
+
 config SMP
 	def_bool y
 
diff --git a/arch/arm64/include/asm/device.h b/arch/arm64/include/asm/device.h
index 243ef25..a57e7bb 100644
--- a/arch/arm64/include/asm/device.h
+++ b/arch/arm64/include/asm/device.h
@@ -22,6 +22,7 @@ struct dev_archdata {
 	void *iommu;			/* private IOMMU data */
 #endif
 	bool dma_coherent;
+	u64 parent_dma_mask;
 };
 
 struct pdev_archdata {
diff --git a/arch/arm64/include/asm/dma-mapping.h b/arch/arm64/include/asm/dma-mapping.h
index ccea82c..eab36d2 100644
--- a/arch/arm64/include/asm/dma-mapping.h
+++ b/arch/arm64/include/asm/dma-mapping.h
@@ -51,6 +51,9 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 			const struct iommu_ops *iommu, bool coherent);
 #define arch_setup_dma_ops	arch_setup_dma_ops
 
+#define HAVE_ARCH_DMA_SET_MASK 1
+extern int dma_set_mask(struct device *dev, u64 dma_mask);
+
 #ifdef CONFIG_IOMMU_DMA
 void arch_teardown_dma_ops(struct device *dev);
 #define arch_teardown_dma_ops	arch_teardown_dma_ops
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index e040827..7b1bb87 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -203,6 +203,37 @@ static void __dma_free(struct device *dev, size_t size,
 	__dma_free_coherent(dev, size, swiotlb_addr, dma_handle, attrs);
 }
 
+int dma_set_mask(struct device *dev, u64 mask)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	if (mask > dev->archdata.parent_dma_mask)
+		mask = dev->archdata.parent_dma_mask;
+
+	if (ops->set_dma_mask)
+		return ops->set_dma_mask(dev, mask);
+
+	if (!dev->dma_mask || !dma_supported(dev, mask))
+		return -EIO;
+
+	*dev->dma_mask = mask;
+	return 0;
+}
+EXPORT_SYMBOL(dma_set_mask);
+
+int dma_set_coherent_mask(struct device *dev, u64 mask)
+{
+	if (mask > dev->archdata.parent_dma_mask)
+		mask = dev->archdata.parent_dma_mask;
+
+	if (!dma_supported(dev, mask))
+		return -EIO;
+
+	dev->coherent_dma_mask = mask;
+	return 0;
+}
+EXPORT_SYMBOL(dma_set_coherent_mask);
+
 static dma_addr_t __swiotlb_map_page(struct device *dev, struct page *page,
 				     unsigned long offset, size_t size,
 				     enum dma_data_direction dir,
@@ -958,6 +989,18 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 	if (!dev->archdata.dma_ops)
 		dev->archdata.dma_ops = &swiotlb_dma_ops;
 
+	/*
+	 * we don't yet support buses that have a non-zero mapping.
+	 * Let's hope we won't need it
+	 */
+	WARN_ON(dma_base != 0);
+
+	/*
+	 * Whatever the parent bus can set. A device must not set
+	 * a DMA mask larger than this.
+	 */
+	dev->archdata.parent_dma_mask = size - 1;
+
 	dev->archdata.dma_coherent = coherent;
 	__iommu_setup_dma_ops(dev, dma_base, size, iommu);
 }
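
[Editor's note, not part of the patch] For illustration, below is a minimal
sketch of how a driver would hit this path. The probe function and device are
hypothetical; the point is that a 64-bit mask request from the driver still
succeeds, while the mask actually stored is capped at parent_dma_mask, so
swiotlb and the IOMMU code keep working with addresses the link can reach.

#include <linux/dma-mapping.h>
#include <linux/platform_device.h>

/* Hypothetical driver probe fragment, for illustration only. */
static int example_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	int ret;

	/* Request the widest mask the device itself can handle. */
	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
	if (ret)
		return ret;	/* not even the capped mask is usable */

	/*
	 * With this patch applied, *dev->dma_mask and dev->coherent_dma_mask
	 * may now hold a narrower value (e.g. DMA_BIT_MASK(32)) than was
	 * requested, because dma_set_mask()/dma_set_coherent_mask() clamp
	 * the request to dev->archdata.parent_dma_mask; the DMA mapping
	 * helpers then honour that narrower mask automatically.
	 */
	return 0;
}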