From patchwork Mon Jul 17 08:58:05 2017
X-Patchwork-Submitter: Vladimir Murzin
X-Patchwork-Id: 9844299
From: Vladimir Murzin <vladimir.murzin@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH 2/2] ARM: NOMMU: Wire-up default DMA interface
Date: Mon, 17 Jul 2017 09:58:05 +0100
Message-Id: <1500281885-3034-3-git-send-email-vladimir.murzin@arm.com>
In-Reply-To: <1500281885-3034-1-git-send-email-vladimir.murzin@arm.com>
References: <1500281885-3034-1-git-send-email-vladimir.murzin@arm.com>
X-Mailer: git-send-email 2.0.0
Cc: vitaly_kuzmichev@mentor.com, sza@esh.hu, arnd@arndb.de,
    gregkh@linuxfoundation.org, linux-kernel@vger.kernel.org,
    linux@armlinux.org.uk, george_davis@mentor.com, kbuild-all@01.org,
    benjamin.gaignard@linaro.org, akpm@linux-foundation.org,
    m.szyprowski@samsung.com, robin.murphy@arm.com, hch@lst.de,
    alexandre.torgue@st.com

The way the default DMA pool is exposed has changed, and we now need to
use a dedicated interface to work with it. This patch switches the
alloc/release operations to that interface. Since the default DMA pool
is no longer handled by generic code, we also have to implement our own
mmap operation.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
---
 arch/arm/mm/dma-mapping-nommu.c | 45 ++++++++++++++++++++++++++++++++---------
 1 file changed, 36 insertions(+), 9 deletions(-)

diff --git a/arch/arm/mm/dma-mapping-nommu.c b/arch/arm/mm/dma-mapping-nommu.c
index 90ee354..6db5fc2 100644
--- a/arch/arm/mm/dma-mapping-nommu.c
+++ b/arch/arm/mm/dma-mapping-nommu.c
@@ -40,9 +40,21 @@ static void *arm_nommu_dma_alloc(struct device *dev, size_t size,
 
 {
 	const struct dma_map_ops *ops = &dma_noop_ops;
+	void *ret;
 
 	/*
-	 * We are here because:
+	 * Try generic allocator first if we are advertised that
+	 * consistency is not required.
+	 */
+
+	if (attrs & DMA_ATTR_NON_CONSISTENT)
+		return ops->alloc(dev, size, dma_handle, gfp, attrs);
+
+	ret = dma_alloc_from_global_coherent(size, dma_handle);
+
+	/*
+	 * dma_alloc_from_global_coherent() may fail because:
+	 *
 	 * - no consistent DMA region has been defined, so we can't
 	 *   continue.
 	 * - there is no space left in consistent DMA region, so we
@@ -50,11 +62,8 @@ static void *arm_nommu_dma_alloc(struct device *dev, size_t size,
 	 *   advertised that consistency is not required.
 	 */
 
-	if (attrs & DMA_ATTR_NON_CONSISTENT)
-		return ops->alloc(dev, size, dma_handle, gfp, attrs);
-
-	WARN_ON_ONCE(1);
-	return NULL;
+	WARN_ON_ONCE(ret == NULL);
+	return ret;
 }
 
 static void arm_nommu_dma_free(struct device *dev, size_t size,
@@ -63,14 +72,31 @@ static void arm_nommu_dma_free(struct device *dev, size_t size,
 {
 	const struct dma_map_ops *ops = &dma_noop_ops;
 
-	if (attrs & DMA_ATTR_NON_CONSISTENT)
+	if (attrs & DMA_ATTR_NON_CONSISTENT) {
 		ops->free(dev, size, cpu_addr, dma_addr, attrs);
-	else
-		WARN_ON_ONCE(1);
+	} else {
+		int ret = dma_release_from_global_coherent(get_order(size),
+							   cpu_addr);
+
+		WARN_ON_ONCE(ret == 0);
+	}
 
 	return;
 }
 
+static int arm_nommu_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+			      void *cpu_addr, dma_addr_t dma_addr, size_t size,
+			      unsigned long attrs)
+{
+	int ret;
+
+	if (dma_mmap_from_global_coherent(vma, cpu_addr, size, &ret))
+		return ret;
+
+	return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size);
+}
+
+
 static void __dma_page_cpu_to_dev(phys_addr_t paddr, size_t size,
 				  enum dma_data_direction dir)
 {
@@ -173,6 +199,7 @@ static void arm_nommu_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist
 const struct dma_map_ops arm_nommu_dma_ops = {
 	.alloc			= arm_nommu_dma_alloc,
 	.free			= arm_nommu_dma_free,
+	.mmap			= arm_nommu_dma_mmap,
 	.map_page		= arm_nommu_dma_map_page,
 	.unmap_page		= arm_nommu_dma_unmap_page,
 	.map_sg			= arm_nommu_dma_map_sg,
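For context, here is a minimal, illustrative sketch (not part of the patch) of
how a driver-side caller would exercise the two allocation paths touched above;
example_dma_usage(), the dev pointer and the PAGE_SIZE buffer size are
placeholders, and the new .mmap entry would normally be reached via
dma_mmap_attrs()/dma_mmap_coherent() from a driver's mmap handler rather than
called directly:

/*
 * Illustrative sketch only, not part of the patch: "dev" and the
 * PAGE_SIZE-sized buffer are placeholders.
 */
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

static int example_dma_usage(struct device *dev)
{
	dma_addr_t dma;
	void *cpu;

	/*
	 * No DMA_ATTR_NON_CONSISTENT: arm_nommu_dma_alloc() goes to
	 * dma_alloc_from_global_coherent(), i.e. the reserved pool.
	 */
	cpu = dma_alloc_attrs(dev, PAGE_SIZE, &dma, GFP_KERNEL, 0);
	if (!cpu)
		return -ENOMEM;
	dma_free_attrs(dev, PAGE_SIZE, cpu, dma, 0);

	/*
	 * DMA_ATTR_NON_CONSISTENT: falls through to the generic
	 * (dma_noop) allocator instead of the global pool.
	 */
	cpu = dma_alloc_attrs(dev, PAGE_SIZE, &dma, GFP_KERNEL,
			      DMA_ATTR_NON_CONSISTENT);
	if (!cpu)
		return -ENOMEM;
	dma_free_attrs(dev, PAGE_SIZE, cpu, dma, DMA_ATTR_NON_CONSISTENT);

	return 0;
}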