From patchwork Mon Jun 26 09:18:58 2017
X-Patchwork-Submitter: Vladimir Murzin
X-Patchwork-Id: 9808943
From: Vladimir Murzin
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v6 4/7] drivers: dma-coherent: Introduce default DMA pool
Date: Mon, 26 Jun 2017 10:18:58 +0100
Message-Id: <1498468741-12020-5-git-send-email-vladimir.murzin@arm.com>
X-Mailer: git-send-email 2.0.0
In-Reply-To: <1498468741-12020-1-git-send-email-vladimir.murzin@arm.com>
References: <1498468741-12020-1-git-send-email-vladimir.murzin@arm.com>
Cc: Mark Rutland, sza@esh.hu, arnd@arndb.de, gregkh@linuxfoundation.org,
    linux-kernel@vger.kernel.org, Michal Nazarewicz, linux@armlinux.org.uk,
    Rob Herring, kbuild-all@01.org, benjamin.gaignard@linaro.org,
    akpm@linux-foundation.org, m.szyprowski@samsung.com, robin.murphy@arm.com,
    hch@lst.de, alexandre.torgue@st.com

This patch introduces a default coherent DMA pool, similar to the default
CMA area concept. To keep other users safe, the code is kept under
CONFIG_ARM.

Cc: Michal Nazarewicz
Cc: Marek Szyprowski
Cc: Rob Herring
Cc: Mark Rutland
Cc: Greg Kroah-Hartman
Suggested-by: Robin Murphy
Tested-by: Benjamin Gaignard
Tested-by: Andras Szemzo
Tested-by: Alexandre TORGUE
Signed-off-by: Vladimir Murzin
---
An illustrative reserved-memory node and a driver-side usage sketch are
appended after the diff for reference.

 .../bindings/reserved-memory/reserved-memory.txt   |  3 ++
 drivers/base/dma-coherent.c                        | 59 +++++++++++++++++++---
 2 files changed, 55 insertions(+), 7 deletions(-)

diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
index 3da0ebd..16291f2 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
@@ -68,6 +68,9 @@ Linux implementation note:
 - If a "linux,cma-default" property is present, then Linux will use the
   region for the default pool of the contiguous memory allocator.
 
+- If a "linux,dma-default" property is present, then Linux will use the
+  region for the default pool of the consistent DMA allocator.
+
 Device node references to reserved memory
 -----------------------------------------
 Regions in the /reserved-memory node may be referenced by other device
diff --git a/drivers/base/dma-coherent.c b/drivers/base/dma-coherent.c
index 99c9695..2ae24c2 100644
--- a/drivers/base/dma-coherent.c
+++ b/drivers/base/dma-coherent.c
@@ -19,6 +19,15 @@ struct dma_coherent_mem {
 	bool		use_dev_dma_pfn_offset;
 };
 
+static struct dma_coherent_mem *dma_coherent_default_memory __ro_after_init;
+
+static inline struct dma_coherent_mem *dev_get_coherent_memory(struct device *dev)
+{
+	if (dev && dev->dma_mem)
+		return dev->dma_mem;
+	return dma_coherent_default_memory;
+}
+
 static inline dma_addr_t dma_get_device_base(struct device *dev,
 					     struct dma_coherent_mem * mem)
 {
@@ -93,6 +102,9 @@ static void dma_release_coherent_memory(struct dma_coherent_mem *mem)
 static int dma_assign_coherent_memory(struct device *dev,
 				      struct dma_coherent_mem *mem)
 {
+	if (!dev)
+		return -ENODEV;
+
 	if (dev->dma_mem)
 		return -EBUSY;
 
@@ -171,15 +183,12 @@ EXPORT_SYMBOL(dma_mark_declared_memory_occupied);
 int dma_alloc_from_coherent(struct device *dev, ssize_t size,
 				       dma_addr_t *dma_handle, void **ret)
 {
-	struct dma_coherent_mem *mem;
+	struct dma_coherent_mem *mem = dev_get_coherent_memory(dev);
 	int order = get_order(size);
 	unsigned long flags;
 	int pageno;
 	int dma_memory_map;
 
-	if (!dev)
-		return 0;
-	mem = dev->dma_mem;
 	if (!mem)
 		return 0;
 
@@ -233,7 +242,7 @@ EXPORT_SYMBOL(dma_alloc_from_coherent);
  */
 int dma_release_from_coherent(struct device *dev, int order, void *vaddr)
 {
-	struct dma_coherent_mem *mem = dev ? dev->dma_mem : NULL;
+	struct dma_coherent_mem *mem = dev_get_coherent_memory(dev);
 
 	if (mem && vaddr >= mem->virt_base && vaddr <
 		   (mem->virt_base + (mem->size << PAGE_SHIFT))) {
@@ -267,7 +276,7 @@ EXPORT_SYMBOL(dma_release_from_coherent);
 int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
 			   void *vaddr, size_t size, int *ret)
 {
-	struct dma_coherent_mem *mem = dev ? dev->dma_mem : NULL;
+	struct dma_coherent_mem *mem = dev_get_coherent_memory(dev);
 
 	if (mem && vaddr >= mem->virt_base && vaddr + size <=
 		   (mem->virt_base + (mem->size << PAGE_SHIFT))) {
@@ -297,6 +306,8 @@ EXPORT_SYMBOL(dma_mmap_from_coherent);
 #include <linux/of_fdt.h>
 #include <linux/of_reserved_mem.h>
 
+static struct reserved_mem *dma_reserved_default_memory __initdata;
+
 static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
 {
 	struct dma_coherent_mem *mem = rmem->priv;
@@ -318,7 +329,8 @@ static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
 static void rmem_dma_device_release(struct reserved_mem *rmem,
 				    struct device *dev)
 {
-	dev->dma_mem = NULL;
+	if (dev)
+		dev->dma_mem = NULL;
 }
 
 static const struct reserved_mem_ops rmem_dma_ops = {
@@ -338,6 +350,12 @@ static int __init rmem_dma_setup(struct reserved_mem *rmem)
 		pr_err("Reserved memory: regions without no-map are not yet supported\n");
 		return -EINVAL;
 	}
+
+	if (of_get_flat_dt_prop(node, "linux,dma-default", NULL)) {
+		WARN(dma_reserved_default_memory,
+		     "Reserved memory: region for default DMA coherent area is redefined\n");
+		dma_reserved_default_memory = rmem;
+	}
 #endif
 
 	rmem->ops = &rmem_dma_ops;
@@ -345,5 +363,32 @@ static int __init rmem_dma_setup(struct reserved_mem *rmem)
 		&rmem->base, (unsigned long)rmem->size / SZ_1M);
 	return 0;
 }
+
+static int __init dma_init_reserved_memory(void)
+{
+	const struct reserved_mem_ops *ops;
+	int ret;
+
+	if (!dma_reserved_default_memory)
+		return -ENOMEM;
+
+	ops = dma_reserved_default_memory->ops;
+
+	/*
+	 * We rely on rmem_dma_device_init() not propagating an error from
+	 * dma_assign_coherent_memory() when it is called with a NULL device.
+	 */
+	ret = ops->device_init(dma_reserved_default_memory, NULL);
+
+	if (!ret) {
+		dma_coherent_default_memory = dma_reserved_default_memory->priv;
+		pr_info("DMA: default coherent area is set\n");
+	}
+
+	return ret;
+}
+
+core_initcall(dma_init_reserved_memory);
+
 RESERVEDMEM_OF_DECLARE(dma, "shared-dma-pool", rmem_dma_setup);
 #endif
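
For reference, not part of the patch: a minimal reserved-memory node that
would exercise the new "linux,dma-default" property might look like the
sketch below. The node name, address and size are made up for illustration;
"no-map" is included because rmem_dma_setup() still rejects mapped regions
under CONFIG_ARM.

	reserved-memory {
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		/* Hypothetical placement; only the properties matter here. */
		dma_coherent: linux,dma@78000000 {
			compatible = "shared-dma-pool";
			reg = <0x78000000 0x800000>;
			no-map;
			linux,dma-default;
		};
	};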
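
On the driver side nothing changes; the sketch below (hypothetical driver,
also not part of the patch) only illustrates that dma_alloc_coherent()
still goes through dma_alloc_from_coherent(), which after this patch can be
satisfied from dma_coherent_default_memory when the device has no
per-device pool of its own.

	#include <linux/dma-mapping.h>
	#include <linux/platform_device.h>
	#include <linux/sizes.h>

	static int example_probe(struct platform_device *pdev)
	{
		dma_addr_t dma_handle;
		void *vaddr;

		/*
		 * dma_alloc_coherent() tries dma_alloc_from_coherent() first;
		 * with this patch a device without its own coherent pool is
		 * served from the default pool declared via
		 * "linux,dma-default" (if one was set up).
		 */
		vaddr = dma_alloc_coherent(&pdev->dev, SZ_4K, &dma_handle,
					   GFP_KERNEL);
		if (!vaddr)
			return -ENOMEM;

		/* ... hand dma_handle to the hardware, use vaddr from the CPU ... */

		dma_free_coherent(&pdev->dev, SZ_4K, vaddr, dma_handle);
		return 0;
	}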