From patchwork Mon Jul 14 08:28:07 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Marek Szyprowski
X-Patchwork-Id: 4543351
From: Marek Szyprowski <m.szyprowski@samsung.com>
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: Jon Medhurst, devicetree@vger.kernel.org, Laura Abbott,
 Andrew Morton, Arnd Bergmann, Josh Cartwright, Catalin Marinas,
 Tomasz Figa, Will Deacon, Michal Nazarewicz,
 linaro-mm-sig@lists.linaro.org, Marc, Nishanth Peethambaran,
 Paul Mackerras, "Aneesh Kumar K.V.", Grant Likely, Joonsoo Kim,
 Sascha Hauer, Marek Szyprowski
Subject: [PATCH v2 RESEND 4/4] drivers: dma-contiguous: add initialization from device tree
Date: Mon, 14 Jul 2014 10:28:07 +0200
Message-id: <1405326487-15346-5-git-send-email-m.szyprowski@samsung.com>
X-Mailer: git-send-email 1.9.2
In-reply-to: <1405326487-15346-1-git-send-email-m.szyprowski@samsung.com>
References: <1405326487-15346-1-git-send-email-m.szyprowski@samsung.com>

Add a function to create a CMA region from previously reserved memory
and add support for handling 'shared-dma-pool' reserved-memory device
tree nodes.
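
For reference, a matching reserved-memory node might look like the
sketch below. This is only an illustration (node name, label, address
and size are made up); the properties are the ones this patch actually
checks: 'reusable' is required, 'no-map' is rejected, and the optional
'linux,cma-default' makes the pool the default CMA area:

	reserved-memory {
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		multimedia_pool: multimedia@77000000 {
			compatible = "shared-dma-pool";
			reusable;
			reg = <0x77000000 0x4000000>;	/* 64 MiB */
			linux,cma-default;
		};
	};

Note that base and size must honour the alignment that rmem_cma_setup()
enforces (MAX_ORDER/pageblock alignment, typically a few MiB on ARM),
otherwise the node is rejected with -EINVAL.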
Based on previous code provided by Josh Cartwright

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
 drivers/base/dma-contiguous.c | 67 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/cma.h           |  3 ++
 mm/cma.c                      | 62 ++++++++++++++++++++++++++++++++-------
 3 files changed, 121 insertions(+), 11 deletions(-)

diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 6606abdf880c..b77ea8bac176 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -211,3 +211,70 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
 {
 	return cma_release(dev_get_cma_area(dev), pages, count);
 }
+
+/*
+ * Support for reserved memory regions defined in device tree
+ */
+#ifdef CONFIG_OF_RESERVED_MEM
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+
+#undef pr_fmt
+#define pr_fmt(fmt) fmt
+
+static void rmem_cma_device_init(struct reserved_mem *rmem, struct device *dev)
+{
+	struct cma *cma = rmem->priv;
+	dev_set_cma_area(dev, cma);
+}
+
+static void rmem_cma_device_release(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	dev_set_cma_area(dev, NULL);
+}
+
+static const struct reserved_mem_ops rmem_cma_ops = {
+	.device_init	= rmem_cma_device_init,
+	.device_release	= rmem_cma_device_release,
+};
+
+static int __init rmem_cma_setup(struct reserved_mem *rmem)
+{
+	phys_addr_t align = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
+	phys_addr_t mask = align - 1;
+	unsigned long node = rmem->fdt_node;
+	struct cma *cma;
+	int err;
+
+	if (!of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	if ((rmem->base & mask) || (rmem->size & mask)) {
+		pr_err("Reserved memory: incorrect alignment of CMA region\n");
+		return -EINVAL;
+	}
+
+	err = cma_init_reserved_mem(rmem->base, rmem->size, 0, &cma);
+	if (err) {
+		pr_err("Reserved memory: unable to setup CMA region\n");
+		return err;
+	}
+	/* Architecture specific contiguous memory fixup. */
+	dma_contiguous_early_fixup(rmem->base, rmem->size);
+
+	if (of_get_flat_dt_prop(node, "linux,cma-default", NULL))
+		dma_contiguous_set_default(cma);
+
+	rmem->ops = &rmem_cma_ops;
+	rmem->priv = cma;
+
+	pr_info("Reserved memory: created CMA memory pool at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+
+	return 0;
+}
+RESERVEDMEM_OF_DECLARE(cma, "shared-dma-pool", rmem_cma_setup);
+#endif

diff --git a/include/linux/cma.h b/include/linux/cma.h
index 32cab7a425f9..9a18a2b1934c 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -16,6 +16,9 @@ extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
 			phys_addr_t limit, phys_addr_t alignment,
 			unsigned int order_per_bit, bool fixed,
 			struct cma **res_cma);
+extern int cma_init_reserved_mem(phys_addr_t base,
+			phys_addr_t size, int order_per_bit,
+			struct cma **res_cma);
 extern struct page *cma_alloc(struct cma *cma, int count, unsigned int align);
 extern bool cma_release(struct cma *cma, struct page *pages, int count);
 #endif

diff --git a/mm/cma.c b/mm/cma.c
index 4b251b037e1b..b3d8b925ad34 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -140,6 +140,54 @@ static int __init cma_init_reserved_areas(void)
 core_initcall(cma_init_reserved_areas);
 
 /**
+ * cma_init_reserved_mem() - create custom contiguous area from reserved memory
+ * @base: Base address of the reserved area
+ * @size: Size of the reserved area (in bytes).
+ * @order_per_bit: Order of pages represented by one bit on bitmap.
+ * @res_cma: Pointer to store the created cma region.
+ *
+ * This function creates custom contiguous area from already reserved memory.
+ */
+int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
+				 int order_per_bit, struct cma **res_cma)
+{
+	struct cma *cma;
+	phys_addr_t alignment;
+
+	/* Sanity checks */
+	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
+		pr_err("Not enough slots for CMA reserved regions!\n");
+		return -ENOSPC;
+	}
+
+	if (!size || !memblock_is_region_reserved(base, size))
+		return -EINVAL;
+
+	/* ensure minimal alignment required by mm core */
+	alignment = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
+
+	/* alignment should be aligned with order_per_bit */
+	if (!IS_ALIGNED(alignment >> PAGE_SHIFT, 1 << order_per_bit))
+		return -EINVAL;
+
+	if (ALIGN(base, alignment) != base || ALIGN(size, alignment) != size)
+		return -EINVAL;
+
+	/*
+	 * Each reserved area must be initialised later, when more kernel
+	 * subsystems (like slab allocator) are available.
+	 */
+	cma = &cma_areas[cma_area_count];
+	cma->base_pfn = PFN_DOWN(base);
+	cma->count = size >> PAGE_SHIFT;
+	cma->order_per_bit = order_per_bit;
+	*res_cma = cma;
+	cma_area_count++;
+
+	return 0;
+}
+
+/**
  * cma_declare_contiguous() - reserve custom contiguous area
  * @base: Base address of the reserved area optional, use 0 for any
  * @size: Size of the reserved area (in bytes),
@@ -162,7 +210,6 @@ int __init cma_declare_contiguous(phys_addr_t base,
 			phys_addr_t alignment, unsigned int order_per_bit,
 			bool fixed, struct cma **res_cma)
 {
-	struct cma *cma;
 	int ret = 0;
 
 	pr_debug("%s(size %lx, base %08lx, limit %08lx alignment %08lx)\n",
@@ -214,16 +261,9 @@ int __init cma_declare_contiguous(phys_addr_t base,
 		}
 	}
 
-	/*
-	 * Each reserved area must be initialised later, when more kernel
-	 * subsystems (like slab allocator) are available.
-	 */
-	cma = &cma_areas[cma_area_count];
-	cma->base_pfn = PFN_DOWN(base);
-	cma->count = size >> PAGE_SHIFT;
-	cma->order_per_bit = order_per_bit;
-	*res_cma = cma;
-	cma_area_count++;
+	ret = cma_init_reserved_mem(base, size, order_per_bit, res_cma);
+	if (ret)
+		goto err;
 
 	pr_info("Reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
 		(unsigned long)base);
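
For completeness: a device node references such a pool through the
standard 'memory-region' phandle; the of_reserved_mem core (companion
patches in this area, not this patch itself) then invokes the
reserved_mem_ops registered above, so rmem_cma_device_init() makes the
pool the device's CMA area and dma_alloc_coherent() and friends
allocate from it. An illustrative consumer node, with a made-up device
and compatible string, might be:

	video-codec@12340000 {
		compatible = "vendor,example-codec";	/* hypothetical */
		reg = <0x12340000 0x1000>;
		memory-region = <&multimedia_pool>;
	};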