From patchwork Tue Dec 10 00:12:14 2013
X-Patchwork-Submitter: Laura Abbott
X-Patchwork-Id: 3313751
From: Laura Abbott
To: linux-arm-kernel@lists.infradead.org, Will Deacon, Catalin Marinas,
	Marek Szyprowski
Cc: Laura Abbott, Konrad Rzeszutek Wilk
Subject: [PATCH 3/3] swiotlb: Add support for CMA allocations
Date: Mon, 9 Dec 2013 16:12:14 -0800
Message-Id: <1386634334-31139-4-git-send-email-lauraa@codeaurora.org>
X-Mailer: git-send-email 1.7.8.3
In-Reply-To: <1386634334-31139-1-git-send-email-lauraa@codeaurora.org>
References: <1386634334-31139-1-git-send-email-lauraa@codeaurora.org>

Some architectures may implement the CMA APIs to allow allocation of
larger contiguous blocks of memory.
Add support in the swiotlb alloc/free functions to allocate from the
CMA APIs instead of the basic page allocator.

Cc: Will Deacon
Cc: Catalin Marinas
Cc: Konrad Rzeszutek Wilk
Cc: Marek Szyprowski
Signed-off-by: Laura Abbott
---
 lib/swiotlb.c | 92 +++++++++++++++++++++++++++++++++++++++++++++++++++++----
 1 files changed, 86 insertions(+), 6 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index e4399fa..77b4b17 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -29,6 +29,9 @@
 #include
 #include
 #include
+#include
+#include
+#include
 #include
 #include
@@ -610,6 +613,66 @@ void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 }
 EXPORT_SYMBOL_GPL(swiotlb_tbl_sync_single);
 
+static void * __alloc_from_contiguous(struct device *hwdev, size_t size,
+					struct page **ret_page)
+{
+	unsigned long order = get_order(size);
+	size_t count = size >> PAGE_SHIFT;
+	struct page *page;
+	void *ptr = NULL;
+
+	page = dma_alloc_from_contiguous(hwdev, count, order);
+	if (!page)
+		return NULL;
+
+	if (PageHighMem(page)) {
+		struct vm_struct *area;
+		unsigned long addr;
+
+		/*
+		 * DMA allocation can be mapped to user space, so lets
+		 * set VM_USERMAP flags too.
+		 */
+		area = get_vm_area(size, VM_USERMAP);
+		if (!area)
+			goto err;
+		addr = (unsigned long)area->addr;
+		area->phys_addr = __pfn_to_phys(page_to_pfn(page));
+
+		if (ioremap_page_range(addr, addr + size, area->phys_addr,
+					PAGE_KERNEL)) {
+			vunmap((void *)addr);
+			goto err;
+		}
+		ptr = area->addr;
+	} else {
+		ptr = page_address(page);
+	}
+
+	*ret_page = page;
+	return ptr;
+
+err:
+	dma_release_from_contiguous(hwdev, page, count);
+	return NULL;
+}
+
+static void __free_from_contiguous(struct device *hwdev, struct page *page,
+				void *cpu_addr, size_t size)
+{
+	if (PageHighMem(page)) {
+		struct vm_struct *area = find_vm_area(cpu_addr);
+		if (!area) {
+			WARN(1, "trying to free invalid coherent area: %p\n", cpu_addr);
+			return;
+		}
+		unmap_kernel_range((unsigned long)cpu_addr, size);
+		vunmap(cpu_addr);
+	}
+	dma_release_from_contiguous(hwdev, page, size >> PAGE_SHIFT);
+}
+
+
 void *
 swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 		       dma_addr_t *dma_handle, gfp_t flags)
@@ -618,18 +681,27 @@ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 	void *ret;
 	int order = get_order(size);
 	u64 dma_mask = DMA_BIT_MASK(32);
+	struct page *page;
 
 	if (hwdev && hwdev->coherent_dma_mask)
 		dma_mask = hwdev->coherent_dma_mask;
 
-	ret = (void *)__get_free_pages(flags, order);
-	if (ret) {
+	if (IS_ENABLED(CONFIG_DMA_CMA)) {
+		ret = __alloc_from_contiguous(hwdev, size, &page);
+		dev_addr = phys_to_dma(hwdev, page_to_phys(page));
+	} else {
+		ret = (void *)__get_free_pages(flags, order);
 		dev_addr = swiotlb_virt_to_bus(hwdev, ret);
+	}
+	if (ret) {
 		if (dev_addr + size - 1 > dma_mask) {
 			/*
 			 * The allocated memory isn't reachable by the device.
 			 */
-			free_pages((unsigned long) ret, order);
+			if(IS_ENABLED(CONFIG_DMA_CMA))
+				__free_from_contiguous(hwdev, page, ret, size);
+			else
+				free_pages((unsigned long) ret, order);
 			ret = NULL;
 		}
 	}
@@ -673,11 +745,19 @@ swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	phys_addr_t paddr = dma_to_phys(hwdev, dev_addr);
 
 	WARN_ON(irqs_disabled());
-	if (!is_swiotlb_buffer(paddr))
-		free_pages((unsigned long)vaddr, get_order(size));
-	else
+	if (!is_swiotlb_buffer(paddr)) {
+		if (IS_ENABLED(CONFIG_DMA_CMA)) {
+			__free_from_contiguous(hwdev,
+					pfn_to_page(paddr >> PAGE_SHIFT),
+					vaddr,
+					size);
+		} else {
+			free_pages((unsigned long)vaddr, get_order(size));
+		}
+	} else {
 		/* DMA_TO_DEVICE to avoid memcpy in swiotlb_tbl_unmap_single */
 		swiotlb_tbl_unmap_single(hwdev, paddr, size, DMA_TO_DEVICE);
+	}
 }
 EXPORT_SYMBOL(swiotlb_free_coherent);
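
For context, the sketch below is not part of the patch; it only illustrates the
driver-side path that ends in swiotlb_alloc_coherent()/swiotlb_free_coherent().
A driver calls dma_alloc_coherent()/dma_free_coherent(), and on configurations
whose dma_map_ops route coherent allocations through swiotlb, this patch makes
the buffer come from dma_alloc_from_contiguous() when CONFIG_DMA_CMA is enabled
instead of __get_free_pages(). The function and variable names used here
(example_setup, example_buf, and so on) are hypothetical.

/* Hypothetical driver-side sketch; names are illustrative only. */
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/gfp.h>

static void *example_buf;
static dma_addr_t example_handle;

static int example_setup(struct device *dev, size_t size)
{
	/*
	 * On swiotlb-backed configurations this reaches
	 * swiotlb_alloc_coherent(); with CONFIG_DMA_CMA enabled the
	 * buffer is carved out of the CMA region rather than taken
	 * from the buddy allocator.
	 */
	example_buf = dma_alloc_coherent(dev, size, &example_handle,
					 GFP_KERNEL);
	if (!example_buf)
		return -ENOMEM;
	return 0;
}

static void example_teardown(struct device *dev, size_t size)
{
	/*
	 * Mirrors the allocation: releases the buffer through
	 * swiotlb_free_coherent(), back to CMA (or the page allocator
	 * when CONFIG_DMA_CMA is off).
	 */
	dma_free_coherent(dev, size, example_buf, example_handle);
}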