From patchwork Thu Feb 7 14:59:24 2013
From: Marek Szyprowski
To: linux-arm-kernel@lists.infradead.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCHv3 1/2] ARM: dma-mapping: add support for CMA regions placed in highmem zone
Date: Thu, 07 Feb 2013 15:59:24 +0100
Message-id: <1360249164-1898-1-git-send-email-m.szyprowski@samsung.com>
X-Mailer: git-send-email 1.7.9.5
In-reply-to: <1359984182-6307-1-git-send-email-m.szyprowski@samsung.com>
References: <1359984182-6307-1-git-send-email-m.szyprowski@samsung.com>
Cc: Russell King - ARM Linux, Arnd Bergmann, Michal Nazarewicz,
 heesub.shin@samsung.com, Minchan Kim, Kyungmin Park,
 sj2202.park@samsung.com, lauraa@quicinc.com, Marek Szyprowski

This patch adds the missing pieces needed to correctly support memory
pages served from CMA regions placed in the high memory zone. Please
note that the default global CMA area is still placed in lowmem and is
limited by the optional architecture-specific DMA zone. One can,
however, put device-specific CMA regions in the high memory zone to
reduce lowmem usage.

Signed-off-by: Marek Szyprowski
Signed-off-by: Kyungmin Park
Acked-by: Michal Nazarewicz
---
Changelog:
v3: fixed build break for non-MMU builds (thanks to Thierry Reding!)
v2: restructured the code and made all highmem checks positive
    ('if (PageHighMem(page))' instead of 'if (!PageHighMem(page))')
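A usage illustration (not part of this patch): a device-private CMA
region lands in highmem simply by reserving it above the platform's
lowmem limit from the machine's ->reserve() hook. A minimal board-file
sketch, assuming the dma_declare_contiguous() interface of this kernel
generation; the device name and the addresses are hypothetical:

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/sizes.h>
#include <linux/dma-contiguous.h>

/* Hypothetical device that gets its own highmem-backed CMA area. */
extern struct device example_dma_dev;

/* Wired into MACHINE_START(...)->reserve, before memblock is frozen. */
static void __init example_reserve(void)
{
	/*
	 * 0x60000000/0x80000000 are illustrative; any base above the
	 * platform's lowmem limit makes the region come from highmem.
	 */
	if (dma_declare_contiguous(&example_dma_dev, SZ_64M,
				   0x60000000, 0x80000000))
		pr_warn("example: highmem CMA reservation failed\n");
}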
---
 arch/arm/mm/dma-mapping.c | 57 +++++++++++++++++++++++++++++++++------------
 1 file changed, 42 insertions(+), 15 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index de93ecd..5f79361 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -186,13 +186,24 @@ static u64 get_coherent_dma_mask(struct device *dev)
 
 static void __dma_clear_buffer(struct page *page, size_t size)
 {
-	void *ptr;
 	/*
 	 * Ensure that the allocated pages are zeroed, and that any data
 	 * lurking in the kernel direct-mapped region is invalidated.
 	 */
-	ptr = page_address(page);
-	if (ptr) {
+	if (PageHighMem(page)) {
+		phys_addr_t base = __pfn_to_phys(page_to_pfn(page));
+		phys_addr_t end = base + size;
+		while (size > 0) {
+			void *ptr = kmap_atomic(page);
+			memset(ptr, 0, PAGE_SIZE);
+			dmac_flush_range(ptr, ptr + PAGE_SIZE);
+			kunmap_atomic(ptr);
+			page++;
+			size -= PAGE_SIZE;
+		}
+		outer_flush_range(base, end);
+	} else {
+		void *ptr = page_address(page);
 		memset(ptr, 0, size);
 		dmac_flush_range(ptr, ptr + size);
 		outer_flush_range(__pa(ptr), __pa(ptr) + size);
@@ -243,7 +254,8 @@ static void __dma_free_buffer(struct page *page, size_t size)
 #endif
 
 static void *__alloc_from_contiguous(struct device *dev, size_t size,
-				     pgprot_t prot, struct page **ret_page);
+				     pgprot_t prot, struct page **ret_page,
+				     const void *caller);
 
 static void *__alloc_remap_buffer(struct device *dev, size_t size, gfp_t gfp,
 				 pgprot_t prot, struct page **ret_page,
@@ -346,10 +358,11 @@ static int __init atomic_pool_init(void)
 		goto no_pages;
 
 	if (IS_ENABLED(CONFIG_CMA))
-		ptr = __alloc_from_contiguous(NULL, pool->size, prot, &page);
+		ptr = __alloc_from_contiguous(NULL, pool->size, prot, &page,
+					      atomic_pool_init);
 	else
 		ptr = __alloc_remap_buffer(NULL, pool->size, GFP_KERNEL, prot,
-					   &page, NULL);
+					   &page, atomic_pool_init);
 	if (ptr) {
 		int i;
 
@@ -542,27 +555,41 @@ static int __free_from_pool(void *start, size_t size)
 }
 
 static void *__alloc_from_contiguous(struct device *dev, size_t size,
-				     pgprot_t prot, struct page **ret_page)
+				     pgprot_t prot, struct page **ret_page,
+				     const void *caller)
 {
 	unsigned long order = get_order(size);
 	size_t count = size >> PAGE_SHIFT;
 	struct page *page;
+	void *ptr;
 
 	page = dma_alloc_from_contiguous(dev, count, order);
 	if (!page)
 		return NULL;
 
 	__dma_clear_buffer(page, size);
-	__dma_remap(page, size, prot);
 
+	if (PageHighMem(page)) {
+		ptr = __dma_alloc_remap(page, size, GFP_KERNEL, prot, caller);
+		if (!ptr) {
+			dma_release_from_contiguous(dev, page, count);
+			return NULL;
+		}
+	} else {
+		__dma_remap(page, size, prot);
+		ptr = page_address(page);
+	}
 	*ret_page = page;
-	return page_address(page);
+	return ptr;
 }
 
 static void __free_from_contiguous(struct device *dev, struct page *page,
-				   size_t size)
+				   void *cpu_addr, size_t size)
 {
-	__dma_remap(page, size, pgprot_kernel);
+	if (PageHighMem(page))
+		__dma_free_remap(cpu_addr, size);
+	else
+		__dma_remap(page, size, pgprot_kernel);
 	dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT);
 }
 
@@ -583,9 +610,9 @@ static inline pgprot_t __get_dma_pgprot(struct dma_attrs *attrs, pgprot_t prot)
 #define __get_dma_pgprot(attrs, prot)				__pgprot(0)
 #define __alloc_remap_buffer(dev, size, gfp, prot, ret, c)	NULL
 #define __alloc_from_pool(size, ret_page)			NULL
-#define __alloc_from_contiguous(dev, size, prot, ret)		NULL
+#define __alloc_from_contiguous(dev, size, prot, ret, c)	NULL
 #define __free_from_pool(cpu_addr, size)			0
-#define __free_from_contiguous(dev, page, size)		do { } while (0)
+#define __free_from_contiguous(dev, page, cpu_addr, size)	do { } while (0)
 #define __dma_free_remap(cpu_addr, size)			do { } while (0)
 
 #endif	/* CONFIG_MMU */
@@ -645,7 +672,7 @@ static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
 	else if (!IS_ENABLED(CONFIG_CMA))
 		addr = __alloc_remap_buffer(dev, size, gfp, prot, &page, caller);
 	else
-		addr = __alloc_from_contiguous(dev, size, prot, &page);
+		addr = __alloc_from_contiguous(dev, size, prot, &page, caller);
 
 	if (addr)
 		*handle = pfn_to_dma(dev, page_to_pfn(page));
@@ -739,7 +766,7 @@ static void __arm_dma_free(struct device *dev, size_t size, void *cpu_addr,
 		 * Non-atomic allocations cannot be freed with IRQs disabled
 		 */
 		WARN_ON(irqs_disabled());
-		__free_from_contiguous(dev, page, size);
+		__free_from_contiguous(dev, page, cpu_addr, size);
 	}
 }
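Note for reviewers: the caller-visible DMA API does not change; the
PageHighMem() special-casing stays internal to __alloc_from_contiguous()
and __free_from_contiguous(). A hypothetical caller-side sketch (the
device and the buffer size are illustrative, not from this patch):

#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/sizes.h>

static int example_use_buffer(struct device *dev)
{
	dma_addr_t dma_handle;
	/* The backing pages may now come from a highmem CMA region;
	 * the kernel mapping is set up via __dma_alloc_remap() inside
	 * the allocator, invisibly to the caller. */
	void *cpu = dma_alloc_coherent(dev, SZ_1M, &dma_handle, GFP_KERNEL);

	if (!cpu)
		return -ENOMEM;
	/* ... hand dma_handle to the device, use cpu from the kernel ... */
	dma_free_coherent(dev, SZ_1M, cpu, dma_handle);
	return 0;
}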