From patchwork Mon Nov 11 23:26:49 2013
X-Patchwork-Submitter: Laura Abbott
X-Patchwork-Id: 3170191
From: Laura Abbott
To: linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org
Cc: Laura Abbott
Subject: [RFC PATCH 1/4] arm: mm: Add iotable_init_novmreserve
Date: Mon, 11 Nov 2013 15:26:49 -0800
Message-Id: <1384212412-21236-2-git-send-email-lauraa@codeaurora.org>
In-Reply-To: <1384212412-21236-1-git-send-email-lauraa@codeaurora.org>
References: <1384212412-21236-1-git-send-email-lauraa@codeaurora.org>

iotable_init is currently used by dma_contiguous_remap to remap CMA
memory appropriately. This has the side effect of reserving the area
of CMA in the vmalloc tracking structures. This is fine under normal
circumstances, but it creates conflicts if we want to track lowmem in
vmalloc. Since dma_contiguous_remap is only really concerned with the
remapping, introduce iotable_init_novmreserve to allow remapping of
pages without reserving the virtual address in vmalloc space.
Signed-off-by: Laura Abbott
---
 arch/arm/include/asm/mach/map.h |  2 ++
 arch/arm/mm/dma-mapping.c       |  2 +-
 arch/arm/mm/ioremap.c           |  5 +++--
 arch/arm/mm/mm.h                |  2 +-
 arch/arm/mm/mmu.c               | 17 ++++++++++++++---
 5 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/arch/arm/include/asm/mach/map.h b/arch/arm/include/asm/mach/map.h
index 2fe141f..02e3509 100644
--- a/arch/arm/include/asm/mach/map.h
+++ b/arch/arm/include/asm/mach/map.h
@@ -37,6 +37,7 @@ struct map_desc {

 #ifdef CONFIG_MMU
 extern void iotable_init(struct map_desc *, int);
+extern void iotable_init_novmreserve(struct map_desc *, int);
 extern void vm_reserve_area_early(unsigned long addr, unsigned long size,
 				  void *caller);

@@ -56,6 +57,7 @@ extern int ioremap_page(unsigned long virt, unsigned long phys,
 			const struct mem_type *mtype);
 #else
 #define iotable_init(map,num)	do { } while (0)
+#define iotable_init_novmreserve(map,num) do { } while(0)
 #define vm_reserve_area_early(a,s,c)	do { } while (0)
 #endif

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 7f9b179..bf80e43 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -435,7 +435,7 @@ void __init dma_contiguous_remap(void)
 		     addr += PMD_SIZE)
 			pmd_clear(pmd_off_k(addr));

-		iotable_init(&map, 1);
+		iotable_init_novmreserve(&map, 1);
 	}
 }

diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index f123d6e..ad92d4f 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -84,14 +84,15 @@ struct static_vm *find_static_vm_vaddr(void *vaddr)
 	return NULL;
 }

-void __init add_static_vm_early(struct static_vm *svm)
+void __init add_static_vm_early(struct static_vm *svm, bool add_to_vm)
 {
 	struct static_vm *curr_svm;
 	struct vm_struct *vm;
 	void *vaddr;

 	vm = &svm->vm;
-	vm_area_add_early(vm);
+	if (add_to_vm)
+		vm_area_add_early(vm);
 	vaddr = vm->addr;

 	list_for_each_entry(curr_svm, &static_vmlist, list) {
diff --git a/arch/arm/mm/mm.h b/arch/arm/mm/mm.h
index d5a4e9a..27a3680 100644
--- a/arch/arm/mm/mm.h
+++ b/arch/arm/mm/mm.h
@@ -75,7 +75,7 @@ struct static_vm {
 extern struct list_head static_vmlist;
 extern struct static_vm *find_static_vm_vaddr(void *vaddr);
-extern __init void add_static_vm_early(struct static_vm *svm);
+extern __init void add_static_vm_early(struct static_vm *svm, bool add_to_vm);

 #endif

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 53cdbd3..b83ed88 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -817,7 +817,8 @@ static void __init create_mapping(struct map_desc *md)
 /*
  * Create the architecture specific mappings
  */
-void __init iotable_init(struct map_desc *io_desc, int nr)
+static void __init __iotable_init(struct map_desc *io_desc, int nr,
+				  bool add_to_vm)
 {
 	struct map_desc *md;
 	struct vm_struct *vm;
@@ -838,10 +839,20 @@ void __init iotable_init(struct map_desc *io_desc, int nr)
 		vm->flags = VM_IOREMAP | VM_ARM_STATIC_MAPPING;
 		vm->flags |= VM_ARM_MTYPE(md->type);
 		vm->caller = iotable_init;
-		add_static_vm_early(svm++);
+		add_static_vm_early(svm++, add_to_vm);
 	}
 }

+void __init iotable_init(struct map_desc *io_desc, int nr)
+{
+	return __iotable_init(io_desc, nr, true);
+}
+
+void __init iotable_init_novmreserve(struct map_desc *io_desc, int nr)
+{
+	return __iotable_init(io_desc, nr, false);
+}
+
 void __init vm_reserve_area_early(unsigned long addr, unsigned long size,
 				  void *caller)
 {
@@ -855,7 +866,7 @@ void __init vm_reserve_area_early(unsigned long addr, unsigned long size,
 	vm->size = size;
 	vm->flags = VM_IOREMAP | VM_ARM_EMPTY_MAPPING;
 	vm->caller = caller;
-	add_static_vm_early(svm);
+	add_static_vm_early(svm, true);
 }

 #ifndef CONFIG_ARM_LPAE