From patchwork Thu Jan 2 21:53:28 2014
X-Patchwork-Submitter: Laura Abbott
X-Patchwork-Id: 3427231
From: Laura Abbott
To: Andrew Morton, Kyungmin Park, Dave Hansen, linux-mm@kvack.org,
	Russell King
Cc: linux-arm-kernel@lists.infradead.org, Laura Abbott,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	kvmarm@lists.cs.columbia.edu
Subject: [RFC PATCHv3 10/11] arm: Use for_each_potential_vmalloc_area
Date: Thu, 2 Jan 2014 13:53:28 -0800
Message-Id: <1388699609-18214-11-git-send-email-lauraa@codeaurora.org>
In-Reply-To: <1388699609-18214-1-git-send-email-lauraa@codeaurora.org>
References: <1388699609-18214-1-git-send-email-lauraa@codeaurora.org>
X-Mailer: git-send-email 1.7.8.3

With CONFIG_INTERMIX_VMALLOC it is no longer the case that all vmalloc
space is contained between VMALLOC_START and VMALLOC_END. Some code
still relies on operating on all of those regions, however. Use
for_each_potential_vmalloc_area where appropriate so that the necessary
work is applied to every potential vmalloc region.

Signed-off-by: Laura Abbott
---
 arch/arm/kvm/mmu.c    | 12 ++++++++----
 arch/arm/mm/ioremap.c | 12 ++++++++----
 arch/arm/mm/mmu.c     |  9 +++++++--
 3 files changed, 23 insertions(+), 10 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 58090698..4d2ca7e 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -225,16 +225,20 @@ void free_boot_hyp_pgd(void)
 void free_hyp_pgds(void)
 {
 	unsigned long addr;
+	int i;
+	unsigned long vstart, vend;
 
 	free_boot_hyp_pgd();
 
 	mutex_lock(&kvm_hyp_pgd_mutex);
 
 	if (hyp_pgd) {
-		for (addr = PAGE_OFFSET; virt_addr_valid(addr); addr += PGDIR_SIZE)
-			unmap_range(NULL, hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
-		for (addr = VMALLOC_START; is_vmalloc_addr((void*)addr); addr += PGDIR_SIZE)
-			unmap_range(NULL, hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
+		for_each_potential_nonvmalloc_area(&vstart, &vend, &i)
+			for (addr = vstart; addr < vend; addr += PGDIR_SIZE)
+				unmap_range(NULL, hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
+		for_each_potential_vmalloc_area(&vstart, &vend, &i)
+			for (addr = vstart; addr < vend; addr += PGDIR_SIZE)
+				unmap_range(NULL, hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
 
 		kfree(hyp_pgd);
 		hyp_pgd = NULL;
diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index ad92d4f..892bc82 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -115,13 +115,17 @@ EXPORT_SYMBOL(ioremap_page);
 void __check_vmalloc_seq(struct mm_struct *mm)
 {
 	unsigned int seq;
+	int i;
+	unsigned long vstart, vend;
 
 	do {
 		seq = init_mm.context.vmalloc_seq;
-		memcpy(pgd_offset(mm, VMALLOC_START),
-		       pgd_offset_k(VMALLOC_START),
-		       sizeof(pgd_t) * (pgd_index(VMALLOC_END) -
-					pgd_index(VMALLOC_START)));
+
+		for_each_potential_vmalloc_area(&vstart, &vend, &i)
+			memcpy(pgd_offset(mm, vstart),
+			       pgd_offset_k(vstart),
+			       sizeof(pgd_t) * (pgd_index(vend) -
+						pgd_index(vstart)));
 		mm->context.vmalloc_seq = seq;
 	} while (seq != init_mm.context.vmalloc_seq);
 }
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 55bd742..af8e43c 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1217,6 +1217,8 @@ static void __init devicemaps_init(const struct machine_desc *mdesc)
 	struct map_desc map;
 	unsigned long addr;
 	void *vectors;
+	unsigned long vstart, vend;
+	int i;
 
 	/*
 	 * Allocate the vector page early.
@@ -1225,8 +1227,11 @@ static void __init devicemaps_init(const struct machine_desc *mdesc)
 
 	early_trap_init(vectors);
 
-	for (addr = VMALLOC_START; addr; addr += PMD_SIZE)
-		pmd_clear(pmd_off_k(addr));
+
+	for_each_potential_vmalloc_area(&vstart, &vend, &i)
+		for (addr = vstart; addr < vend; addr += PMD_SIZE) {
+			pmd_clear(pmd_off_k(addr));
+		}
 
 	/*
 	 * Map the kernel if it is XIP.
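
[Note for readers jumping into the series here: the call sites above only
assume that for_each_potential_vmalloc_area() (and its non-vmalloc
counterpart) walks a table of candidate regions, handing back one
[vstart, vend) range per iteration through the &vstart, &vend, &i
arguments. The sketch below is NOT the implementation from this series;
the region table, the helper next_potential_vmalloc_area() and the
userspace main() are invented purely to show that inferred contract in
compilable form.]

	/*
	 * Hypothetical, self-contained sketch of the iterator contract the
	 * call sites rely on.  The real macro is introduced by an earlier
	 * patch in this series; the region values below are made up.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	struct vm_region {
		unsigned long start;
		unsigned long end;
	};

	/* Made-up regions standing in for the real per-platform table. */
	static const struct vm_region potential_vmalloc_areas[] = {
		{ 0xf0000000UL, 0xff000000UL },
		{ 0xc8000000UL, 0xd0000000UL },
	};

	/* Fill *vstart/*vend with region *i and advance the index. */
	static bool next_potential_vmalloc_area(unsigned long *vstart,
						unsigned long *vend, int *i)
	{
		if (*i >= (int)(sizeof(potential_vmalloc_areas) /
				sizeof(potential_vmalloc_areas[0])))
			return false;
		*vstart = potential_vmalloc_areas[*i].start;
		*vend = potential_vmalloc_areas[*i].end;
		(*i)++;
		return true;
	}

	/* Looping form matching how callers pass &vstart, &vend, &i. */
	#define for_each_potential_vmalloc_area(vstart, vend, i)	\
		for (*(i) = 0; next_potential_vmalloc_area(vstart, vend, i); )

	int main(void)
	{
		unsigned long vstart, vend;
		int i;

		for_each_potential_vmalloc_area(&vstart, &vend, &i)
			printf("region %d: %#lx - %#lx\n", i - 1, vstart, vend);

		return 0;
	}

Expressing the iterator as a macro around an index-carrying helper is what
lets callers such as free_hyp_pgds() or devicemaps_init() nest their own
PGDIR- or PMD-granular loops inside it without any extra state.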