From patchwork Fri Nov 30 16:29:27 2012
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 1825711
Subject: Re: [PATCH v4 02/14] ARM: Section based HYP idmap
From: Christoffer Dall
To: Will Deacon
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu, Marc Zyngier, Marcelo Tosatti,
 dave.martin@linaro.org, robherring2@gmail.com
Date: Fri, 30 Nov 2012 11:29:27 -0500
In-Reply-To: <20121130105843.GB26305@mudshark.cambridge.arm.com>
References: <20121110154203.2836.46686.stgit@chazy-air>
 <20121110154224.2836.21775.stgit@chazy-air>
 <20121119141631.GU3205@mudshark.cambridge.arm.com>
 <20121130105843.GB26305@mudshark.cambridge.arm.com>
X-Mailing-List: kvm@vger.kernel.org

On Fri, Nov 30, 2012 at 5:58 AM, Will Deacon wrote:
> On Thu, Nov 29, 2012 at 06:59:07PM +0000, Christoffer Dall wrote:
>> On Mon, Nov 19, 2012 at 9:16 AM, Will Deacon wrote:
>> >
>> > I also think it might be cleaner to declare the hyp_pgd next to the
>> > idmap_pgd then add the hyp_idmap_setup code to init_static_idmap, so
>> > that we don't add new entry points to this file. The teardown can all
>> > be moved into the kvm/ code as it doesn't need any of the existing
>> > idmap functions.
>> >
>>
>> hmm, we had some attempts at this earlier, but it wasn't all that
>> nice. Allocating the hyp_pgd inside idmap.c requires some more logic,
>> and the #ifdefs inside init_static_idmap are also not pretty.
>
> [...]
>
>> I see how you would like to clean this up, but it's not really hard to
>> read or understand, imho:
>
> I have to disagree.
> As it stands, idmap.c has *no* entry points for manipulating the page
> tables, which is a good thing because it puts all the mapping code in
> one place and makes it both easy to use and easy to reason about. Your
> patch starts exposing a load of this stuff to the rest of the kernel,
> which is actively going against the design that we currently have.
>

Having not been part of that design discussion, I still don't quite see
the harm. On the contrary, we now get code in KVM that makes
assumptions about how things are mapped in idmap.c when it unmaps them.
It also feels a bit weird that the highly KVM-specific hyp_pgd is now
exported everywhere and initialized in idmap.c, but that's nothing I
can't live with. In any case, you know much better than I do how the
idmap code is meant to be used, so I'll try to follow your thoughts.

>
> Here is a quick, untested patch to fix that (modulo the ARM_VIRT_EXT
> implies ARM_LPAE change I suggested).
>
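Side note on the parenthetical: I'm reading the "ARM_VIRT_EXT implies
ARM_LPAE" change as an extra dependency on the ARM_VIRT_EXT entry in
arch/arm/mm/Kconfig, roughly like the untested sketch below (the
existing prompt and dependencies here are from memory, so the exact
entry may differ). With that in place, the
#if defined(CONFIG_ARM_VIRT_EXT) && defined(CONFIG_ARM_LPAE) guard in
idmap.c can collapse to a plain #ifdef CONFIG_ARM_VIRT_EXT, which is
what the idmap.c hunk below already does; the Kconfig change itself is
not folded into this patch:

config ARM_VIRT_EXT
	bool "Native support for the ARM Virtualization Extensions"
	# untested sketch: require LPAE page tables whenever the
	# virtualization extensions (and hence the HYP idmap) are enabled
	depends on MMU && CPU_V7 && ARM_LPAE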
Thanks, I tweaked it a bit to keep lines under 80 characters and
renamed kvm_mmu_exit to kvm_clear_hyp_idmap, which is what it actually
does:

---

-Christoffer

diff --git a/arch/arm/include/asm/idmap.h b/arch/arm/include/asm/idmap.h
index 36708ba..3e79d33 100644
--- a/arch/arm/include/asm/idmap.h
+++ b/arch/arm/include/asm/idmap.h
@@ -9,11 +9,10 @@
 
 extern pgd_t *idmap_pgd;
 
-void setup_mm_for_reboot(void);
-
 #ifdef CONFIG_ARM_VIRT_EXT
-void hyp_idmap_teardown(pgd_t *hyp_pgd);
-void hyp_idmap_setup(pgd_t *hyp_pgd);
+extern pgd_t *hyp_pgd;
 #endif
 
+void setup_mm_for_reboot(void);
+
 #endif /* __ASM_IDMAP_H */
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 4d55218..c7f939d 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -34,5 +34,5 @@ void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
 unsigned long kvm_mmu_get_httbr(void);
 int kvm_mmu_init(void);
-void kvm_mmu_exit(void);
+void kvm_clear_hyp_idmap(void);
 
 #endif /* __ARM_KVM_MMU_H__ */
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index b93da23..5b86d27 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -1004,7 +1004,7 @@ static int init_hyp_mode(void)
 	/*
 	 * Unmap the identity mapping
 	 */
-	kvm_mmu_exit();
+	kvm_clear_hyp_idmap();
 
 	/*
 	 * Map the Hyp-code called directly from the host
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 59d422f..50ac252 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -33,8 +33,9 @@
 
 #include "trace.h"
 
+extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
+
 static DEFINE_MUTEX(kvm_hyp_pgd_mutex);
-static pgd_t *hyp_pgd;
 
 static void kvm_tlb_flush_vmid(struct kvm *kvm)
 {
@@ -725,15 +726,34 @@ unsigned long kvm_mmu_get_httbr(void)
 
 int kvm_mmu_init(void)
 {
-	hyp_pgd = kzalloc(PTRS_PER_PGD * sizeof(pgd_t), GFP_KERNEL);
-	if (!hyp_pgd)
-		return -ENOMEM;
-
-	hyp_idmap_setup(hyp_pgd);
-	return 0;
+	return hyp_pgd ? 0 : -ENOMEM;
 }
 
-void kvm_mmu_exit(void)
+/**
+ * kvm_clear_hyp_idmap - remove all idmaps from the hyp pgd
+ *
+ * Free the underlying pmds for all pgds in range and clear the pgds (but
+ * don't free them) afterwards.
+ */
+void kvm_clear_hyp_idmap(void)
 {
-	hyp_idmap_teardown(hyp_pgd);
+	unsigned long addr, end;
+	unsigned long next;
+	pgd_t *pgd = hyp_pgd;
+
+	addr = virt_to_phys(__hyp_idmap_text_start);
+	end = virt_to_phys(__hyp_idmap_text_end);
+
+	pgd += pgd_index(addr);
+	do {
+		next = pgd_addr_end(addr, end);
+		if (pgd_none_or_clear_bad(pgd))
+			continue;
+		pud_t *pud = pud_offset(pgd, addr);
+		pmd_t *pmd = pmd_offset(pud, addr);
+
+		pud_clear(pud);
+		clean_pmd_entry(pmd);
+		pmd_free(NULL, (pmd_t *)((unsigned long)pmd & PAGE_MASK));
+	} while (pgd++, addr = next, addr < end);
 }
diff --git a/arch/arm/mm/idmap.c b/arch/arm/mm/idmap.c
index ea7430e..c97612e 100644
--- a/arch/arm/mm/idmap.c
+++ b/arch/arm/mm/idmap.c
@@ -86,65 +86,43 @@ static void identity_mapping_add(pgd_t *pgd, const char *text_start,
 	} while (pgd++, addr = next, addr != end);
 }
 
-extern char __idmap_text_start[], __idmap_text_end[];
+#ifdef CONFIG_ARM_VIRT_EXT
+pgd_t *hyp_pgd;
 
-static int __init init_static_idmap(void)
+extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
+
+static int __init init_static_idmap_hyp(void)
 {
-	idmap_pgd = pgd_alloc(&init_mm);
-	if (!idmap_pgd)
+	hyp_pgd = kzalloc(PTRS_PER_PGD * sizeof(pgd_t), GFP_KERNEL);
+	if (!hyp_pgd)
 		return -ENOMEM;
 
-	identity_mapping_add(idmap_pgd, __idmap_text_start,
-			     __idmap_text_end, 0);
+	identity_mapping_add(hyp_pgd, __hyp_idmap_text_start,
+			     __hyp_idmap_text_end, PMD_SECT_AP1);
 
 	return 0;
 }
-early_initcall(init_static_idmap);
-
-#if defined(CONFIG_ARM_VIRT_EXT) && defined(CONFIG_ARM_LPAE)
-static void hyp_idmap_del_pmd(pgd_t *pgd, unsigned long addr)
+#else
+static int __init init_static_idmap_hyp(void)
 {
-	pud_t *pud;
-	pmd_t *pmd;
-
-	pud = pud_offset(pgd, addr);
-	pmd = pmd_offset(pud, addr);
-	pud_clear(pud);
-	clean_pmd_entry(pmd);
-	pmd_free(NULL, (pmd_t *)((unsigned long)pmd & PAGE_MASK));
+	return 0;
 }
+#endif
 
-extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
+extern char __idmap_text_start[], __idmap_text_end[];
 
-/*
- * This version actually frees the underlying pmds for all pgds in range and
- * clear the pgds themselves afterwards.
- */
-void hyp_idmap_teardown(pgd_t *hyp_pgd)
+static int __init init_static_idmap(void)
 {
-	unsigned long addr, end;
-	unsigned long next;
-	pgd_t *pgd = hyp_pgd;
-
-	addr = virt_to_phys(__hyp_idmap_text_start);
-	end = virt_to_phys(__hyp_idmap_text_end);
+	idmap_pgd = pgd_alloc(&init_mm);
+	if (!idmap_pgd)
+		return -ENOMEM;
 
-	pgd += pgd_index(addr);
-	do {
-		next = pgd_addr_end(addr, end);
-		if (!pgd_none_or_clear_bad(pgd))
-			hyp_idmap_del_pmd(pgd, addr);
-	} while (pgd++, addr = next, addr < end);
-}
-EXPORT_SYMBOL_GPL(hyp_idmap_teardown);
+	identity_mapping_add(idmap_pgd, __idmap_text_start,
+			     __idmap_text_end, 0);
 
-void hyp_idmap_setup(pgd_t *hyp_pgd)
-{
-	identity_mapping_add(hyp_pgd, __hyp_idmap_text_start,
-			     __hyp_idmap_text_end, PMD_SECT_AP1);
+	return init_static_idmap_hyp();
 }
-EXPORT_SYMBOL_GPL(hyp_idmap_setup);
-#endif
+early_initcall(init_static_idmap);
 
 /*
  * In order to soft-boot, we need to switch to a 1:1 mapping for the