From patchwork Sat Nov 10 15:42:17 2012
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 1724081
Subject: [PATCH v4 01/14] ARM: Add page table and page defines needed by KVM
To: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Marcelo Tosatti
From: Christoffer Dall
Date: Sat, 10 Nov 2012 16:42:17 +0100
Message-ID: <20121110154217.2836.41905.stgit@chazy-air>
In-Reply-To: <20121110154203.2836.46686.stgit@chazy-air>
References: <20121110154203.2836.46686.stgit@chazy-air>
User-Agent: StGit/0.15

KVM uses the stage-2 page tables and the Hyp page table format, so we
define the fields and page protection flags needed by KVM.

The nomenclature is this:
 - page_hyp:        PL2 code/data mappings
 - page_hyp_device: PL2 device mappings (vgic access)
 - page_s2:         Stage-2 code/data page mappings
 - page_s2_device:  Stage-2 device mappings (vgic access)

Reviewed-by: Marcelo Tosatti
Signed-off-by: Christoffer Dall
---
 arch/arm/include/asm/pgtable-3level.h |   18 ++++++++++++++++++
 arch/arm/include/asm/pgtable.h        |    7 +++++++
 arch/arm/mm/mmu.c                     |   25 +++++++++++++++++++++++++
 3 files changed, 50 insertions(+)

diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
index b249035..eaba5a4 100644
--- a/arch/arm/include/asm/pgtable-3level.h
+++ b/arch/arm/include/asm/pgtable-3level.h
@@ -102,11 +102,29 @@
  */
 #define L_PGD_SWAPPER		(_AT(pgdval_t, 1) << 55)	/* swapper_pg_dir entry */
 
+/*
+ * 2nd stage PTE definitions for LPAE.
+ */
+#define L_PTE_S2_MT_UNCACHED		(_AT(pteval_t, 0x5) << 2) /* MemAttr[3:0] */
+#define L_PTE_S2_MT_WRITETHROUGH	(_AT(pteval_t, 0xa) << 2) /* MemAttr[3:0] */
+#define L_PTE_S2_MT_WRITEBACK		(_AT(pteval_t, 0xf) << 2) /* MemAttr[3:0] */
+#define L_PTE_S2_RDONLY			(_AT(pteval_t, 1) << 6)   /* HAP[1] */
+#define L_PTE_S2_RDWR			(_AT(pteval_t, 2) << 6)   /* HAP[2:1] */
+
+/*
+ * Hyp-mode PL2 PTE definitions for LPAE.
+ */
+#define L_PTE_HYP		L_PTE_USER
+
 #ifndef __ASSEMBLY__
 
 #define pud_none(pud)		(!pud_val(pud))
 #define pud_bad(pud)		(!(pud_val(pud) & 2))
 #define pud_present(pud)	(pud_val(pud))
+#define pmd_table(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
+						 PMD_TYPE_TABLE)
+#define pmd_sect(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
+						 PMD_TYPE_SECT)
 
 #define pud_clear(pudp)			\
 	do {				\
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index 08c1231..dfb3918 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -70,6 +70,9 @@ extern void __pgd_error(const char *file, int line, pgd_t);
 
 extern pgprot_t		pgprot_user;
 extern pgprot_t		pgprot_kernel;
+extern pgprot_t		pgprot_hyp_device;
+extern pgprot_t		pgprot_s2;
+extern pgprot_t		pgprot_s2_device;
 
 #define _MOD_PROT(p, b)	__pgprot(pgprot_val(p) | (b))
 
@@ -82,6 +85,10 @@ extern pgprot_t		pgprot_kernel;
 #define PAGE_READONLY_EXEC	_MOD_PROT(pgprot_user, L_PTE_USER | L_PTE_RDONLY)
 #define PAGE_KERNEL		_MOD_PROT(pgprot_kernel, L_PTE_XN)
 #define PAGE_KERNEL_EXEC	pgprot_kernel
+#define PAGE_HYP		_MOD_PROT(pgprot_kernel, L_PTE_HYP)
+#define PAGE_HYP_DEVICE		_MOD_PROT(pgprot_hyp_device, L_PTE_HYP)
+#define PAGE_S2			_MOD_PROT(pgprot_s2, L_PTE_S2_RDONLY)
+#define PAGE_S2_DEVICE		_MOD_PROT(pgprot_s2_device, L_PTE_USER | L_PTE_S2_RDONLY)
 
 #define __PAGE_NONE		__pgprot(_L_PTE_DEFAULT | L_PTE_RDONLY | L_PTE_XN)
 #define __PAGE_SHARED		__pgprot(_L_PTE_DEFAULT | L_PTE_USER | L_PTE_XN)
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 941dfb9..087d949 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -57,43 +57,61 @@ static unsigned int cachepolicy __initdata = CPOLICY_WRITEBACK;
 static unsigned int ecc_mask __initdata = 0;
 pgprot_t pgprot_user;
 pgprot_t pgprot_kernel;
+pgprot_t pgprot_hyp_device;
+pgprot_t pgprot_s2;
+pgprot_t pgprot_s2_device;
 
 EXPORT_SYMBOL(pgprot_user);
 EXPORT_SYMBOL(pgprot_kernel);
+EXPORT_SYMBOL(pgprot_hyp_device);
+EXPORT_SYMBOL(pgprot_s2);
+EXPORT_SYMBOL(pgprot_s2_device);
 
 struct cachepolicy {
 	const char	policy[16];
 	unsigned int	cr_mask;
 	pmdval_t	pmd;
 	pteval_t	pte;
+	pteval_t	pte_s2;
 };
 
+#ifdef CONFIG_ARM_LPAE
+#define s2_policy(policy)	policy
+#else
+#define s2_policy(policy)	0
+#endif
+
 static struct cachepolicy cache_policies[] __initdata = {
 	{
 		.policy		= "uncached",
 		.cr_mask	= CR_W|CR_C,
 		.pmd		= PMD_SECT_UNCACHED,
 		.pte		= L_PTE_MT_UNCACHED,
+		.pte_s2		= s2_policy(L_PTE_S2_MT_UNCACHED),
 	}, {
 		.policy		= "buffered",
 		.cr_mask	= CR_C,
 		.pmd		= PMD_SECT_BUFFERED,
 		.pte		= L_PTE_MT_BUFFERABLE,
+		.pte_s2		= s2_policy(L_PTE_S2_MT_UNCACHED),
 	}, {
 		.policy		= "writethrough",
 		.cr_mask	= 0,
 		.pmd		= PMD_SECT_WT,
 		.pte		= L_PTE_MT_WRITETHROUGH,
+		.pte_s2		= s2_policy(L_PTE_S2_MT_WRITETHROUGH),
 	}, {
 		.policy		= "writeback",
 		.cr_mask	= 0,
 		.pmd		= PMD_SECT_WB,
 		.pte		= L_PTE_MT_WRITEBACK,
+		.pte_s2		= s2_policy(L_PTE_S2_MT_WRITEBACK),
 	}, {
 		.policy		= "writealloc",
 		.cr_mask	= 0,
 		.pmd		= PMD_SECT_WBWA,
 		.pte		= L_PTE_MT_WRITEALLOC,
+		.pte_s2		= s2_policy(L_PTE_S2_MT_WRITEBACK),
 	}
 };
 
@@ -310,6 +328,7 @@ static void __init build_mem_type_table(void)
 	struct cachepolicy *cp;
 	unsigned int cr = get_cr();
 	pteval_t user_pgprot, kern_pgprot, vecs_pgprot;
+	pteval_t hyp_device_pgprot, s2_pgprot, s2_device_pgprot;
 	int cpu_arch = cpu_architecture();
 	int i;
 
@@ -421,6 +440,8 @@
 	 */
 	cp = &cache_policies[cachepolicy];
 	vecs_pgprot = kern_pgprot = user_pgprot = cp->pte;
+	s2_pgprot = cp->pte_s2;
+	hyp_device_pgprot = s2_device_pgprot = mem_types[MT_DEVICE].prot_pte;
 
 	/*
 	 * ARMv6 and above have extended page tables.
@@ -444,6 +465,7 @@
 		user_pgprot |= L_PTE_SHARED;
 		kern_pgprot |= L_PTE_SHARED;
 		vecs_pgprot |= L_PTE_SHARED;
+		s2_pgprot |= L_PTE_SHARED;
 		mem_types[MT_DEVICE_WC].prot_sect |= PMD_SECT_S;
 		mem_types[MT_DEVICE_WC].prot_pte |= L_PTE_SHARED;
 		mem_types[MT_DEVICE_CACHED].prot_sect |= PMD_SECT_S;
@@ -498,6 +520,9 @@
 	pgprot_user   = __pgprot(L_PTE_PRESENT | L_PTE_YOUNG | user_pgprot);
 	pgprot_kernel = __pgprot(L_PTE_PRESENT | L_PTE_YOUNG |
				 L_PTE_DIRTY | kern_pgprot);
+	pgprot_s2  = __pgprot(L_PTE_PRESENT | L_PTE_YOUNG | s2_pgprot);
+	pgprot_s2_device  = __pgprot(s2_device_pgprot);
+	pgprot_hyp_device  = __pgprot(hyp_device_pgprot);
 
 	mem_types[MT_LOW_VECTORS].prot_l1 |= ecc_mask;
 	mem_types[MT_HIGH_VECTORS].prot_l1 |= ecc_mask;
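
[Editor's note, not part of the patch] For readers unfamiliar with how the four new protection types are meant to be consumed, the sketch below shows one plausible way PTEs could be built from them once this patch is applied on an LPAE kernel. It is an illustration only: the helper names example_hyp_pte() and example_s2_pte() are made up, and only the PAGE_* and L_PTE_* symbols come from the patch above.

	/*
	 * Illustration only: build PTEs with the new KVM protections.
	 * Assumes CONFIG_ARM_LPAE and this patch applied.
	 */
	#include <linux/types.h>
	#include <asm/pgtable.h>

	/* A normal-memory page mapped into the Hyp (PL2) address space. */
	static pte_t example_hyp_pte(unsigned long pfn)
	{
		/* kernel protections plus the L_PTE_HYP (user/PL2) bit */
		return pfn_pte(pfn, PAGE_HYP);
	}

	/* A guest RAM page mapped via the stage-2 tables. */
	static pte_t example_s2_pte(unsigned long pfn, bool writable)
	{
		pte_t pte = pfn_pte(pfn, PAGE_S2);	/* starts out read-only */

		/* OR in the HAP write permission to make it read/write */
		if (writable)
			pte = __pte(pte_val(pte) | L_PTE_S2_RDWR);
		return pte;
	}

PAGE_HYP_DEVICE and PAGE_S2_DEVICE would be used the same way, but with the device memory attributes carried in pgprot_hyp_device and pgprot_s2_device (e.g. for vgic register mappings).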