From patchwork Mon Dec 15 07:28:08 2014
X-Patchwork-Submitter: Mario Smarduch <m.smarduch@samsung.com>
X-Patchwork-Id: 5491351
From: Mario Smarduch <m.smarduch@samsung.com>
To: pbonzini@redhat.com, james.hogan@imgtec.com,
 christoffer.dall@linaro.org, agraf@suse.de, marc.zyngier@arm.com,
 cornelia.huck@de.ibm.com, borntraeger@de.ibm.com, catalin.marinas@arm.com
Subject: [PATCH v15 11/11] KVM: arm/arm64: Add support to dissolve huge PUD
Date: Sun, 14 Dec 2014 23:28:08 -0800
Message-id: <1418628488-3696-12-git-send-email-m.smarduch@samsung.com>
X-Mailer: git-send-email 1.7.9.5
In-reply-to: <1418628488-3696-1-git-send-email-m.smarduch@samsung.com>
References: <1418628488-3696-1-git-send-email-m.smarduch@samsung.com>
Cc: peter.maydell@linaro.org, kvm@vger.kernel.org, steve.capper@arm.com,
 kvm-ia64@vger.kernel.org, kvm-ppc@vger.kernel.org,
 kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
 Mario Smarduch <m.smarduch@samsung.com>

This patch adds the same support for huge PUDs that already exists for
huge PMDs. A huge PUD is write-protected during the initial write
protection of a memory region, and the code to dissolve a huge PUD is
invoked from the user_mem_abort() path. At this time the code has not
been tested, but a test similar to the current ARMv8 page-logging test
is in the works: limit kernel memory, map 1 or 2 GB into the guest
address space on a 4K-page/48-bit host, and add host kernel test code
that detects page faults to this region and sidesteps general
processing. Also, as in the PMD case, all pages in the range are marked
dirty when the PUD entry is cleared.

Signed-off-by: Mario Smarduch <m.smarduch@samsung.com>
---
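Note for reviewers (below the fold, not for the commit log): the
dirty-marking loop in stage2_dissolve_pud() visits every 4K page spanned
by the huge PUD, since per-page write tracking was impossible while the
block mapping was live. A minimal, self-contained sketch of that
bookkeeping, assuming a 4K-granule host where one PUD entry maps 1GB
(the constants and the mark_pud_range_dirty() helper below are
illustrative stand-ins, not taken from kernel headers):

#include <stdint.h>

#define PAGE_SHIFT      12                      /* assumed 4K pages */
#define PUD_SHIFT       30                      /* assumed 1GB per PUD */
#define PUD_SIZE        (UINT64_C(1) << PUD_SHIFT)
#define PUD_MASK        (~(PUD_SIZE - 1))

/*
 * Report every guest frame covered by the dissolved block mapping as
 * dirty; 'mark' stands in for mark_page_dirty(kvm, gfn).
 */
static void mark_pud_range_dirty(uint64_t ipa, void (*mark)(uint64_t gfn))
{
        uint64_t gfn = (ipa & PUD_MASK) >> PAGE_SHIFT;
        uint64_t i, npages = PUD_SIZE >> PAGE_SHIFT;    /* 262144 */

        for (i = 0; i < npages; i++)
                mark(gfn + i);
}

For this geometry, npages equals PTRS_PER_PUD * PTRS_PER_PMD (512 * 512),
the bound used by the loop in the patch below.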
 arch/arm/include/asm/kvm_mmu.h         |  8 +++++
 arch/arm/kvm/mmu.c                     | 64 ++++++++++++++++++++++++++++++++--
 arch/arm64/include/asm/kvm_mmu.h       |  9 +++++
 arch/arm64/include/asm/pgtable-hwdef.h |  3 ++
 4 files changed, 81 insertions(+), 3 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index dda0046..703d04d 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -133,6 +133,14 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
         return (pmd_val(*pmd) & L_PMD_S2_RDWR) == L_PMD_S2_RDONLY;
 }
 
+static inline void kvm_set_s2pud_readonly(pud_t *pud)
+{
+}
+
+static inline bool kvm_s2pud_readonly(pud_t *pud)
+{
+        return false;
+}
 
 /* Open coded p*d_addr_end that can deal with 64bit addresses */
 #define kvm_pgd_addr_end(addr, end)                                     \
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 59003df..35840fb 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -109,6 +109,55 @@ void stage2_dissolve_pmd(struct kvm *kvm, phys_addr_t addr, pmd_t *pmd)
         }
 }
 
+/**
+ * stage2_find_pud() - find a PUD entry
+ * @kvm:        pointer to kvm structure.
+ * @addr:       IPA address
+ *
+ * Return address of PUD entry or NULL if not allocated.
+ */
+static pud_t *stage2_find_pud(struct kvm *kvm, phys_addr_t addr)
+{
+        pgd_t *pgd;
+
+        pgd = kvm->arch.pgd + pgd_index(addr);
+        if (pgd_none(*pgd))
+                return NULL;
+
+        return pud_offset(pgd, addr);
+}
+
+/**
+ * stage2_dissolve_pud() - clear and flush huge PUD entry
+ * @kvm:        pointer to kvm structure.
+ * @addr:       IPA
+ *
+ * Function clears a PUD entry, flushes addr 1st and 2nd stage TLBs. Marks all
+ * pages in the range dirty.
+ */
+void stage2_dissolve_pud(struct kvm *kvm, phys_addr_t addr)
+{
+        pud_t *pud;
+        gfn_t gfn;
+        long i;
+
+        pud = stage2_find_pud(kvm, addr);
+        if (pud && !pud_none(*pud) && kvm_pud_huge(*pud)) {
+                pud_clear(pud);
+                kvm_tlb_flush_vmid_ipa(kvm, addr);
+                put_page(virt_to_page(pud));
+#ifdef CONFIG_SMP
+                gfn = (addr & PUD_MASK) >> PAGE_SHIFT;
+                /*
+                 * Mark all pages in PUD range dirty, in case other
+                 * CPUs are writing to it.
+                 */
+                for (i = 0; i < PTRS_PER_PUD * PTRS_PER_PMD; i++)
+                        mark_page_dirty(kvm, gfn + i);
+#endif
+        }
+}
+
 static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
                                   int min, int max)
 {
@@ -761,6 +810,13 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
         unsigned long iomap = flags & KVM_S2PTE_FLAG_IS_IOMAP;
         unsigned long logging_active = flags & KVM_S2PTE_FLAG_LOGGING_ACTIVE;
 
+        /*
+         * While dirty page logging - dissolve huge PUD, then continue on to
+         * allocate page.
+         */
+        if (logging_active)
+                stage2_dissolve_pud(kvm, addr);
+
         /* Create stage-2 page table mapping - Levels 0 and 1 */
         pmd = stage2_get_pmd(kvm, cache, addr);
         if (!pmd) {
@@ -964,9 +1020,11 @@ static void stage2_wp_puds(pgd_t *pgd, phys_addr_t addr, phys_addr_t end)
         do {
                 next = kvm_pud_addr_end(addr, end);
                 if (!pud_none(*pud)) {
-                        /* TODO:PUD not supported, revisit later if supported */
-                        BUG_ON(kvm_pud_huge(*pud));
-                        stage2_wp_pmds(pud, addr, next);
+                        if (kvm_pud_huge(*pud)) {
+                                if (!kvm_s2pud_readonly(pud))
+                                        kvm_set_s2pud_readonly(pud);
+                        } else
+                                stage2_wp_pmds(pud, addr, next);
                 }
         } while (pud++, addr = next, addr != end);
 }
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index f925e40..3b692c5 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -137,6 +137,15 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
         return (pmd_val(*pmd) & PMD_S2_RDWR) == PMD_S2_RDONLY;
 }
 
+static inline void kvm_set_s2pud_readonly(pud_t *pud)
+{
+        pud_val(*pud) = (pud_val(*pud) & ~PUD_S2_RDWR) | PUD_S2_RDONLY;
+}
+
+static inline bool kvm_s2pud_readonly(pud_t *pud)
+{
+        return (pud_val(*pud) & PUD_S2_RDWR) == PUD_S2_RDONLY;
+}
 
 #define kvm_pgd_addr_end(addr, end)     pgd_addr_end(addr, end)
 #define kvm_pud_addr_end(addr, end)     pud_addr_end(addr, end)
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index 5f930cc..1714c84 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -122,6 +122,9 @@
 #define PMD_S2_RDONLY           (_AT(pmdval_t, 1) << 6)   /* HAP[2:1] */
 #define PMD_S2_RDWR             (_AT(pmdval_t, 3) << 6)   /* HAP[2:1] */
 
+#define PUD_S2_RDONLY           (_AT(pudval_t, 1) << 6)   /* HAP[2:1] */
+#define PUD_S2_RDWR             (_AT(pudval_t, 3) << 6)   /* HAP[2:1] */
+
 /*
  * Memory Attribute override for Stage-2 (MemAttr[3:0])
  */
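
For completeness, a self-contained model of the stage-2 HAP[2:1]
access-permission encoding that the new kvm_set_s2pud_readonly() and
kvm_s2pud_readonly() helpers manipulate, mirroring PUD_S2_RDONLY
(1 << 6) and PUD_S2_RDWR (3 << 6) from the hunk above. This is
illustrative user-space C with stand-in names, not kernel code:

#include <stdint.h>
#include <stdio.h>

#define S2_RDONLY       (UINT64_C(1) << 6)      /* HAP[2:1] = 01 */
#define S2_RDWR         (UINT64_C(3) << 6)      /* HAP[2:1] = 11 */

/* Clear both HAP bits, then set the read-only encoding. */
static uint64_t set_readonly(uint64_t desc)
{
        return (desc & ~S2_RDWR) | S2_RDONLY;
}

static int is_readonly(uint64_t desc)
{
        return (desc & S2_RDWR) == S2_RDONLY;
}

int main(void)
{
        uint64_t desc = S2_RDWR;        /* writable block descriptor */

        printf("before: readonly=%d\n", is_readonly(desc));     /* 0 */
        desc = set_readonly(desc);
        printf("after:  readonly=%d\n", is_readonly(desc));     /* 1 */
        return 0;
}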