From patchwork Tue Jun 3 23:19:25 2014
X-Patchwork-Submitter: Mario Smarduch
X-Patchwork-Id: 4291711
From: Mario Smarduch <m.smarduch@samsung.com>
To: kvmarm@lists.cs.columbia.edu, christoffer.dall@linaro.org,
 marc.zyngier@arm.com
Cc: peter.maydell@linaro.org, kvm@vger.kernel.org, steve.capper@arm.com,
 linux-arm-kernel@lists.infradead.org, jays.lee@samsung.com,
 sungjinn.chung@samsung.com, gavin.guo@canonical.com,
 Mario Smarduch <m.smarduch@samsung.com>
Subject: [PATCH v7 2/4] arm: dirty page logging initial mem region write
 protect (w/no huge PUD support)
Date: Tue, 03 Jun 2014 16:19:25 -0700
Message-id: <1401837567-5527-3-git-send-email-m.smarduch@samsung.com>
X-Mailer: git-send-email 1.7.9.5
In-reply-to: <1401837567-5527-1-git-send-email-m.smarduch@samsung.com>
References: <1401837567-5527-1-git-send-email-m.smarduch@samsung.com>
MIME-version: 1.0

Patch adds memslot support for initial write protection and split up of huge
pages. This patch series assumes that huge PUDs will not be used to map VM
memory. This patch depends on the unmap_range() patch, which needs to be
applied first.

Signed-off-by: Mario Smarduch <m.smarduch@samsung.com>
---
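
For reference, a minimal user-space sketch (not part of the patch; the helper
names and the vm_fd/slot/region values are made up) of the ioctl sequence that
exercises this path: re-registering an existing memslot with
KVM_MEM_LOG_DIRTY_PAGES ends up in kvm_arch_commit_memory_region(), which with
this patch write protects the whole region via kvm_mmu_wp_memory_region(); the
per-slot bitmap is then read back with KVM_GET_DIRTY_LOG (the ARM side of that
ioctl is handled by the rest of this series).

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Re-register an already created memslot with dirty logging enabled. */
static int enable_dirty_logging(int vm_fd,
				struct kvm_userspace_memory_region *region)
{
	region->flags |= KVM_MEM_LOG_DIRTY_PAGES;
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, region);
}

/* Fetch the per-slot dirty bitmap; the caller allocates one bit per page. */
static int read_dirty_log(int vm_fd, int slot, void *bitmap)
{
	struct kvm_dirty_log log;

	memset(&log, 0, sizeof(log));
	log.slot = slot;
	log.dirty_bitmap = bitmap;
	return ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
}
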
 arch/arm/include/asm/kvm_host.h       |    2 +
 arch/arm/include/asm/kvm_mmu.h        |   20 ++++++
 arch/arm/include/asm/pgtable-3level.h |    1 +
 arch/arm/kvm/arm.c                    |    6 ++
 arch/arm/kvm/mmu.c                    |  114 +++++++++++++++++++++++++++++++++
 5 files changed, 143 insertions(+)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 193ceaf..59565f5 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -231,4 +231,6 @@ int kvm_perf_teardown(void);
 u64 kvm_arm_timer_get_reg(struct kvm_vcpu *, u64 regid);
 int kvm_arm_timer_set_reg(struct kvm_vcpu *, u64 regid, u64 value);
 
+void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
+
 #endif /* __ARM_KVM_HOST_H__ */
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 5cc0b0f..08ab5e8 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -114,6 +114,26 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
 	pmd_val(*pmd) |= L_PMD_S2_RDWR;
 }
 
+static inline void kvm_set_s2pte_readonly(pte_t *pte)
+{
+	pte_val(*pte) = (pte_val(*pte) & ~L_PTE_S2_RDWR) | L_PTE_S2_RDONLY;
+}
+
+static inline bool kvm_s2pte_readonly(pte_t *pte)
+{
+	return (pte_val(*pte) & L_PTE_S2_RDWR) == L_PTE_S2_RDONLY;
+}
+
+static inline void kvm_set_s2pmd_readonly(pmd_t *pmd)
+{
+	pmd_val(*pmd) = (pmd_val(*pmd) & ~L_PMD_S2_RDWR) | L_PMD_S2_RDONLY;
+}
+
+static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
+{
+	return (pmd_val(*pmd) & L_PMD_S2_RDWR) == L_PMD_S2_RDONLY;
+}
+
 /* Open coded p*d_addr_end that can deal with 64bit addresses */
 #define kvm_pgd_addr_end(addr, end)					\
 ({	u64 __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK;		\
diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
index 85c60ad..d8bb40b 100644
--- a/arch/arm/include/asm/pgtable-3level.h
+++ b/arch/arm/include/asm/pgtable-3level.h
@@ -129,6 +129,7 @@
 #define L_PTE_S2_RDONLY		(_AT(pteval_t, 1) << 6)   /* HAP[1]   */
 #define L_PTE_S2_RDWR		(_AT(pteval_t, 3) << 6)   /* HAP[2:1] */
 
+#define L_PMD_S2_RDONLY		(_AT(pmdval_t, 1) << 6)   /* HAP[1]   */
 #define L_PMD_S2_RDWR		(_AT(pmdval_t, 3) << 6)   /* HAP[2:1] */
 
 /*
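
The kvm_*_s2p*_readonly() helpers added above operate on the stage-2 HAP
permission field: L_PTE_S2_RDWR is HAP[2:1] (descriptor bits 7:6) set to 0b11,
while L_PTE_S2_RDONLY keeps only HAP[1] (bit 6), so masking off the RDWR field
and OR-ing in RDONLY clears the write permission while leaving read access in
place. A standalone sketch of the same bit manipulation, using a plain integer
in place of the kernel's pte_t/pteval_t types (illustration only, not part of
the patch):

#include <stdint.h>
#include <stdio.h>

typedef uint64_t pteval_t;			/* stand-in for the kernel type */

#define L_PTE_S2_RDONLY	((pteval_t)1 << 6)	/* HAP[1]   */
#define L_PTE_S2_RDWR	((pteval_t)3 << 6)	/* HAP[2:1] */

/* Same operation as kvm_set_s2pte_readonly(), on a value instead of a pte_t. */
static pteval_t set_s2pte_readonly(pteval_t pte)
{
	return (pte & ~L_PTE_S2_RDWR) | L_PTE_S2_RDONLY;
}

int main(void)
{
	/* Hypothetical descriptor: some output-address bits plus RDWR. */
	pteval_t pte = 0x80000000ULL | L_PTE_S2_RDWR;

	pte = set_s2pte_readonly(pte);
	/* Prints "readonly: 1" -- HAP[2] cleared, HAP[1] still set. */
	printf("readonly: %d\n", (pte & L_PTE_S2_RDWR) == L_PTE_S2_RDONLY);
	return 0;
}
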
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 3c82b37..dfd63ac 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -242,6 +242,12 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 				   const struct kvm_memory_slot *old,
 				   enum kvm_mr_change change)
 {
+	/*
+	 * At this point the memslot has been committed and there is an
+	 * allocated dirty_bitmap[], so dirty page marking works from now on.
+	 */
+	if ((change != KVM_MR_DELETE) && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
+		kvm_mmu_wp_memory_region(kvm, mem->slot);
 }
 
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
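
The mmu.c hunk below adds the stage-2 walkers (stage2_wp_pte_range(),
stage2_wp_pmd_range(), stage2_wp_pud_range() and kvm_mmu_wp_memory_region()),
which all share the same open-coded do/while idiom built on the
kvm_p*d_addr_end() helpers: each level clamps [addr, end) to the end of the
current entry's coverage, so the level below only ever sees a range contained
in a single entry. A simplified user-space illustration of that idiom, with a
made-up 2 MiB entry size (not part of the patch):

#include <stdint.h>
#include <stdio.h>

#define SKETCH_PMD_SIZE	(1ULL << 21)		/* 2 MiB, stand-in for PMD_SIZE */
#define SKETCH_PMD_MASK	(~(SKETCH_PMD_SIZE - 1))

/* Same shape as the kernel's kvm_pmd_addr_end(): clamp to the entry boundary. */
static uint64_t sketch_pmd_addr_end(uint64_t addr, uint64_t end)
{
	uint64_t boundary = (addr + SKETCH_PMD_SIZE) & SKETCH_PMD_MASK;

	return (boundary - 1 < end - 1) ? boundary : end;
}

int main(void)
{
	uint64_t addr = 0x80100000, end = 0x80500000, next;

	do {
		next = sketch_pmd_addr_end(addr, end);
		/*
		 * Each chunk lies within a single 2 MiB entry:
		 *   0x80100000..0x80200000, 0x80200000..0x80400000,
		 *   0x80400000..0x80500000
		 */
		printf("chunk: 0x%llx..0x%llx\n",
		       (unsigned long long)addr, (unsigned long long)next);
	} while (addr = next, addr != end);
	return 0;
}
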
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index ef29540..e5dff85 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -760,6 +760,120 @@ static bool transparent_hugepage_adjust(pfn_t *pfnp, phys_addr_t *ipap)
 	return false;
 }
 
+
+/**
+ * stage2_wp_pte_range - write protect PTE range
+ * @pmd:	pointer to pmd entry
+ * @addr:	range start address
+ * @end:	range end address
+ */
+static void stage2_wp_pte_range(pmd_t *pmd, phys_addr_t addr, phys_addr_t end)
+{
+	pte_t *pte;
+
+	pte = pte_offset_kernel(pmd, addr);
+	do {
+		if (!pte_none(*pte)) {
+			if (!kvm_s2pte_readonly(pte))
+				kvm_set_s2pte_readonly(pte);
+		}
+	} while (pte++, addr += PAGE_SIZE, addr != end);
+}
+
+/**
+ * stage2_wp_pmd_range - write protect PMD range
+ * @pud:	pointer to pud entry
+ * @addr:	range start address
+ * @end:	range end address
+ */
+static void stage2_wp_pmd_range(pud_t *pud, phys_addr_t addr, phys_addr_t end)
+{
+	pmd_t *pmd;
+	phys_addr_t next;
+
+	pmd = pmd_offset(pud, addr);
+
+	do {
+		next = kvm_pmd_addr_end(addr, end);
+		if (!pmd_none(*pmd)) {
+			if (kvm_pmd_huge(*pmd)) {
+				/*
+				 * Write protect the PMD, give user_mem_abort()
+				 * a choice to clear and fault on demand or
+				 * break up the huge page.
+				 */
+				if (!kvm_s2pmd_readonly(pmd))
+					kvm_set_s2pmd_readonly(pmd);
+			} else
+				stage2_wp_pte_range(pmd, addr, next);
+
+		}
+	} while (pmd++, addr = next, addr != end);
+}
+
+/**
+ * stage2_wp_pud_range - write protect PUD range
+ * @kvm:	pointer to kvm structure
+ * @pgd:	pointer to pgd entry
+ * @addr:	range start address
+ * @end:	range end address
+ *
+ * While walking the PUD range, huge PUD pages are ignored; in the future this
+ * may need to be revisited to determine how to handle huge PUDs when logging
+ * of dirty pages is enabled.
+ */
+static void stage2_wp_pud_range(struct kvm *kvm, pgd_t *pgd,
+				phys_addr_t addr, phys_addr_t end)
+{
+	pud_t *pud;
+	phys_addr_t next;
+
+	pud = pud_offset(pgd, addr);
+	do {
+		/* Check for contention every PUD range and release the CPU */
+		if (need_resched() || spin_needbreak(&kvm->mmu_lock))
+			cond_resched_lock(&kvm->mmu_lock);
+
+		next = kvm_pud_addr_end(addr, end);
+		/* TODO: huge PUDs not supported, revisit later */
+		if (!pud_none(*pud))
+			stage2_wp_pmd_range(pud, addr, next);
+	} while (pud++, addr = next, addr != end);
+}
+
+/**
+ * kvm_mmu_wp_memory_region() - initial write protection of a memory region slot
+ * @kvm:	The KVM pointer
+ * @slot:	The memory slot to write protect
+ *
+ * Called to start logging dirty pages after the KVM_MEM_LOG_DIRTY_PAGES
+ * flag is set on a memory region. After this function returns, all present
+ * PMDs and PTEs in the memory region are write protected and the dirty
+ * page log can be read afterwards. Pages not present are write protected
+ * on future access in user_mem_abort().
+ *
+ * Acquires kvm->mmu_lock. Called with the kvm->slots_lock mutex held,
+ * serializing operations on VM memory regions.
+ */
+void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
+{
+	pgd_t *pgd;
+	struct kvm_memory_slot *memslot = id_to_memslot(kvm->memslots, slot);
+	phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
+	phys_addr_t end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
+	phys_addr_t next;
+
+	spin_lock(&kvm->mmu_lock);
+	pgd = kvm->arch.pgd + pgd_index(addr);
+	do {
+		next = kvm_pgd_addr_end(addr, end);
+		if (pgd_present(*pgd))
+			stage2_wp_pud_range(kvm, pgd, addr, next);
+	} while (pgd++, addr = next, addr != end);
+	kvm_flush_remote_tlbs(kvm);
+	spin_unlock(&kvm->mmu_lock);
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_memory_slot *memslot,
 			  unsigned long fault_status)
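
For a concrete sense of the range walked by kvm_mmu_wp_memory_region() above:
with a hypothetical memslot of base_gfn = 0x80000 and npages = 0x40000 (1 GiB
of guest RAM at IPA 0x80000000 with 4 KiB pages), the function write protects
IPAs 0x80000000..0xc0000000 in a single pass under kvm->mmu_lock, yielding the
lock periodically through cond_resched_lock() when there is contention. A
standalone check of that gfn-to-IPA arithmetic (illustration only):

#include <stdint.h>
#include <stdio.h>

#define SKETCH_PAGE_SHIFT 12	/* 4 KiB pages */

int main(void)
{
	/* Hypothetical memslot: 1 GiB of guest RAM starting at IPA 2 GiB. */
	uint64_t base_gfn = 0x80000, npages = 0x40000;
	uint64_t addr = base_gfn << SKETCH_PAGE_SHIFT;
	uint64_t end = (base_gfn + npages) << SKETCH_PAGE_SHIFT;

	/* Prints: wp range 0x80000000..0xc0000000 (0x40000000 bytes) */
	printf("wp range 0x%llx..0x%llx (0x%llx bytes)\n",
	       (unsigned long long)addr, (unsigned long long)end,
	       (unsigned long long)(end - addr));
	return 0;
}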