From patchwork Wed Oct 22 22:38:45 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mario Smarduch
X-Patchwork-Id: 5137441
From: Mario Smarduch
To: kvmarm@lists.cs.columbia.edu, christoffer.dall@linaro.org,
	pbonzini@redhat.com, agraf@suse.de, catalin.marinas@arm.com,
	cornelia.huck@de.ibm.com, borntraeger@de.ibm.com,
	james.hogan@imgtec.com, marc.zyngier@arm.com,
	xiaoguangrong@linux.vnet.ibm.com
Subject: [PATCH v12 5/6] arm: KVM: dirty log read write protect support
Date: Wed, 22 Oct 2014 15:38:45 -0700
Message-id: <1414017526-5870-1-git-send-email-m.smarduch@samsung.com>
X-Mailer: git-send-email 1.7.9.5
Cc: peter.maydell@linaro.org, kvm@vger.kernel.org, steve.capper@arm.com,
	kvm-ia64@vger.kernel.org, kvm-ppc@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, Mario Smarduch

This patch adds support for tracking VM dirty pages between dirty log
reads. Pages that have been dirtied since the last log read are write
protected again, in preparation for the next dirty log read. In
addition, the ARMv7 dirty log read function is pushed up to the generic
layer.

Signed-off-by: Mario Smarduch
---
 arch/arm/kvm/Kconfig  |  1 +
 arch/arm/kvm/Makefile |  1 +
 arch/arm/kvm/arm.c    |  2 ++
 arch/arm/kvm/mmu.c    | 22 ++++++++++++++++++++++
 4 files changed, 26 insertions(+)

diff --git a/arch/arm/kvm/Kconfig b/arch/arm/kvm/Kconfig
index a099df4..9a0bd8e 100644
--- a/arch/arm/kvm/Kconfig
+++ b/arch/arm/kvm/Kconfig
@@ -24,6 +24,7 @@ config KVM
 	select KVM_MMIO
 	select KVM_ARM_HOST
 	select HAVE_KVM_ARCH_TLB_FLUSH_ALL
+	select KVM_GENERIC_DIRTYLOG
 	depends on ARM_VIRT_EXT && ARM_LPAE
 	---help---
 	  Support hosting virtualized guest machines. You will also
diff --git a/arch/arm/kvm/Makefile b/arch/arm/kvm/Makefile
index f7057ed..3480897 100644
--- a/arch/arm/kvm/Makefile
+++ b/arch/arm/kvm/Makefile
@@ -23,3 +23,4 @@ obj-y += coproc.o coproc_a15.o coproc_a7.o mmio.o psci.o perf.o
 obj-$(CONFIG_KVM_ARM_VGIC) += $(KVM)/arm/vgic.o
 obj-$(CONFIG_KVM_ARM_VGIC) += $(KVM)/arm/vgic-v2.o
 obj-$(CONFIG_KVM_ARM_TIMER) += $(KVM)/arm/arch_timer.o
+obj-$(CONFIG_KVM_GENERIC_DIRTYLOG) += $(KVM)/dirtylog.o
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index a99e0cd..94bf645 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -737,10 +737,12 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 	}
 }
 
+#ifdef CONFIG_ARM64
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 {
 	return -EINVAL;
 }
+#endif
 
 static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
 					struct kvm_arm_device_addr *dev_addr)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 3b86522..e348386 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -872,6 +872,28 @@ void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
 	spin_unlock(&kvm->mmu_lock);
 	kvm_flush_remote_tlbs(kvm);
 }
+
+/**
+ * kvm_mmu_write_protect_pt_masked() - write protect dirty pages set in mask
+ * @kvm:	The KVM pointer
+ * @slot:	The memory slot associated with mask
+ * @gfn_offset:	The gfn offset in memory slot
+ * @mask:	The mask of dirty pages at offset 'gfn_offset' in this memory
+ *		slot to be write protected
+ *
+ * Walks the bits set in mask and write protects the associated ptes. The
+ * caller must acquire kvm->mmu_lock.
+ */
+void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
+		struct kvm_memory_slot *slot,
+		gfn_t gfn_offset, unsigned long mask)
+{
+	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
+	phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
+	phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
+
+	stage2_wp_range(kvm, start, end);
+}
 #endif
 
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,