From patchwork Fri Nov 7 00:40:47 2014
X-Patchwork-Submitter: Mario Smarduch
X-Patchwork-Id: 5249041
From: Mario Smarduch
To: pbonzini@redhat.com, james.hogan@imgtec.com, christoffer.dall@linaro.org,
 agraf@suse.de, marc.zyngier@arm.com, cornelia.huck@de.ibm.com,
 borntraeger@de.ibm.com, catalin.marinas@arm.com
Subject: [PATCH v13 6/7] arm: KVM: dirty log read write protect support
Date: Thu, 06 Nov 2014 16:40:47 -0800
Message-id: <1415320848-13813-7-git-send-email-m.smarduch@samsung.com>
In-reply-to: <1415320848-13813-1-git-send-email-m.smarduch@samsung.com>
References: <1415320848-13813-1-git-send-email-m.smarduch@samsung.com>
Cc: peter.maydell@linaro.org, kvm@vger.kernel.org, steve.capper@arm.com,
 kvm-ia64@vger.kernel.org, kvm-ppc@vger.kernel.org,
 kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
 Mario Smarduch

Add support to track dirty pages between user space KVM_GET_DIRTY_LOG
ioctl calls. We call kvm_get_dirty_log_protect() function to do most
of the work.

Signed-off-by: Mario Smarduch
Reviewed-by: Marc Zyngier
---
 arch/arm/kvm/arm.c  | 37 +++++++++++++++++++++++++++++++++++++
 arch/arm/kvm/mmu.c  | 22 ++++++++++++++++++++++
 virt/kvm/kvm_main.c |  3 +--
 3 files changed, 60 insertions(+), 2 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index a99e0cd..212d835 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -737,9 +737,46 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 	}
 }
 
+/**
+ * kvm_vm_ioctl_get_dirty_log - get and clear the log of dirty pages in a slot
+ * @kvm:	kvm instance
+ * @log:	slot id and address to which we copy the log
+ *
+ * We need to keep it in mind that VCPU threads can write to the bitmap
+ * concurrently. So, to avoid losing data, we keep the following order for
+ * each bit:
+ *
+ *   1. Take a snapshot of the bit and clear it if needed.
+ *   2. Write protect the corresponding page.
+ *   3. Copy the snapshot to the userspace.
+ *   4. Flush TLB's if needed.
+ *
+ * Steps 1,2,3 are handled by kvm_get_dirty_log_protect().
+ * Between 2 and 4, the guest may write to the page using the remaining TLB
+ * entry. This is not a problem because the page is reported dirty using
+ * the snapshot taken before and step 4 ensures that writes done after
+ * exiting to userspace will be logged for the next call.
+ */
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 {
+#ifdef CONFIG_ARM
+	int r;
+	bool is_dirty = false;
+
+	mutex_lock(&kvm->slots_lock);
+
+	r = kvm_get_dirty_log_protect(kvm, log, &is_dirty);
+	if (r)
+		goto out;
+
+	if (is_dirty)
+		kvm_flush_remote_tlbs(kvm);
+out:
+	mutex_unlock(&kvm->slots_lock);
+	return r;
+#else /* ARM64 */
 	return -EINVAL;
+#endif
 }
 
 static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 3b86522..2f5131e 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -872,6 +872,28 @@ void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
 	spin_unlock(&kvm->mmu_lock);
 	kvm_flush_remote_tlbs(kvm);
 }
+
+/**
+ * kvm_arch_mmu_write_protect_pt_masked() - write protect dirty pages
+ * @kvm:	The KVM pointer
+ * @slot:	The memory slot associated with mask
+ * @gfn_offset:	The gfn offset in memory slot
+ * @mask:	The mask of dirty pages at offset 'gfn_offset' in this memory
+ *		slot to be write protected
+ *
+ * Walks bits set in mask write protects the associated pte's. Caller must
+ * acquire kvm_mmu_lock.
+ */
+void kvm_arch_mmu_write_protect_pt_masked(struct kvm *kvm,
+		struct kvm_memory_slot *slot,
+		gfn_t gfn_offset, unsigned long mask)
+{
+	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
+	phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
+	phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
+
+	stage2_wp_range(kvm, start, end);
+}
 #endif
 
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index f017760..c80dd2f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -982,8 +982,7 @@ out:
 EXPORT_SYMBOL_GPL(kvm_get_dirty_log);
 
 #if defined(CONFIG_S390) || defined(CONFIG_PPC) || defined(CONFIG_MIPS) || \
-	defined(CONFIG_IA64) || defined(CONFIG_X86) || defined(CONFIG_ARM) || \
-	defined(CONFIG_ARM64)
+	defined(CONFIG_IA64) || defined(CONFIG_X86) || defined(CONFIG_ARM64)
 /*
  * For architectures that don't use kvm_get_dirty_log_protect() for dirty page
  * logging, calling this function is illegal. Otherwise the function is defined