From patchwork Tue Sep 23 00:54:49 2014
X-Patchwork-Submitter: Mario Smarduch
X-Patchwork-Id: 4951641
From: Mario Smarduch
To: kvmarm@lists.cs.columbia.edu, christoffer.dall@linaro.org,
	marc.zyngier@arm.com, pbonzini@redhat.com, gleb@kernel.org,
	agraf@suse.de, borntraeger@de.ibm.com, cornelia.huck@de.ibm.com,
	xiaoguangrong@linux.vnet.ibm.com, ralf@linux-mips.org,
	catali.marinas@arm.com
Subject: [PATCH v11 5/6] arm: KVM: dirty log read write protect support
Date: Mon, 22 Sep 2014 17:54:49 -0700
Message-id: <1411433690-8104-6-git-send-email-m.smarduch@samsung.com>
X-Mailer: git-send-email 1.7.9.5
In-reply-to: <1411433690-8104-1-git-send-email-m.smarduch@samsung.com>
References: <1411433690-8104-1-git-send-email-m.smarduch@samsung.com>
Cc: peter.maydell@linaro.org, Mario Smarduch,
	linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
	steve.capper@arm.com

This patch adds support for tracking VM dirty pages between dirty log
reads. Pages that have been dirtied since the last log read are write
protected again, in preparation for the next dirty log read. In addition,
the ARMv7 dirty log read function is pushed up to the generic layer.
Signed-off-by: Mario Smarduch
---
 arch/arm/kvm/Kconfig |    1 -
 arch/arm/kvm/arm.c   |    2 ++
 arch/arm/kvm/mmu.c   |   22 ++++++++++++++++++++++
 3 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kvm/Kconfig b/arch/arm/kvm/Kconfig
index eba8b00..dddbb3d 100644
--- a/arch/arm/kvm/Kconfig
+++ b/arch/arm/kvm/Kconfig
@@ -23,7 +23,6 @@ config KVM
 	select HAVE_KVM_CPU_RELAX_INTERCEPT
 	select KVM_MMIO
 	select KVM_ARM_HOST
-	select HAVE_KVM_ARCH_DIRTY_LOG
 	select HAVE_KVM_ARCH_TLB_FLUSH_ALL
 	depends on ARM_VIRT_EXT && ARM_LPAE && !CPU_BIG_ENDIAN
 	---help---
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index e1be6c7..0546fa3 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -783,10 +783,12 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 	}
 }
 
+#ifdef CONFIG_ARM64
 int kvm_arch_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 {
 	return -EINVAL;
 }
+#endif
 
 static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
 					struct kvm_arm_device_addr *dev_addr)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index ba00899..5f52c8a 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -873,6 +873,28 @@ void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
 	spin_unlock(&kvm->mmu_lock);
 	kvm_flush_remote_tlbs(kvm);
 }
+
+/**
+ * kvm_mmu_write_protect_pt_masked() - write protect dirty pages set in mask
+ * @kvm:	The KVM pointer
+ * @slot:	The memory slot associated with mask
+ * @gfn_offset:	The gfn offset in memory slot
+ * @mask:	The mask of dirty pages at offset 'gfn_offset' in this memory
+ *		slot to be write protected
+ *
+ * Walks the bits set in mask and write protects the associated PTEs.
+ * The caller must hold kvm->mmu_lock.
+ */
+void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
+		struct kvm_memory_slot *slot,
+		gfn_t gfn_offset, unsigned long mask)
+{
+	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
+	phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
+	phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
+
+	stage2_wp_range(kvm, start, end);
+}
 #endif
 
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,