From patchwork Thu Jan 15 23:58:53 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mario Smarduch
X-Patchwork-Id: 5643531
From: Mario Smarduch
To: christoffer.dall@linaro.org, marc.zyngier@arm.com, pbonzini@redhat.com,
 james.hogan@imgtec.com, agraf@suse.de, cornelia.huck@de.ibm.com,
 borntraeger@de.ibm.com
Subject: [PATCH v16 02/10] KVM: Add generic support for dirty page logging
Date: Thu, 15 Jan 2015 15:58:53 -0800
Message-id: <1421366341-26012-3-git-send-email-m.smarduch@samsung.com>
X-Mailer: git-send-email 1.7.9.5
In-reply-to: <1421366341-26012-1-git-send-email-m.smarduch@samsung.com>
References: <1421366341-26012-1-git-send-email-m.smarduch@samsung.com>
Cc: peter.maydell@linaro.org, kvm@vger.kernel.org, steve.capper@arm.com,
 kvm-ia64@vger.kernel.org, catalin.marinas@arm.com, kvm-ppc@vger.kernel.org,
 kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
 Mario Smarduch

kvm_get_dirty_log() provides generic handling of the dirty bitmap and is
currently reused by several architectures. Building on that, we introduce
kvm_get_dirty_log_protect(), which additionally write-protects the pages
reported dirty, so that any later write access faults and is logged again
before the next KVM_GET_DIRTY_LOG ioctl call from user space.

Reviewed-by: Christoffer Dall
Signed-off-by: Mario Smarduch
---
 include/linux/kvm_host.h |  9 ++++++
 virt/kvm/Kconfig         |  6 ++++
 virt/kvm/kvm_main.c      | 80 ++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 95 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index e4d8f70..ed29e79 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -602,6 +602,15 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext);
 
 int kvm_get_dirty_log(struct kvm *kvm,
 			struct kvm_dirty_log *log, int *is_dirty);
+
+int kvm_get_dirty_log_protect(struct kvm *kvm,
+			struct kvm_dirty_log *log, bool *is_dirty);
+
+void kvm_arch_mmu_write_protect_pt_masked(struct kvm *kvm,
+					struct kvm_memory_slot *slot,
+					gfn_t gfn_offset,
+					unsigned long mask);
+
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
 				struct kvm_dirty_log *log);
 
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 3796a21..314950c 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -40,3 +40,9 @@ config KVM_VFIO
 
 config HAVE_KVM_ARCH_TLB_FLUSH_ALL
        bool
+
+config HAVE_KVM_ARCH_DIRTY_LOG_PROTECT
+       bool
+
+config KVM_GENERIC_DIRTYLOG_READ_PROTECT
+       bool
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 51e9dfa..55a16b2 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1001,6 +1001,86 @@ out:
 }
 EXPORT_SYMBOL_GPL(kvm_get_dirty_log);
 
+#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
+/**
+ * kvm_get_dirty_log_protect - get a snapshot of dirty pages, and if any pages
+ *	are dirty write protect them for next write.
+ * @kvm:	pointer to kvm instance
+ * @log:	slot id and address to which we copy the log
+ * @is_dirty:	flag set if any page is dirty
+ *
+ * We need to keep it in mind that VCPU threads can write to the bitmap
+ * concurrently. So, to avoid losing track of dirty pages we keep the
+ * following order:
+ *
+ *    1. Take a snapshot of the bit and clear it if needed.
+ *    2. Write protect the corresponding page.
+ *    3. Copy the snapshot to the userspace.
+ *    4. Upon return caller flushes TLB's if needed.
+ *
+ * Between 2 and 4, the guest may write to the page using the remaining TLB
+ * entry.  This is not a problem because the page is reported dirty using
+ * the snapshot taken before and step 4 ensures that writes done after
+ * exiting to userspace will be logged for the next call.
+ *
+ */
+int kvm_get_dirty_log_protect(struct kvm *kvm,
+			struct kvm_dirty_log *log, bool *is_dirty)
+{
+	struct kvm_memory_slot *memslot;
+	int r, i;
+	unsigned long n;
+	unsigned long *dirty_bitmap;
+	unsigned long *dirty_bitmap_buffer;
+
+	r = -EINVAL;
+	if (log->slot >= KVM_USER_MEM_SLOTS)
+		goto out;
+
+	memslot = id_to_memslot(kvm->memslots, log->slot);
+
+	dirty_bitmap = memslot->dirty_bitmap;
+	r = -ENOENT;
+	if (!dirty_bitmap)
+		goto out;
+
+	n = kvm_dirty_bitmap_bytes(memslot);
+
+	dirty_bitmap_buffer = dirty_bitmap + n / sizeof(long);
+	memset(dirty_bitmap_buffer, 0, n);
+
+	spin_lock(&kvm->mmu_lock);
+	*is_dirty = false;
+	for (i = 0; i < n / sizeof(long); i++) {
+		unsigned long mask;
+		gfn_t offset;
+
+		if (!dirty_bitmap[i])
+			continue;
+
+		*is_dirty = true;
+
+		mask = xchg(&dirty_bitmap[i], 0);
+		dirty_bitmap_buffer[i] = mask;
+
+		offset = i * BITS_PER_LONG;
+		kvm_arch_mmu_write_protect_pt_masked(kvm, memslot, offset,
+						     mask);
+	}
+
+	spin_unlock(&kvm->mmu_lock);
+
+	r = -EFAULT;
+	if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
+		goto out;
+
+	r = 0;
+out:
+	return r;
+}
+EXPORT_SYMBOL_GPL(kvm_get_dirty_log_protect);
+#endif
+
 bool kvm_largepages_enabled(void)
 {
 	return largepages_enabled;
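
An architecture opts in to the generic path by selecting
KVM_GENERIC_DIRTYLOG_READ_PROTECT from its own KVM Kconfig entry. A
minimal sketch, not part of this patch (the surrounding option body is
illustrative; later patches in this series do the equivalent for arm):

    config KVM
            bool "Kernel-based Virtual Machine (KVM) support"
            select KVM_GENERIC_DIRTYLOG_READ_PROTECT
            select HAVE_KVM_ARCH_TLB_FLUSH_ALL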
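
The page-table write protection itself is delegated to the architecture
through kvm_arch_mmu_write_protect_pt_masked(). Note that gfn_offset is
relative to the memslot (the generic code passes i * BITS_PER_LONG), so
the hook adds slot->base_gfn itself. A hypothetical per-bit sketch of
such a hook, where stage2_wp_range() is a stand-in for the
architecture's real write-protect primitive and is not defined by this
patch:

    /*
     * Hypothetical sketch -- stage2_wp_range() is a stand-in for the
     * arch's write-protect primitive over [start, end), not an API
     * added by this patch.
     */
    void kvm_arch_mmu_write_protect_pt_masked(struct kvm *kvm,
    					  struct kvm_memory_slot *slot,
    					  gfn_t gfn_offset,
    					  unsigned long mask)
    {
    	while (mask) {
    		/* the lowest set bit in mask selects the next dirty page */
    		gfn_t gfn = slot->base_gfn + gfn_offset + __ffs(mask);

    		/* write-protect exactly that one page */
    		stage2_wp_range(kvm, gfn << PAGE_SHIFT,
    				(gfn + 1) << PAGE_SHIFT);

    		mask &= mask - 1;	/* clear the bit we just handled */
    	}
    }

A real implementation may instead cover the whole span from __ffs(mask)
to __fls(mask) in one range call; the per-bit loop above just makes the
mask semantics explicit.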
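
With the helper in place, an architecture's KVM_GET_DIRTY_LOG handler
becomes a thin wrapper: call kvm_get_dirty_log_protect() under
slots_lock, then perform the TLB flush that step 4 of the comment above
requires. A sketch of such a wrapper (the arm implementation later in
this series has essentially this shape):

    int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
    {
    	bool is_dirty = false;
    	int r;

    	mutex_lock(&kvm->slots_lock);

    	r = kvm_get_dirty_log_protect(kvm, log, &is_dirty);

    	/*
    	 * Stale TLB entries may still permit writes to pages that were
    	 * just write-protected; flush before returning to user space so
    	 * that later writes fault and get logged on the next call.
    	 */
    	if (is_dirty)
    		kvm_flush_remote_tlbs(kvm);

    	mutex_unlock(&kvm->slots_lock);
    	return r;
    }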