From patchwork Wed Oct 22 22:34:07 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mario Smarduch
X-Patchwork-Id: 5137391
From: Mario Smarduch
To: kvmarm@lists.cs.columbia.edu, christoffer.dall@linaro.org,
	pbonzini@redhat.com, agraf@suse.de, catalin.marinas@arm.com,
	cornelia.huck@de.ibm.com, borntraeger@de.ibm.com,
	james.hogan@imgtec.com, marc.zyngier@arm.com,
	xiaoguangrong@linux.vnet.ibm.com
Subject: [PATCH v12 2/6] KVM: Add generic support for dirty page logging
Date: Wed, 22 Oct 2014 15:34:07 -0700
Message-id: <1414017251-5772-3-git-send-email-m.smarduch@samsung.com>
X-Mailer: git-send-email 1.7.9.5
In-reply-to: <1414017251-5772-1-git-send-email-m.smarduch@samsung.com>
References: <1414017251-5772-1-git-send-email-m.smarduch@samsung.com>
Cc: peter.maydell@linaro.org, kvm@vger.kernel.org, steve.capper@arm.com,
	kvm-ia64@vger.kernel.org, kvm-ppc@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, Mario Smarduch

This patch defines KVM_GENERIC_DIRTYLOG and moves the dirty log read
function into its own file, virt/kvm/dirtylog.c. x86 is updated to use the
generic dirty log interface by selecting KVM_GENERIC_DIRTYLOG in its
Kconfig and Makefile. No other architectures are affected; this is a change
from the previous revision of the patch, which modified the non-generic
architectures as well. A subsequent patch switches armv7 over in the same
way, while all other architectures continue to use their own
architecture-defined versions.

Signed-off-by: Mario Smarduch

---
 arch/x86/include/asm/kvm_host.h |   3 --
 arch/x86/kvm/Kconfig            |   1 +
 arch/x86/kvm/Makefile           |   1 +
 arch/x86/kvm/x86.c              |  86 ------------------------------
 include/linux/kvm_host.h        |   4 ++
 virt/kvm/Kconfig                |   3 ++
 virt/kvm/dirtylog.c             | 112 +++++++++++++++++++++++++++++++++++++++
 7 files changed, 121 insertions(+), 89 deletions(-)
 create mode 100644 virt/kvm/dirtylog.c

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7c492ed..934dc24 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -805,9 +805,6 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
 void kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
 void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot);
-void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
-				     struct kvm_memory_slot *slot,
-				     gfn_t gfn_offset, unsigned long mask);
 void kvm_mmu_zap_all(struct kvm *kvm);
 void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm);
 unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index f9d16ff..dca6fc7 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -40,6 +40,7 @@ config KVM
 	select HAVE_KVM_MSI
 	select HAVE_KVM_CPU_RELAX_INTERCEPT
 	select KVM_VFIO
+	select KVM_GENERIC_DIRTYLOG
 	---help---
 	  Support hosting fully virtualized guest machines using hardware
 	  virtualization extensions. You will need a fairly recent
diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 25d22b2..2536195 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -12,6 +12,7 @@ kvm-y			+= $(KVM)/kvm_main.o $(KVM)/ioapic.o \
 				$(KVM)/eventfd.o $(KVM)/irqchip.o $(KVM)/vfio.o
 kvm-$(CONFIG_KVM_DEVICE_ASSIGNMENT)	+= $(KVM)/assigned-dev.o $(KVM)/iommu.o
 kvm-$(CONFIG_KVM_ASYNC_PF)	+= $(KVM)/async_pf.o
+kvm-$(CONFIG_KVM_GENERIC_DIRTYLOG) += $(KVM)/dirtylog.o
 
 kvm-y			+= x86.o mmu.o emulate.o i8259.o irq.o lapic.o \
 			   i8254.o cpuid.o pmu.o
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8f1e22d..1467fa4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3595,92 +3595,6 @@ static int kvm_vm_ioctl_reinject(struct kvm *kvm,
 	return 0;
 }
 
-/**
- * kvm_vm_ioctl_get_dirty_log - get and clear the log of dirty pages in a slot
- * @kvm: kvm instance
- * @log: slot id and address to which we copy the log
- *
- * We need to keep it in mind that VCPU threads can write to the bitmap
- * concurrently. So, to avoid losing data, we keep the following order for
- * each bit:
- *
- *   1. Take a snapshot of the bit and clear it if needed.
- *   2. Write protect the corresponding page.
- *   3. Flush TLB's if needed.
- *   4. Copy the snapshot to the userspace.
- *
- * Between 2 and 3, the guest may write to the page using the remaining TLB
- * entry. This is not a problem because the page will be reported dirty at
- * step 4 using the snapshot taken before and step 3 ensures that successive
- * writes will be logged for the next call.
- */
-int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
-{
-	int r;
-	struct kvm_memory_slot *memslot;
-	unsigned long n, i;
-	unsigned long *dirty_bitmap;
-	unsigned long *dirty_bitmap_buffer;
-	bool is_dirty = false;
-
-	mutex_lock(&kvm->slots_lock);
-
-	r = -EINVAL;
-	if (log->slot >= KVM_USER_MEM_SLOTS)
-		goto out;
-
-	memslot = id_to_memslot(kvm->memslots, log->slot);
-
-	dirty_bitmap = memslot->dirty_bitmap;
-	r = -ENOENT;
-	if (!dirty_bitmap)
-		goto out;
-
-	n = kvm_dirty_bitmap_bytes(memslot);
-
-	dirty_bitmap_buffer = dirty_bitmap + n / sizeof(long);
-	memset(dirty_bitmap_buffer, 0, n);
-
-	spin_lock(&kvm->mmu_lock);
-
-	for (i = 0; i < n / sizeof(long); i++) {
-		unsigned long mask;
-		gfn_t offset;
-
-		if (!dirty_bitmap[i])
-			continue;
-
-		is_dirty = true;
-
-		mask = xchg(&dirty_bitmap[i], 0);
-		dirty_bitmap_buffer[i] = mask;
-
-		offset = i * BITS_PER_LONG;
-		kvm_mmu_write_protect_pt_masked(kvm, memslot, offset, mask);
-	}
-
-	spin_unlock(&kvm->mmu_lock);
-
-	/* See the comments in kvm_mmu_slot_remove_write_access(). */
-	lockdep_assert_held(&kvm->slots_lock);
-
-	/*
-	 * All the TLBs can be flushed out of mmu lock, see the comments in
-	 * kvm_mmu_slot_remove_write_access().
-	 */
-	if (is_dirty)
-		kvm_flush_remote_tlbs(kvm);
-
-	r = -EFAULT;
-	if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
-		goto out;
-
-	r = 0;
-out:
-	mutex_unlock(&kvm->slots_lock);
-	return r;
-}
-
 int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_event,
 			bool line_status)
 {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index a4c33b3..05ebcbe 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -592,6 +592,10 @@ int kvm_get_dirty_log(struct kvm *kvm,
 		struct kvm_dirty_log *log, int *is_dirty);
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
 				struct kvm_dirty_log *log);
+void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
+				struct kvm_memory_slot *slot,
+				gfn_t gfn_offset,
+				unsigned long mask);
 int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
 			bool line_status);
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 3796a21..368fc4a 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -40,3 +40,6 @@ config KVM_VFIO
 
 config HAVE_KVM_ARCH_TLB_FLUSH_ALL
        bool
+
+config KVM_GENERIC_DIRTYLOG
+       bool
diff --git a/virt/kvm/dirtylog.c b/virt/kvm/dirtylog.c
new file mode 100644
index 0000000..67ffffa
--- /dev/null
+++ b/virt/kvm/dirtylog.c
@@ -0,0 +1,112 @@
+/*
+ * kvm generic dirty logging support, used by architectures that share
+ * a common dirty page logging implementation.
+ *
+ * Copyright (C) 2006 Qumranet, Inc.
+ * Copyright 2010 Red Hat, Inc. and/or its affiliates.
+ *
+ * Authors:
+ *   Avi Kivity
+ *   Yaniv Kamay
+ *
+ * This file is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kvm_host.h>
+
+#include <linux/kvm.h>
+#include <linux/mm.h>
+#include <linux/uaccess.h>
+
+/**
+ * kvm_vm_ioctl_get_dirty_log - get and clear the log of dirty pages in a slot
+ * @kvm: kvm instance
+ * @log: slot id and address to which we copy the log
+ *
+ * We need to keep it in mind that VCPU threads can write to the bitmap
+ * concurrently. So, to avoid losing data, we keep the following order for
+ * each bit:
+ *
+ *   1. Take a snapshot of the bit and clear it if needed.
+ *   2. Write protect the corresponding page.
+ *   3. Flush TLB's if needed.
+ *   4. Copy the snapshot to the userspace.
+ *
+ * Between 2 and 3, the guest may write to the page using the remaining TLB
+ * entry. This is not a problem because the page will be reported dirty at
+ * step 4 using the snapshot taken before and step 3 ensures that successive
+ * writes will be logged for the next call.
+ */
+int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
+{
+	int r;
+	struct kvm_memory_slot *memslot;
+	unsigned long n, i;
+	unsigned long *dirty_bitmap;
+	unsigned long *dirty_bitmap_buffer;
+	bool is_dirty = false;
+
+	mutex_lock(&kvm->slots_lock);
+
+	r = -EINVAL;
+	if (log->slot >= KVM_USER_MEM_SLOTS)
+		goto out;
+
+	memslot = id_to_memslot(kvm->memslots, log->slot);
+
+	dirty_bitmap = memslot->dirty_bitmap;
+	r = -ENOENT;
+	if (!dirty_bitmap)
+		goto out;
+
+	n = kvm_dirty_bitmap_bytes(memslot);
+
+	dirty_bitmap_buffer = dirty_bitmap + n / sizeof(long);
+	memset(dirty_bitmap_buffer, 0, n);
+
+	spin_lock(&kvm->mmu_lock);
+
+	for (i = 0; i < n / sizeof(long); i++) {
+		unsigned long mask;
+		gfn_t offset;
+
+		if (!dirty_bitmap[i])
+			continue;
+
+		is_dirty = true;
+
+		mask = xchg(&dirty_bitmap[i], 0);
+		dirty_bitmap_buffer[i] = mask;
+
+		offset = i * BITS_PER_LONG;
+		kvm_mmu_write_protect_pt_masked(kvm, memslot, offset, mask);
+	}
+
+	spin_unlock(&kvm->mmu_lock);
+
+	/* See the comments in kvm_mmu_slot_remove_write_access(). */
+	lockdep_assert_held(&kvm->slots_lock);
+
+	/*
+	 * All the TLBs can be flushed out of mmu lock, see the comments in
+	 * kvm_mmu_slot_remove_write_access().
+	 */
+	if (is_dirty)
+		kvm_flush_remote_tlbs(kvm);
+
+	r = -EFAULT;
+	if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
+		goto out;
+
+	r = 0;
+out:
+	mutex_unlock(&kvm->slots_lock);
+	return r;
+}
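
As the commit message notes, armv7 is switched over in a subsequent patch of
this series. For reference, the opt-in any architecture needs mirrors the two
x86 hunks above: select the Kconfig symbol and link the shared object. A
hypothetical fragment for some other architecture's KVM Kconfig and Makefile
(illustrative only, not taken from the actual arm patch):

    # arch/<arch>/kvm/Kconfig: pull in the generic implementation
    config KVM
    	bool "Kernel-based Virtual Machine (KVM) support"
    	...
    	select KVM_GENERIC_DIRTYLOG

    # arch/<arch>/kvm/Makefile: build the shared dirty log code
    KVM := ../../../virt/kvm
    kvm-$(CONFIG_KVM_GENERIC_DIRTYLOG) += $(KVM)/dirtylog.o

The architecture then only has to provide kvm_mmu_write_protect_pt_masked(),
which the generic kvm_vm_ioctl_get_dirty_log() calls through the declaration
added to include/linux/kvm_host.h.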
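
Since kvm_vm_ioctl_get_dirty_log() backs the KVM_GET_DIRTY_LOG vm ioctl, the
get-and-clear semantics described in the comment block are exactly what
userspace observes: each call returns the pages dirtied since the previous
call. A minimal userspace sketch of a VMM polling one memslot (the slot id,
region size, and page size are made-up example values, not part of this
patch):

    /* Example only: poll dirty pages for one memslot. vm_fd is a KVM VM
     * file descriptor; EXAMPLE_* values are hypothetical.
     */
    #include <linux/kvm.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>

    #define EXAMPLE_SLOT      0               /* hypothetical memslot id */
    #define EXAMPLE_MEM_SIZE  (128UL << 20)   /* hypothetical 128 MB region */
    #define EXAMPLE_PAGE_SIZE 4096UL

    int poll_dirty_log(int vm_fd)
    {
    	unsigned long npages = EXAMPLE_MEM_SIZE / EXAMPLE_PAGE_SIZE;
    	size_t bytes = ((npages + 63) / 64) * 8;  /* long-aligned bitmap */
    	uint64_t *bitmap = calloc(1, bytes);
    	struct kvm_dirty_log log = {
    		.slot = EXAMPLE_SLOT,
    		.dirty_bitmap = bitmap,
    	};
    	unsigned long i;

    	if (!bitmap)
    		return -1;

    	/* The kernel snapshots and clears the slot's dirty bitmap,
    	 * write-protects the dirtied pages, flushes TLBs, then copies
    	 * the snapshot out -- steps 1-4 of the comment above.
    	 */
    	if (ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log) < 0) {
    		perror("KVM_GET_DIRTY_LOG");
    		free(bitmap);
    		return -1;
    	}

    	/* Each set bit marks a guest page dirtied since the last call. */
    	for (i = 0; i < npages; i++)
    		if (bitmap[i / 64] & (1ULL << (i % 64)))
    			printf("page %lu dirty\n", i);

    	free(bitmap);
    	return 0;
    }

A live-migration loop would call this repeatedly, resending the pages whose
bits are set until the remaining dirty set is small enough to stop the guest.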