From patchwork Mon Jan 28 09:19:45 2013
X-Patchwork-Submitter: Takuya Yoshikawa
X-Patchwork-Id: 2053691
Date: Mon, 28 Jan 2013 18:19:45 +0900
From: Takuya Yoshikawa
To: mtosatti@redhat.com, gleb@redhat.com
Cc: kvm@vger.kernel.org
Subject: [PATCH v2] KVM: set_memory_region: Identify the requested change explicitly
Message-Id: <20130128181945.31f9fe0f.yoshikawa_takuya_b1@lab.ntt.co.jp>

KVM_SET_USER_MEMORY_REGION forces __kvm_set_memory_region() to identify
what kind of change is being requested by checking the arguments.  The
current code does this checking at various points in the code, and each
condition being used there is not easy to understand at first glance.

This patch consolidates these checks and introduces an enum to name the
possible changes to clean up the code.

Although this does not introduce any functional change, there is one
change which optimizes the code a bit: if we have nothing to change, the
new code returns 0 immediately.  Note that the return value for this
case cannot be changed, since QEMU relies on it: we noticed this when we
changed it to -EINVAL and got a section mismatch error at the final
stage of live migration.
Signed-off-by: Takuya Yoshikawa
---
 v2: updated iommu related parts

 virt/kvm/kvm_main.c | 64 +++++++++++++++++++++++++++++++++++----------------
 1 files changed, 44 insertions(+), 20 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 3fec2cd..8ea447b 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -714,6 +714,24 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
 }
 
 /*
+ * KVM_SET_USER_MEMORY_REGION ioctl allows the following operations:
+ * - create a new memory slot
+ * - delete an existing memory slot
+ * - modify an existing memory slot
+ *   -- move it in the guest physical memory space
+ *   -- just change its flags
+ *
+ * Since flags can be changed by some of these operations, the following
+ * differentiation is the best we can do for __kvm_set_memory_region():
+ */
+enum kvm_mr_change {
+	KVM_MR_CREATE,
+	KVM_MR_DELETE,
+	KVM_MR_MOVE,
+	KVM_MR_FLAGS_ONLY,
+};
+
+/*
  * Allocate some memory and give it an address in the guest physical address
  * space.
  *
@@ -732,6 +750,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	struct kvm_memory_slot old, new;
 	struct kvm_memslots *slots = NULL, *old_memslots;
 	bool old_iommu_mapped;
+	enum kvm_mr_change change;
 
 	r = check_memory_region_flags(mem);
 	if (r)
@@ -775,17 +794,30 @@ int __kvm_set_memory_region(struct kvm *kvm,
 
 	old_iommu_mapped = old.npages;
 
-	/*
-	 * Disallow changing a memory slot's size or changing anything about
-	 * zero sized slots that doesn't involve making them non-zero.
-	 */
 	r = -EINVAL;
-	if (npages && old.npages && npages != old.npages)
-		goto out;
-	if (!npages && !old.npages)
+	if (npages) {
+		if (!old.npages)
+			change = KVM_MR_CREATE;
+		else { /* Modify an existing slot. */
+			if ((mem->userspace_addr != old.userspace_addr) ||
+			    (npages != old.npages))
+				goto out;
+
+			if (base_gfn != old.base_gfn)
+				change = KVM_MR_MOVE;
+			else if (new.flags != old.flags)
+				change = KVM_MR_FLAGS_ONLY;
+			else { /* Nothing to change. */
+				r = 0;
+				goto out;
+			}
+		}
+	} else if (old.npages) {
+		change = KVM_MR_DELETE;
+	} else /* Modify a non-existent slot: disallowed. */
 		goto out;
 
-	if ((npages && !old.npages) || (base_gfn != old.base_gfn)) {
+	if ((change == KVM_MR_CREATE) || (change == KVM_MR_MOVE)) {
 		/* Check for overlaps */
 		r = -EEXIST;
 		kvm_for_each_memslot(slot, kvm->memslots) {
@@ -803,20 +835,12 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		new.dirty_bitmap = NULL;
 
 	r = -ENOMEM;
-
-	/*
-	 * Allocate if a slot is being created. If modifying a slot,
-	 * the userspace_addr cannot change.
-	 */
-	if (!old.npages) {
+	if (change == KVM_MR_CREATE) {
 		new.user_alloc = user_alloc;
 		new.userspace_addr = mem->userspace_addr;
 
 		if (kvm_arch_create_memslot(&new, npages))
 			goto out_free;
-	} else if (npages && mem->userspace_addr != old.userspace_addr) {
-		r = -EINVAL;
-		goto out_free;
 	}
 
 	/* Allocate page dirty bitmap if needed */
@@ -825,7 +849,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 			goto out_free;
 	}
 
-	if (!npages || base_gfn != old.base_gfn) {
+	if ((change == KVM_MR_DELETE) || (change == KVM_MR_MOVE)) {
 		r = -ENOMEM;
 		slots = kmemdup(kvm->memslots, sizeof(struct kvm_memslots),
 				GFP_KERNEL);
@@ -876,7 +900,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	 * slots (size changes, userspace addr changes) is disallowed above,
 	 * so any other attribute changes getting here can be skipped.
 	 */
-	if (npages) {
+	if (!(change == KVM_MR_DELETE)) {
 		if (old_iommu_mapped &&
 		    ((new.flags ^ old.flags) & KVM_MEM_READONLY)) {
 			kvm_iommu_unmap_pages(kvm, &old);
@@ -891,7 +915,7 @@
 	}
 
 	/* actual memory is freed via old in kvm_free_physmem_slot below */
-	if (!npages) {
+	if (change == KVM_MR_DELETE) {
 		new.dirty_bitmap = NULL;
 		memset(&new.arch, 0, sizeof(new.arch));
 	}
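
[Editorial note, not part of the submitted patch] For readers who want to follow the new decision tree outside the kernel tree, below is a small standalone userspace sketch that mirrors the classification introduced above. It is an illustration only: "struct slot" is a simplified stand-in for the few kvm_memory_slot fields being compared, and KVM_MR_INVALID / KVM_MR_NOTHING are sketch-only names for what the kernel code expresses as the "goto out" (-EINVAL) path and the early "r = 0" return.

/*
 * Standalone illustration only -- NOT kernel code and not part of the patch.
 * It mirrors the decision tree __kvm_set_memory_region() uses to classify a
 * KVM_SET_USER_MEMORY_REGION request into create/delete/move/flags-only.
 */
#include <stdio.h>

enum kvm_mr_change {
	KVM_MR_CREATE,
	KVM_MR_DELETE,
	KVM_MR_MOVE,
	KVM_MR_FLAGS_ONLY,
	KVM_MR_INVALID,		/* sketch-only: request is rejected (-EINVAL) */
	KVM_MR_NOTHING,		/* sketch-only: nothing to change, return 0   */
};

struct slot {				/* simplified stand-in for kvm_memory_slot */
	unsigned long base_gfn;		/* first guest frame number of the slot   */
	unsigned long npages;		/* 0 means "the slot does not exist"      */
	unsigned long userspace_addr;	/* host virtual address backing the slot  */
	unsigned int  flags;
};

static enum kvm_mr_change classify(const struct slot *old, const struct slot *new)
{
	if (new->npages) {
		if (!old->npages)
			return KVM_MR_CREATE;

		/* Modifying an existing slot: size and HVA must stay fixed. */
		if (new->userspace_addr != old->userspace_addr ||
		    new->npages != old->npages)
			return KVM_MR_INVALID;

		if (new->base_gfn != old->base_gfn)
			return KVM_MR_MOVE;
		if (new->flags != old->flags)
			return KVM_MR_FLAGS_ONLY;
		return KVM_MR_NOTHING;
	}

	if (old->npages)
		return KVM_MR_DELETE;	/* zero size deletes an existing slot */

	return KVM_MR_INVALID;		/* "modifying" a non-existent slot */
}

int main(void)
{
	struct slot none  = { 0 };
	struct slot base  = { .base_gfn = 0x100, .npages = 512,
			      .userspace_addr = 0x40000000UL, .flags = 0 };
	struct slot moved = base, logged = base;

	moved.base_gfn = 0x900;		/* same size, new guest physical address */
	logged.flags   = 1;		/* e.g. KVM_MEM_LOG_DIRTY_PAGES          */

	printf("create    -> %d\n", classify(&none, &base));	/* KVM_MR_CREATE     */
	printf("delete    -> %d\n", classify(&base, &none));	/* KVM_MR_DELETE     */
	printf("move      -> %d\n", classify(&base, &moved));	/* KVM_MR_MOVE       */
	printf("flags     -> %d\n", classify(&base, &logged));	/* KVM_MR_FLAGS_ONLY */
	printf("no change -> %d\n", classify(&base, &base));	/* KVM_MR_NOTHING    */
	return 0;
}

In userspace terms, the four accepted cases correspond to KVM_SET_USER_MEMORY_REGION calls with: a nonzero memory_size for a previously empty slot (create), memory_size == 0 for an existing slot (delete), the same size at a different guest_phys_addr (move), or identical geometry with different flags (flags only).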