From patchwork Wed Jan 28 02:54:26 2015
X-Patchwork-Submitter: Kai Huang
X-Patchwork-Id: 5724221
From: Kai Huang
To: pbonzini@redhat.com, gleb@kernel.org, linux@arm.linux.org.uk, kvm@vger.kernel.org
Cc: Kai Huang
Subject: [PATCH 4/6] KVM: x86: Change parameter of kvm_mmu_slot_remove_write_access
Date: Wed, 28 Jan 2015 10:54:26 +0800
Message-Id: <1422413668-3509-5-git-send-email-kai.huang@linux.intel.com>
In-Reply-To: <1422413668-3509-1-git-send-email-kai.huang@linux.intel.com>
References: <1422413668-3509-1-git-send-email-kai.huang@linux.intel.com>
X-Mailing-List: kvm@vger.kernel.org

This patch changes the second parameter of kvm_mmu_slot_remove_write_access
from 'slot id' to 'struct kvm_memory_slot *', to align with the kvm_x86_ops
dirty logging hooks that will be introduced in a later patch.

A better approach would be to change the second parameter of
kvm_arch_commit_memory_region from 'struct kvm_userspace_memory_region *' to
'struct kvm_memory_slot *new', but that would also require changes to the
other non-x86 architectures, so avoid it for now.

Signed-off-by: Kai Huang
Reviewed-by: Radim Krčmář
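For reference, a minimal sketch of how a call site changes with this patch;
the names below simply mirror the kvm_arch_commit_memory_region hunk in the
diff and are not an additional call site:

	/* Before: callers pass a slot id and the callee looks up the memslot */
	kvm_mmu_slot_remove_write_access(kvm, mem->slot);

	/* After: callers resolve the memslot themselves and pass the pointer */
	struct kvm_memory_slot *new = id_to_memslot(kvm->memslots, mem->slot);
	kvm_mmu_slot_remove_write_access(kvm, new);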
---
 arch/x86/include/asm/kvm_host.h |  3 ++-
 arch/x86/kvm/mmu.c              |  5 ++---
 arch/x86/kvm/x86.c              | 10 +++++++---
 3 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4f6369b..67a98d7 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -834,7 +834,8 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
 		u64 dirty_mask, u64 nx_mask, u64 x_mask);
 
 void kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
-void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot);
+void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
+				      struct kvm_memory_slot *memslot);
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 				   struct kvm_memory_slot *memslot);
 void kvm_mmu_slot_largepage_remove_write_access(struct kvm *kvm,
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index fb35535..6c24af3 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4410,14 +4410,13 @@ void kvm_mmu_setup(struct kvm_vcpu *vcpu)
 	init_kvm_mmu(vcpu);
 }
 
-void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
+void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
+				      struct kvm_memory_slot *memslot)
 {
-	struct kvm_memory_slot *memslot;
 	gfn_t last_gfn;
 	int i;
 	bool flush = false;
 
-	memslot = id_to_memslot(kvm->memslots, slot);
 	last_gfn = memslot->base_gfn + memslot->npages - 1;
 
 	spin_lock(&kvm->mmu_lock);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1e10e3f..3a7fcff 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7538,7 +7538,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 				const struct kvm_memory_slot *old,
 				enum kvm_mr_change change)
 {
-
+	struct kvm_memory_slot *new;
 	int nr_mmu_pages = 0;
 
 	if ((mem->slot >= KVM_USER_MEM_SLOTS) && (change == KVM_MR_DELETE)) {
@@ -7557,6 +7557,10 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 
 	if (nr_mmu_pages)
 		kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
+
+	/* It's OK to get 'new' slot here as it has already been installed */
+	new = id_to_memslot(kvm->memslots, mem->slot);
+
 	/*
 	 * Write protect all pages for dirty logging.
 	 *
@@ -7566,8 +7570,8 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	 *
 	 * See the comments in fast_page_fault().
 	 */
-	if ((change != KVM_MR_DELETE) && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
-		kvm_mmu_slot_remove_write_access(kvm, mem->slot);
+	if ((change != KVM_MR_DELETE) && (new->flags & KVM_MEM_LOG_DIRTY_PAGES))
+		kvm_mmu_slot_remove_write_access(kvm, new);
 }
 
 void kvm_arch_flush_shadow_all(struct kvm *kvm)