From patchwork Tue May 12 21:55:43 2009
X-Patchwork-Submitter: Marcelo Tosatti
X-Patchwork-Id: 23359
Message-Id: <20090512215641.475435331@amt.cnet>
User-Agent: quilt/0.47-1
Date: Tue, 12 May 2009 18:55:43 -0300
From: mtosatti@redhat.com
To: avi@redhat.com
Cc: kvm@vger.kernel.org, Marcelo Tosatti
Subject: [patch 1/3] KVM: MMU: protect kvm_mmu_change_mmu_pages with mmu_lock
References: <20090512215542.687077672@amt.cnet>
Content-Disposition: inline; filename=set-mem-lock
X-Mailing-List: kvm@vger.kernel.org

kvm_handle_hva, called by the MMU notifiers, manipulates MMU data only
under the protection of mmu_lock. Update the callers of
kvm_mmu_change_mmu_pages to also take mmu_lock, thus protecting against
concurrent kvm_handle_hva.
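To make the invariant concrete, the sketch below models the same rule in
plain userspace C with pthreads (illustrative only; pages_lock, nr_pages,
notifier_touch and set_nr_pages are invented names, not KVM symbols): every
path that touches the shared state must take the one lock that guards it,
just as kvm_handle_hva and the kvm_mmu_change_mmu_pages callers must both
run under mmu_lock.

#include <pthread.h>
#include <stdio.h>

/* Userspace analogy of the mmu_lock rule: one lock guards the shared
 * structure, and *every* path that touches it must take that lock.
 * All names here are invented for illustration; they are not KVM
 * symbols. */
static pthread_mutex_t pages_lock = PTHREAD_MUTEX_INITIALIZER;
static int nr_pages = 1024;

/* Analogue of kvm_handle_hva(): this path always ran under the lock. */
static void *notifier_touch(void *arg)
{
	pthread_mutex_lock(&pages_lock);
	nr_pages--;			/* manipulate shared MMU-like state */
	pthread_mutex_unlock(&pages_lock);
	return NULL;
}

/* Analogue of a kvm_mmu_change_mmu_pages() caller: before this patch it
 * touched the shared state with no lock held; the fix is to take the
 * same lock as the notifier path. */
static void set_nr_pages(int n)
{
	pthread_mutex_lock(&pages_lock);	/* the added spin_lock(&kvm->mmu_lock) */
	nr_pages = n;
	pthread_mutex_unlock(&pages_lock);	/* the added spin_unlock() */
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, notifier_touch, NULL);
	set_nr_pages(512);
	pthread_join(t, NULL);
	printf("nr_pages = %d\n", nr_pages);
	return 0;
}

Without the lock in set_nr_pages(), the two writers race exactly the way
the ioctl paths below raced against the MMU notifier before this patch.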
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

---

Index: kvm-pending/arch/x86/kvm/mmu.c
===================================================================
--- kvm-pending.orig/arch/x86/kvm/mmu.c
+++ kvm-pending/arch/x86/kvm/mmu.c
@@ -2723,7 +2723,6 @@ void kvm_mmu_slot_remove_write_access(st
 {
 	struct kvm_mmu_page *sp;
 
-	spin_lock(&kvm->mmu_lock);
 	list_for_each_entry(sp, &kvm->arch.active_mmu_pages, link) {
 		int i;
 		u64 *pt;
@@ -2738,7 +2737,6 @@ void kvm_mmu_slot_remove_write_access(st
 			pt[i] &= ~PT_WRITABLE_MASK;
 	}
 	kvm_flush_remote_tlbs(kvm);
-	spin_unlock(&kvm->mmu_lock);
 }
 
 void kvm_mmu_zap_all(struct kvm *kvm)
Index: kvm-pending/arch/x86/kvm/x86.c
===================================================================
--- kvm-pending.orig/arch/x86/kvm/x86.c
+++ kvm-pending/arch/x86/kvm/x86.c
@@ -1607,10 +1607,12 @@ static int kvm_vm_ioctl_set_nr_mmu_pages
 		return -EINVAL;
 
 	down_write(&kvm->slots_lock);
+	spin_lock(&kvm->mmu_lock);
 
 	kvm_mmu_change_mmu_pages(kvm, kvm_nr_mmu_pages);
 	kvm->arch.n_requested_mmu_pages = kvm_nr_mmu_pages;
 
+	spin_unlock(&kvm->mmu_lock);
 	up_write(&kvm->slots_lock);
 	return 0;
 }
@@ -1786,7 +1788,9 @@ int kvm_vm_ioctl_get_dirty_log(struct kv
 
 	/* If nothing is dirty, don't bother messing with page tables. */
 	if (is_dirty) {
+		spin_lock(&kvm->mmu_lock);
 		kvm_mmu_slot_remove_write_access(kvm, log->slot);
+		spin_unlock(&kvm->mmu_lock);
 		kvm_flush_remote_tlbs(kvm);
 		memslot = &kvm->memslots[log->slot];
 		n = ALIGN(memslot->npages, BITS_PER_LONG) / 8;
@@ -4530,12 +4534,14 @@ int kvm_arch_set_memory_region(struct kv
 		}
 	}
 
+	spin_lock(&kvm->mmu_lock);
 	if (!kvm->arch.n_requested_mmu_pages) {
 		unsigned int nr_mmu_pages = kvm_mmu_calculate_mmu_pages(kvm);
 		kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
 	}
 
 	kvm_mmu_slot_remove_write_access(kvm, mem->slot);
+	spin_unlock(&kvm->mmu_lock);
 	kvm_flush_remote_tlbs(kvm);
 
 	return 0;
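One point worth noting in the x86.c hunks: the ordering is slots_lock
first, mmu_lock second. slots_lock is taken with down_write() and may
sleep, while mmu_lock is a spinlock, so the spinlock must be the
innermost lock. A condensed sketch of kvm_vm_ioctl_set_nr_mmu_pages()
as patched above (the validation check is paraphrased, surrounding code
omitted):

/* Condensed from the patched kvm_vm_ioctl_set_nr_mmu_pages(); not the
 * verbatim function.  slots_lock sleeps, so it is taken outside the
 * mmu_lock spinlock, never the other way around. */
static int kvm_vm_ioctl_set_nr_mmu_pages(struct kvm *kvm,
					 u32 kvm_nr_mmu_pages)
{
	if (kvm_nr_mmu_pages < KVM_MIN_ALLOC_MMU_PAGES)
		return -EINVAL;

	down_write(&kvm->slots_lock);	/* outer: may sleep */
	spin_lock(&kvm->mmu_lock);	/* inner: no sleeping while held */

	kvm_mmu_change_mmu_pages(kvm, kvm_nr_mmu_pages);
	kvm->arch.n_requested_mmu_pages = kvm_nr_mmu_pages;

	spin_unlock(&kvm->mmu_lock);
	up_write(&kvm->slots_lock);
	return 0;
}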