From patchwork Thu May 21 04:50:15 2009
Date: Thu, 21 May 2009 01:50:15 -0300
From: Marcelo Tosatti
To: Avi Kivity
Cc: kvm@vger.kernel.org, Christian Borntraeger
Subject: Re: [patch 0/4] move irq protection role to separate lock v2
Message-ID: <20090521045015.GA1104@amt.cnet>
In-Reply-To: <20090520184841.954066003@localhost.localdomain>

On Wed, May 20, 2009 at 03:48:41PM -0300, Marcelo Tosatti wrote:
> Addressing comment and covering irqfd (did not remove the deadlock avoidance
> there since it might be useful later).

I can't see any reason why serializing the vm ioctl is a bad idea.

There is one indication of disagreement here:

http://patchwork.kernel.org/patch/5661/

"The original intent was that kvm_arch_vcpu_create() not "link in" the
vcpu to any registers. That allows most of the vcpu creation to happen
outside a lock."

But I fail to see a case where vcpu creation is a fast path (unless
you're benchmarking cpu hotplug/hotunplug).

Note there is no risk of deadlock if a vcpu thread attempts to execute
vm_ioctl: vm_ioctl_lock is always the first lock grabbed in the
vm_ioctl path and is not used in any other path, so no inconsistent
lock ordering can arise.

The reason for this change is that some sites, such as KVM_CREATE_PIT,
handle concurrency properly (with custom locking that this patch makes
obsolete), but others do not.

---

Index: kvm/include/linux/kvm_host.h
===================================================================
--- kvm.orig/include/linux/kvm_host.h
+++ kvm/include/linux/kvm_host.h
@@ -131,9 +131,9 @@ struct kvm {
 					KVM_PRIVATE_MEM_SLOTS];
 	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
 	struct list_head vm_list;
+	struct mutex vm_ioctl_lock; /* protects concurrent vm fd ioctls */
 	struct mutex lock;
 	/*
 	 * - protects mmio_bus, pio_bus.
-	 * - protects a few concurrent ioctl's (FIXME).
 	 * - protects concurrent create_vcpu, but
 	 *   kvm->vcpus walkers do it locklessly (FIXME).
 	 */
Index: kvm/virt/kvm/kvm_main.c
===================================================================
--- kvm.orig/virt/kvm/kvm_main.c
+++ kvm/virt/kvm/kvm_main.c
@@ -988,6 +988,7 @@ static struct kvm *kvm_create_vm(void)
 	INIT_LIST_HEAD(&kvm->irqfds);
 	mutex_init(&kvm->lock);
 	mutex_init(&kvm->irq_lock);
+	mutex_init(&kvm->vm_ioctl_lock);
 	kvm_io_bus_init(&kvm->mmio_bus);
 	init_rwsem(&kvm->slots_lock);
 	atomic_set(&kvm->users_count, 1);
@@ -2053,6 +2054,9 @@ static long kvm_vm_ioctl(struct file *fi
 	if (kvm->mm != current->mm)
 		return -EIO;
+
+	mutex_lock(&kvm->vm_ioctl_lock);
+
 	switch (ioctl) {
 	case KVM_CREATE_VCPU:
 		r = kvm_vm_ioctl_create_vcpu(kvm, arg);
@@ -2228,6 +2232,7 @@ static long kvm_vm_ioctl(struct file *fi
 		r = kvm_arch_vm_ioctl(filp, ioctl, arg);
 	}
 out:
+	mutex_unlock(&kvm->vm_ioctl_lock);
 	return r;
 }