| Message ID | 20090521045015.GA1104@amt.cnet (mailing list archive) |
|---|---|
| State | New, archived |
On Thursday, 21 May 2009 at 06:50:15, Marcelo Tosatti wrote:
> But I fail to see the case where vcpu creation is a fast path (unless
> you're benchmarking cpu hotplug/hotunplug).
> [...]
> @@ -2053,6 +2054,9 @@ static long kvm_vm_ioctl(struct file *fi
>
>  	if (kvm->mm != current->mm)
>  		return -EIO;
> +
> +	mutex_lock(&kvm->vm_ioctl_lock);
> +
>  	switch (ioctl) {
>  	case KVM_CREATE_VCPU:
>  		r = kvm_vm_ioctl_create_vcpu(kvm, arg);
> @@ -2228,6 +2232,7 @@ static long kvm_vm_ioctl(struct file *fi
>  		r = kvm_arch_vm_ioctl(filp, ioctl, arg);
>  	}
>  out:
> +	mutex_unlock(&kvm->vm_ioctl_lock);
>  	return r;
>  }

The thing that looks worrisome is that the s390 version of kvm_arch_vm_ioctl
has KVM_S390_INTERRUPT. This allows userspace to inject interrupts - which
would be serialized. The thing is that external interrupts and I/O interrupts
are floating - which means they can arrive on all cpus. This is somewhat of a
fast path.

On the other hand, kvm_s390_inject_vm already takes the kvm->lock to protect
against hotplug. With this patch we might be able to remove the kvm->lock in
kvm_s390_inject_vm - that would reduce the impact.

This needs more thinking on our side.

Christian
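[Editor's note: for readers unfamiliar with the s390 path, this is roughly the pattern Christian describes. A minimal sketch, assuming a hypothetical `queue_floating_interrupt()` helper; it is not the actual kvm_s390_inject_vm() implementation:]

```c
/* Illustrative sketch only -- not the real s390 code. A floating
 * interrupt is queued VM-wide, so every injection serializes on the
 * same per-VM mutex. */
int kvm_s390_inject_vm(struct kvm *kvm, struct kvm_s390_interrupt *s390int)
{
	int rc;

	/* kvm->lock protects against vcpu hotplug, so concurrent
	 * injections from userspace all contend on this mutex. */
	mutex_lock(&kvm->lock);
	rc = queue_floating_interrupt(kvm, s390int); /* hypothetical helper */
	mutex_unlock(&kvm->lock);
	return rc;
}
```

[With the proposed patch, taking kvm->vm_ioctl_lock around the whole ioctl while kvm_s390_inject_vm also takes kvm->lock would make floating-interrupt injection pay for two mutexes; dropping the inner kvm->lock is what would reduce the impact.]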
Christian Bornträger wrote:
>> @@ -2053,6 +2054,9 @@ static long kvm_vm_ioctl(struct file *fi
>>
>>  	if (kvm->mm != current->mm)
>>  		return -EIO;
>> +
>> +	mutex_lock(&kvm->vm_ioctl_lock);
>> +
>>  	switch (ioctl) {
>>  	case KVM_CREATE_VCPU:
>>  		r = kvm_vm_ioctl_create_vcpu(kvm, arg);
>> @@ -2228,6 +2232,7 @@ static long kvm_vm_ioctl(struct file *fi
>>  		r = kvm_arch_vm_ioctl(filp, ioctl, arg);
>>  	}
>>  out:
>> +	mutex_unlock(&kvm->vm_ioctl_lock);
>>  	return r;
>>  }
>>
>
> The thing that looks worrisome is that the s390 version of kvm_arch_vm_ioctl
> has KVM_S390_INTERRUPT. This allows userspace to inject interrupts - which
> would be serialized. The thing is that external interrupts and I/O interrupts
> are floating - which means they can arrive on all cpus. This is somewhat of a
> fast path.
> On the other hand, kvm_s390_inject_vm already takes the kvm->lock to protect
> against hotplug. With this patch we might be able to remove the kvm->lock in
> kvm_s390_inject_vm - that would reduce the impact.
>
> This needs more thinking on our side.
>

x86 actually shares the same problem. KVM_IRQ_LINE interrupts may arrive at
any vcpu. Furthermore, with irqfd, interrupts may be injected from userspace
(the VM process or other processes) or from the kernel (assigned device,
kernel virtio-net device). So we have the same motivation to drop this lock
and replace it with RCU for the fast paths.
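[Editor's note: a rough sketch of the RCU direction Avi mentions. The `irq_routing` field and the `deliver()` helper are hypothetical stand-ins for whatever state the injection fast path reads; they are assumptions for illustration, not code from the patch:]

```c
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Fast path: injection only reads routing state, so an RCU read-side
 * critical section replaces the mutex and never blocks. */
static int inject_irq_fast(struct kvm *kvm, int irq, int level)
{
	struct kvm_irq_routing_table *t;
	int ret;

	rcu_read_lock();
	t = rcu_dereference(kvm->irq_routing);	/* hypothetical field */
	ret = deliver(t, irq, level);		/* hypothetical helper */
	rcu_read_unlock();
	return ret;
}

/* Slow path: updates stay serialized on a mutex, publish the new table
 * with rcu_assign_pointer(), and free the old one only after a grace
 * period, when no reader can still hold a reference to it. */
static void set_irq_routing(struct kvm *kvm, struct kvm_irq_routing_table *new)
{
	struct kvm_irq_routing_table *old;

	mutex_lock(&kvm->lock);
	old = kvm->irq_routing;
	rcu_assign_pointer(kvm->irq_routing, new);
	mutex_unlock(&kvm->lock);
	synchronize_rcu();
	kfree(old);
}
```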
On Thu, May 21, 2009 at 10:26:57AM +0300, Avi Kivity wrote:
> Christian Bornträger wrote:
>>> @@ -2053,6 +2054,9 @@ static long kvm_vm_ioctl(struct file *fi
>>>
>>>  	if (kvm->mm != current->mm)
>>>  		return -EIO;
>>> +
>>> +	mutex_lock(&kvm->vm_ioctl_lock);
>>> +
>>>  	switch (ioctl) {
>>>  	case KVM_CREATE_VCPU:
>>>  		r = kvm_vm_ioctl_create_vcpu(kvm, arg);
>>> @@ -2228,6 +2232,7 @@ static long kvm_vm_ioctl(struct file *fi
>>>  		r = kvm_arch_vm_ioctl(filp, ioctl, arg);
>>>  	}
>>>  out:
>>> +	mutex_unlock(&kvm->vm_ioctl_lock);
>>>  	return r;
>>>  }
>>>
>>
>> The thing that looks worrisome is that the s390 version of
>> kvm_arch_vm_ioctl has KVM_S390_INTERRUPT. This allows userspace to
>> inject interrupts - which would be serialized. The thing is that
>> external interrupts and I/O interrupts are floating - which means they
>> can arrive on all cpus. This is somewhat of a fast path.
>> On the other hand, kvm_s390_inject_vm already takes the kvm->lock to
>> protect against hotplug. With this patch we might be able to remove the
>> kvm->lock in kvm_s390_inject_vm - that would reduce the impact.
>>
>> This needs more thinking on our side.
>>
>
> x86 actually shares the same problem. KVM_IRQ_LINE interrupts may
> arrive at any vcpu. Furthermore, with irqfd, interrupts may be injected
> from userspace (the VM process or other processes) or from the kernel
> (assigned device, kernel virtio-net device). So we have the same
> motivation to drop this lock and replace it with RCU for the fast paths.

OK, I will use the lock to serialize individual ioctl commands that are
not performance sensitive and need serialization.

Any objection to v2 of the irq_lock patch?
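[Editor's note: one reading of "serialize individual ioctl commands" is to take the lock per case instead of around the whole dispatch. A hedged sketch of that shape; the structure below is an assumption for illustration, not Marcelo's actual follow-up:]

```c
static long kvm_vm_ioctl(struct file *filp, unsigned int ioctl,
			 unsigned long arg)
{
	struct kvm *kvm = filp->private_data;
	long r;

	if (kvm->mm != current->mm)
		return -EIO;

	switch (ioctl) {
	case KVM_CREATE_VCPU:
		/* Slow, rarely used: explicit serialization is cheap here. */
		mutex_lock(&kvm->vm_ioctl_lock);
		r = kvm_vm_ioctl_create_vcpu(kvm, arg);
		mutex_unlock(&kvm->vm_ioctl_lock);
		break;
	default:
		/* Performance-sensitive commands (e.g. interrupt injection)
		 * rely on finer-grained protection inside their handlers. */
		r = kvm_arch_vm_ioctl(filp, ioctl, arg);
	}
	return r;
}
```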
Marcelo Tosatti wrote:
>> x86 actually shares the same problem. KVM_IRQ_LINE interrupts may
>> arrive at any vcpu. Furthermore, with irqfd, interrupts may be injected
>> from userspace (the VM process or other processes) or from the kernel
>> (assigned device, kernel virtio-net device). So we have the same
>> motivation to drop this lock and replace it with RCU for the fast paths.
>>
>
> OK, I will use the lock to serialize individual ioctl commands that are
> not performance sensitive and need serialization.
>

Locks should protect data, not operations. Something named ioctl_lock
indicates something is wrong.

> Any objection to v2 of the irq_lock patch?
>

I haven't done a detailed review yet, sorry.
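[Editor's note: to make the "locks protect data, not operations" point concrete, compare the two shapes below. The `dispatch()` helper and the `vcpus_lock` name are hypothetical, used only for illustration:]

```c
/* Anti-pattern: a lock named after an operation. It is unclear which
 * state it guards, and unrelated commands serialize against each other. */
static long locked_dispatch(struct kvm *kvm, unsigned int ioctl,
			    unsigned long arg)
{
	long r;

	mutex_lock(&kvm->vm_ioctl_lock);
	r = dispatch(kvm, ioctl, arg);	/* hypothetical dispatcher */
	mutex_unlock(&kvm->vm_ioctl_lock);
	return r;
}

/* Preferred: a lock named after the data it protects, held only while
 * that data is actually touched. */
static int install_vcpu(struct kvm *kvm, int id, struct kvm_vcpu *vcpu)
{
	int r = 0;

	mutex_lock(&kvm->vcpus_lock);	/* hypothetical lock over kvm->vcpus[] */
	if (kvm->vcpus[id])
		r = -EEXIST;
	else
		kvm->vcpus[id] = vcpu;
	mutex_unlock(&kvm->vcpus_lock);
	return r;
}
```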
Index: kvm/include/linux/kvm_host.h
===================================================================
--- kvm.orig/include/linux/kvm_host.h
+++ kvm/include/linux/kvm_host.h
@@ -131,9 +131,9 @@ struct kvm {
 					  KVM_PRIVATE_MEM_SLOTS];
 	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
 	struct list_head vm_list;
+	struct mutex vm_ioctl_lock; /* protects concurrent vm fd ioctls */
 	struct mutex lock; /*
 			    * - protects mmio_bus, pio_bus.
-			    * - protects a few concurrent ioctl's (FIXME).
 			    * - protects concurrent create_vcpu, but
 			    *   kvm->vcpus walkers do it locklessly (FIXME).
 			    */
Index: kvm/virt/kvm/kvm_main.c
===================================================================
--- kvm.orig/virt/kvm/kvm_main.c
+++ kvm/virt/kvm/kvm_main.c
@@ -988,6 +988,7 @@ static struct kvm *kvm_create_vm(void)
 	INIT_LIST_HEAD(&kvm->irqfds);
 	mutex_init(&kvm->lock);
 	mutex_init(&kvm->irq_lock);
+	mutex_init(&kvm->vm_ioctl_lock);
 	kvm_io_bus_init(&kvm->mmio_bus);
 	init_rwsem(&kvm->slots_lock);
 	atomic_set(&kvm->users_count, 1);
@@ -2053,6 +2054,9 @@ static long kvm_vm_ioctl(struct file *fi
 
 	if (kvm->mm != current->mm)
 		return -EIO;
+
+	mutex_lock(&kvm->vm_ioctl_lock);
+
 	switch (ioctl) {
 	case KVM_CREATE_VCPU:
 		r = kvm_vm_ioctl_create_vcpu(kvm, arg);
@@ -2228,6 +2232,7 @@ static long kvm_vm_ioctl(struct file *fi
 		r = kvm_arch_vm_ioctl(filp, ioctl, arg);
 	}
 out:
+	mutex_unlock(&kvm->vm_ioctl_lock);
 	return r;
 }