[v2,2/2] vhost: double check used memslots number

Message ID 1513327555-17520-3-git-send-email-jianjay.zhou@huawei.com (mailing list archive)
State New, archived

Commit Message

Zhoujian (jay) Dec. 15, 2017, 8:45 a.m. UTC
If the VM already has N (N > 8) available memory slots for vhost-user,
the VM will crash in vhost_user_set_mem_table if we try to
hotplug the first vhost-user NIC.
This patch checks whether the memslots number is exceeded after updating
vhost_user_used_memslots.

Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
---
 hw/virtio/vhost.c | 27 +++++++++++++++++++++++----
 1 file changed, 23 insertions(+), 4 deletions(-)

Comments

Igor Mammedov Dec. 22, 2017, 6:48 p.m. UTC | #1
On Fri, 15 Dec 2017 16:45:55 +0800
Jay Zhou <jianjay.zhou@huawei.com> wrote:

> If the VM already has N (N > 8) available memory slots for vhost-user,
> the VM will crash in vhost_user_set_mem_table if we try to
> hotplug the first vhost-user NIC.
> This patch checks whether the memslots number is exceeded after updating
> vhost_user_used_memslots.
Can't understand commit message, pls rephrase (what is being fixed, and how it's fixed)
also include reproducing steps for crash and maybe describe call flow/backtrace
that triggers crash.

PS:
I wasn't able to reproduce crash

> 
> Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
> ---
>  hw/virtio/vhost.c | 27 +++++++++++++++++++++++----
>  1 file changed, 23 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> index 59a32e9..e45f5e2 100644
> --- a/hw/virtio/vhost.c
> +++ b/hw/virtio/vhost.c
> @@ -1234,6 +1234,18 @@ static void vhost_virtqueue_cleanup(struct vhost_virtqueue *vq)
>      event_notifier_cleanup(&vq->masked_notifier);
>  }
>  
> +static bool vhost_dev_used_memslots_is_exceeded(struct vhost_dev *hdev)
> +{
> +    if (hdev->vhost_ops->vhost_get_used_memslots() >
> +        hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
> +        error_report("vhost backend memory slots limit is less"
> +                " than current number of present memory slots");
> +        return true;
> +    }
> +
> +    return false;
> +}
> +
>  int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
>                     VhostBackendType backend_type, uint32_t busyloop_timeout)
>  {
> @@ -1252,10 +1264,7 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
>          goto fail;
>      }
>  
> -    if (hdev->vhost_ops->vhost_get_used_memslots() >
> -        hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
> -        error_report("vhost backend memory slots limit is less"
> -                " than current number of present memory slots");
> +    if (vhost_dev_used_memslots_is_exceeded(hdev)) {
why do you keep this check?
it seems to always be false



>          r = -1;
>          goto fail;
>      }
> @@ -1341,6 +1350,16 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
>      hdev->memory_changed = false;
>      memory_listener_register(&hdev->memory_listener, &address_space_memory);
>      QLIST_INSERT_HEAD(&vhost_devices, hdev, entry);
> +
> +    if (vhost_dev_used_memslots_is_exceeded(hdev)) {
> +        r = -1;
> +        if (busyloop_timeout) {
> +            goto fail_busyloop;
> +        } else {
> +            goto fail;
> +        }
> +    }
seems to be the right thing to do, since after registering the listener for the first time
used_memslots will be updated to the actual value.
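
As a self-contained illustration of that ordering (a toy with made-up names, not the actual QEMU code): the pre-registration check compares a counter that is still 0 for the first device of a backend type, so only a check made after memory_listener_register() can see the real slot count.

    #include <stdio.h>

    static unsigned int used_memslots;            /* written by the listener */
    static const unsigned int memslots_limit = 8; /* backend's region limit  */

    static int used_memslots_is_exceeded(void)
    {
        return used_memslots > memslots_limit;
    }

    static void register_memory_listener(unsigned int present_slots)
    {
        used_memslots = present_slots;            /* listener commit path */
    }

    int main(void)
    {
        /* check #1, as done early in vhost_dev_init(): counter still 0 */
        printf("before listener: exceeded=%d\n", used_memslots_is_exceeded());

        register_memory_listener(9);              /* e.g. boot RAM + 8 DIMMs */

        /* check #2, added by this patch after registration: now fires */
        printf("after listener:  exceeded=%d\n", used_memslots_is_exceeded());
        return 0;
    }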


I did some testing and without this hunk/patch

on 'device_add  virtio-net-pci,netdev=net0' qemu prints:

qemu-system-x86_64: vhost_set_mem_table failed: Argument list too long (7)
qemu-system-x86_64: unable to start vhost net: 7: falling back on userspace virtio

and network is operational in guest, but with this patch

"netdev_add ...,vhost-on" prints:

vhost backend memory slots limit is less than current number of present memory slots
vhost-net requested but could not be initialized

and following "device_add  virtio-net-pci,netdev=net0" prints:

TUNSETOFFLOAD ioctl() failed: Bad file descriptor
TUNSETOFFLOAD ioctl() failed: Bad file descriptor

adapter is still hot-plugged but guest networking is broken (can't get IP address via DHCP)

so the patch seems to introduce a regression, or something is broken elsewhere and this exposes the issue;
not sure what qemu's reaction should be in this case,
i.e. when netdev_add fails:
    1: should we fail the follow-up device_add, or
    2: make it fall back to userspace virtio

I'd go for #2,
Michael what's your take on it?

> +
>      return 0;
>  
>  fail_busyloop:
Michael S. Tsirkin Dec. 22, 2017, 9:15 p.m. UTC | #2
On Fri, Dec 22, 2017 at 07:48:55PM +0100, Igor Mammedov wrote:
> On Fri, 15 Dec 2017 16:45:55 +0800
> Jay Zhou <jianjay.zhou@huawei.com> wrote:
> 
> > If the VM already has N (N > 8) available memory slots for vhost-user,
> > the VM will crash in vhost_user_set_mem_table if we try to
> > hotplug the first vhost-user NIC.
> > This patch checks whether the memslots number is exceeded after updating
> > vhost_user_used_memslots.
> Can't understand commit message, pls rephrase (what is being fixed, and how it's fixed)
> also include reproducing steps for crash and maybe describe call flow/backtrace
> that triggers crash.
> 
> PS:
> I wasn't able to reproduce crash
> 
> > 
> > Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
> > ---
> >  hw/virtio/vhost.c | 27 +++++++++++++++++++++++----
> >  1 file changed, 23 insertions(+), 4 deletions(-)
> > 
> > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> > index 59a32e9..e45f5e2 100644
> > --- a/hw/virtio/vhost.c
> > +++ b/hw/virtio/vhost.c
> > @@ -1234,6 +1234,18 @@ static void vhost_virtqueue_cleanup(struct vhost_virtqueue *vq)
> >      event_notifier_cleanup(&vq->masked_notifier);
> >  }
> >  
> > +static bool vhost_dev_used_memslots_is_exceeded(struct vhost_dev *hdev)
> > +{
> > +    if (hdev->vhost_ops->vhost_get_used_memslots() >
> > +        hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
> > +        error_report("vhost backend memory slots limit is less"
> > +                " than current number of present memory slots");
> > +        return true;
> > +    }
> > +
> > +    return false;
> > +}
> > +
> >  int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
> >                     VhostBackendType backend_type, uint32_t busyloop_timeout)
> >  {
> > @@ -1252,10 +1264,7 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
> >          goto fail;
> >      }
> >  
> > -    if (hdev->vhost_ops->vhost_get_used_memslots() >
> > -        hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
> > -        error_report("vhost backend memory slots limit is less"
> > -                " than current number of present memory slots");
> > +    if (vhost_dev_used_memslots_is_exceeded(hdev)) {
> why do you keep this check?
> it seems to always be false
> 
> 
> 
> >          r = -1;
> >          goto fail;
> >      }
> > @@ -1341,6 +1350,16 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
> >      hdev->memory_changed = false;
> >      memory_listener_register(&hdev->memory_listener, &address_space_memory);
> >      QLIST_INSERT_HEAD(&vhost_devices, hdev, entry);
> > +
> > +    if (vhost_dev_used_memslots_is_exceeded(hdev)) {
> > +        r = -1;
> > +        if (busyloop_timeout) {
> > +            goto fail_busyloop;
> > +        } else {
> > +            goto fail;
> > +        }
> > +    }
> seems to be the right thing to do, since after registering the listener for the first time
> used_memslots will be updated to the actual value.
> 
> 
> I did some testing and without this hunk/patch
> 
> on 'device_add  virtio-net-pci,netdev=net0' qemu prints:
> 
> qemu-system-x86_64: vhost_set_mem_table failed: Argument list too long (7)
> qemu-system-x86_64: unable to start vhost net: 7: falling back on userspace virtio
> 
> and network is operational in guest, but with this patch
> 
> "netdev_add ...,vhost-on" prints:
> 
> vhost backend memory slots limit is less than current number of present memory slots
> vhost-net requested but could not be initialized
> 
> and following "device_add  virtio-net-pci,netdev=net0" prints:
> 
> TUNSETOFFLOAD ioctl() failed: Bad file descriptor
> TUNSETOFFLOAD ioctl() failed: Bad file descriptor
> 
> adapter is still hot-plugged but guest networking is broken (can't get IP address via DHCP)
> 
> so the patch seems to introduce a regression, or something is broken elsewhere and this exposes the issue;
> not sure what qemu's reaction should be in this case,
> i.e. when netdev_add fails:
>     1: should we fail the follow-up device_add, or
>     2: make it fall back to userspace virtio
> 
> I'd go for #2,
> Michael what's your take on it?

OK, but there's a vhost force flag; if that is set we definitely should
fail device_add.

Also, hotplug can follow device_add; it should be handled similarly.

> > +
> >      return 0;
> >  
> >  fail_busyloop:
Zhoujian (jay) Dec. 23, 2017, 8:27 a.m. UTC | #3
> -----Original Message-----
> From: Igor Mammedov [mailto:imammedo@redhat.com]
> Sent: Saturday, December 23, 2017 2:49 AM
> To: Zhoujian (jay) <jianjay.zhou@huawei.com>
> Cc: qemu-devel@nongnu.org; mst@redhat.com; Huangweidong (C)
> <weidong.huang@huawei.com>; Gonglei (Arei) <arei.gonglei@huawei.com>;
> wangxin (U) <wangxinxin.wang@huawei.com>; Liuzhe (Cloud Open Labs, NFV)
> <gary.liuzhe@huawei.com>; dgilbert@redhat.com
> Subject: Re: [PATCH v2 2/2] vhost: double check used memslots number
> 
> On Fri, 15 Dec 2017 16:45:55 +0800
> Jay Zhou <jianjay.zhou@huawei.com> wrote:
> 
> > If the VM already has N (N > 8) available memory slots for vhost-user,
> > the VM will crash in vhost_user_set_mem_table if we try to
> > hotplug the first vhost-user NIC.
> > This patch checks whether the memslots number is exceeded after updating
> > vhost_user_used_memslots.
> Can't understand commit message, pls rephrase (what is being fixed, and
> how it's fixed) also include reproducing steps for crash and maybe
> describe call flow/backtrace that triggers crash.

Sorry about my poor English

> 
> PS:
> I wasn't able to reproduce crash

Steps to reproduce:
(1) start up a VM successfully without any vhost device
(2) hotplug 8 DIMMs successfully
(3) hotplug a vhost-user NIC; the VM crashed, asserting
    at the line
        assert(fd_num < VHOST_MEMORY_MAX_NREGIONS);
    in vhost_user_set_mem_table()
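
A minimal standalone sketch of that failure mode (an approximation, not the exact QEMU code): vhost_user_set_mem_table() copies each fd-backed RAM region into a fixed array of VHOST_MEMORY_MAX_NREGIONS (8) entries, asserting before each store, so a ninth qualifying region aborts the whole VM.

    #include <assert.h>

    #define VHOST_MEMORY_MAX_NREGIONS 8

    /* toy stand-in for the region-copy loop in vhost_user_set_mem_table() */
    static void fill_regions(int nregions_with_fd)
    {
        int fds[VHOST_MEMORY_MAX_NREGIONS];
        int fd_num = 0;
        int i;

        for (i = 0; i < nregions_with_fd; i++) {
            /* boot RAM plus 8 hotplugged DIMMs yields 9 regions, so the
             * 9th iteration trips this assert and the process aborts */
            assert(fd_num < VHOST_MEMORY_MAX_NREGIONS);
            fds[fd_num++] = i;   /* stand-in for the region's mmap fd */
        }
        (void)fds;
    }

    int main(void)
    {
        fill_regions(9);         /* mirrors step (3) above: SIGABRT here */
        return 0;
    }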

Regards,
Jay

> >
> > Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
> > ---
> >  hw/virtio/vhost.c | 27 +++++++++++++++++++++++----
> >  1 file changed, 23 insertions(+), 4 deletions(-)
> >
> > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c index
> > 59a32e9..e45f5e2 100644
> > --- a/hw/virtio/vhost.c
> > +++ b/hw/virtio/vhost.c
> > @@ -1234,6 +1234,18 @@ static void vhost_virtqueue_cleanup(struct
> vhost_virtqueue *vq)
> >      event_notifier_cleanup(&vq->masked_notifier);
> >  }
> >
> > +static bool vhost_dev_used_memslots_is_exceeded(struct vhost_dev
> > +*hdev) {
> > +    if (hdev->vhost_ops->vhost_get_used_memslots() >
> > +        hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
> > +        error_report("vhost backend memory slots limit is less"
> > +                " than current number of present memory slots");
> > +        return true;
> > +    }
> > +
> > +    return false;
> > +}
> > +
> >  int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
> >                     VhostBackendType backend_type, uint32_t
> > busyloop_timeout)  { @@ -1252,10 +1264,7 @@ int vhost_dev_init(struct
> > vhost_dev *hdev, void *opaque,
> >          goto fail;
> >      }
> >
> > -    if (hdev->vhost_ops->vhost_get_used_memslots() >
> > -        hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
> > -        error_report("vhost backend memory slots limit is less"
> > -                " than current number of present memory slots");
> > +    if (vhost_dev_used_memslots_is_exceeded(hdev)) {
> why do you keep this check?
> it seems to always be false
> 
> 
> 
> >          r = -1;
> >          goto fail;
> >      }
> > @@ -1341,6 +1350,16 @@ int vhost_dev_init(struct vhost_dev *hdev, void
> *opaque,
> >      hdev->memory_changed = false;
> >      memory_listener_register(&hdev->memory_listener,
> &address_space_memory);
> >      QLIST_INSERT_HEAD(&vhost_devices, hdev, entry);
> > +
> > +    if (vhost_dev_used_memslots_is_exceeded(hdev)) {
> > +        r = -1;
> > +        if (busyloop_timeout) {
> > +            goto fail_busyloop;
> > +        } else {
> > +            goto fail;
> > +        }
> > +    }
> seems to be the right thing to do, since after registering the listener for the
> first time used_memslots will be updated to the actual value.
> 
> 
> I did some testing and without this hunk/patch
> 
> on 'device_add  virtio-net-pci,netdev=net0' qemu prints:
> 
> qemu-system-x86_64: vhost_set_mem_table failed: Argument list too long (7)
> qemu-system-x86_64: unable to start vhost net: 7: falling back on
> userspace virtio
> 
> and network is operational in guest, but with this patch
> 
> "netdev_add ...,vhost-on" prints:
> 
> vhost backend memory slots limit is less than current number of present
> memory slots vhost-net requested but could not be initialized
> 
> and following "device_add  virtio-net-pci,netdev=net0" prints:
> 
> TUNSETOFFLOAD ioctl() failed: Bad file descriptor TUNSETOFFLOAD ioctl()
> failed: Bad file descriptor
> 
> adapter is still hot-plugged but guest networking is broken (can't get IP
> address via DHCP)
> 
> so the patch seems to introduce a regression, or something is broken elsewhere and
> this exposes the issue; not sure what qemu's reaction should be in this case,
> i.e. when netdev_add fails:
>     1: should we fail the follow-up device_add, or
>     2: make it fall back to userspace virtio
> 
> I'd go for #2,
> Michael what's your take on it?
> 
> > +
> >      return 0;
> >
> >  fail_busyloop:
Zhoujian (jay) Dec. 23, 2017, 8:49 a.m. UTC | #4
[...]

> > ---
> >  hw/virtio/vhost.c | 27 +++++++++++++++++++++++----
> >  1 file changed, 23 insertions(+), 4 deletions(-)
> >
> > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c index
> > 59a32e9..e45f5e2 100644
> > --- a/hw/virtio/vhost.c
> > +++ b/hw/virtio/vhost.c
> > @@ -1234,6 +1234,18 @@ static void vhost_virtqueue_cleanup(struct
> vhost_virtqueue *vq)
> >      event_notifier_cleanup(&vq->masked_notifier);
> >  }
> >
> > +static bool vhost_dev_used_memslots_is_exceeded(struct vhost_dev
> > +*hdev) {
> > +    if (hdev->vhost_ops->vhost_get_used_memslots() >
> > +        hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
> > +        error_report("vhost backend memory slots limit is less"
> > +                " than current number of present memory slots");
> > +        return true;
> > +    }
> > +
> > +    return false;
> > +}
> > +
> >  int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
> >                     VhostBackendType backend_type, uint32_t
> > busyloop_timeout)  { @@ -1252,10 +1264,7 @@ int vhost_dev_init(struct
> > vhost_dev *hdev, void *opaque,
> >          goto fail;
> >      }
> >
> > -    if (hdev->vhost_ops->vhost_get_used_memslots() >
> > -        hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
> > -        error_report("vhost backend memory slots limit is less"
> > -                " than current number of present memory slots");
> > +    if (vhost_dev_used_memslots_is_exceeded(hdev)) {
> why do you keep this check?
> it seems to always be false
> 

If a vhost device has already been added successfully, i.e. its memory
listener has been registered and
hdev->vhost_ops->vhost_set_used_memslots() has been called (used_memslots
is updated there),
then if we hotplug another vhost device of the same backend type,
hdev->vhost_ops->vhost_get_used_memslots() will not be 0
(used_memslots is shared by all vhost devices of the same backend type),
so the check will not always be false.
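
A short sketch of that sharing (a toy modeled on the counter ops referenced above; the names are assumptions): the counter is a single static per backend type, so a second device of the same type already reads the value stored by the first device's listener.

    #include <stdio.h>

    /* one counter per backend type, shared by every device of that type */
    static unsigned int vhost_user_used_memslots;

    static void vhost_user_set_used_memslots(unsigned int nregions)
    {
        vhost_user_used_memslots = nregions;   /* called from the listener */
    }

    static unsigned int vhost_user_get_used_memslots(void)
    {
        return vhost_user_used_memslots;
    }

    int main(void)
    {
        /* device A is the first of its type: its init-time check reads 0 */
        printf("device A init sees %u slots\n", vhost_user_get_used_memslots());

        vhost_user_set_used_memslots(9);       /* A's listener has run */

        /* device B, same backend type: its init-time check now reads 9 */
        printf("device B init sees %u slots\n", vhost_user_get_used_memslots());
        return 0;
    }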

Regards,
Jay
Igor Mammedov Dec. 28, 2017, 11:29 a.m. UTC | #5
On Fri, 22 Dec 2017 23:15:09 +0200
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Fri, Dec 22, 2017 at 07:48:55PM +0100, Igor Mammedov wrote:
> > On Fri, 15 Dec 2017 16:45:55 +0800
> > Jay Zhou <jianjay.zhou@huawei.com> wrote:
> > 
> > > If the VM already has N (N > 8) available memory slots for vhost-user,
> > > the VM will crash in vhost_user_set_mem_table if we try to
> > > hotplug the first vhost-user NIC.
> > > This patch checks whether the memslots number is exceeded after updating
> > > vhost_user_used_memslots.
> > Can't understand commit message, pls rephrase (what is being fixed, and how it's fixed)
> > also include reproducing steps for crash and maybe describe call flow/backtrace
> > that triggers crash.
> > 
> > PS:
> > I wasn't able to reproduce crash
> > 
> > > 
> > > Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
> > > ---
> > >  hw/virtio/vhost.c | 27 +++++++++++++++++++++++----
> > >  1 file changed, 23 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> > > index 59a32e9..e45f5e2 100644
> > > --- a/hw/virtio/vhost.c
> > > +++ b/hw/virtio/vhost.c
> > > @@ -1234,6 +1234,18 @@ static void vhost_virtqueue_cleanup(struct vhost_virtqueue *vq)
> > >      event_notifier_cleanup(&vq->masked_notifier);
> > >  }
> > >  
> > > +static bool vhost_dev_used_memslots_is_exceeded(struct vhost_dev *hdev)
> > > +{
> > > +    if (hdev->vhost_ops->vhost_get_used_memslots() >
> > > +        hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
> > > +        error_report("vhost backend memory slots limit is less"
> > > +                " than current number of present memory slots");
> > > +        return true;
> > > +    }
> > > +
> > > +    return false;
> > > +}
> > > +
> > >  int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
> > >                     VhostBackendType backend_type, uint32_t busyloop_timeout)
> > >  {
> > > @@ -1252,10 +1264,7 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
> > >          goto fail;
> > >      }
> > >  
> > > -    if (hdev->vhost_ops->vhost_get_used_memslots() >
> > > -        hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
> > > -        error_report("vhost backend memory slots limit is less"
> > > -                " than current number of present memory slots");
> > > +    if (vhost_dev_used_memslots_is_exceeded(hdev)) {
> > why do you keep this check?
> > it seems to always be false
> > 
> > 
> > 
> > >          r = -1;
> > >          goto fail;
> > >      }
> > > @@ -1341,6 +1350,16 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
> > >      hdev->memory_changed = false;
> > >      memory_listener_register(&hdev->memory_listener, &address_space_memory);
> > >      QLIST_INSERT_HEAD(&vhost_devices, hdev, entry);
> > > +
> > > +    if (vhost_dev_used_memslots_is_exceeded(hdev)) {
> > > +        r = -1;
> > > +        if (busyloop_timeout) {
> > > +            goto fail_busyloop;
> > > +        } else {
> > > +            goto fail;
> > > +        }
> > > +    }
> > seems to be the right thing to do, since after registering the listener for the first time
> > used_memslots will be updated to the actual value.
> > 
> > 
> > I did some testing and without this hunk/patch
> > 
> > on 'device_add  virtio-net-pci,netdev=net0' qemu prints:
> > 
> > qemu-system-x86_64: vhost_set_mem_table failed: Argument list too long (7)
> > qemu-system-x86_64: unable to start vhost net: 7: falling back on userspace virtio
> > 
> > and network is operational in guest, but with this patch
> > 
> > "netdev_add ...,vhost-on" prints:
> > 
> > vhost backend memory slots limit is less than current number of present memory slots
> > vhost-net requested but could not be initialized
> > 
> > and following "device_add  virtio-net-pci,netdev=net0" prints:
> > 
> > TUNSETOFFLOAD ioctl() failed: Bad file descriptor
> > TUNSETOFFLOAD ioctl() failed: Bad file descriptor
> > 
> > adapter is still hot-plugged but guest networking is broken (can't get IP address via DHCP)
> > 
> > so the patch seems to introduce a regression, or something is broken elsewhere and this exposes the issue;
> > not sure what qemu's reaction should be in this case,
> > i.e. when netdev_add fails:
> >     1: should we fail the follow-up device_add, or
> >     2: make it fall back to userspace virtio
> > 
> > I'd go for #2,
> > Michael what's your take on it?
> 
> OK, but there's a vhost force flag; if that is set we definitely should
> fail device_add.
> 
> Also, hotplug can follow device_add; it should be handled similarly.
I was testing with vhost-kernel (as it doesn't need an extra environment to set up)
and it's able to fall back to the virtio transport.

However, in the case of vhost-user, is there even an option to fall back to?
Perhaps our only choice here is to fail backend creation cleanly,
so no one would be able to add a frontend referring to a non-existing backend.


PS:
even if we have to fail on error for vhost-user, this patch shouldn't
change the current vhost-kernel behavior (the fallback should still work)

> 
> > > +
> > >      return 0;
> > >  
> > >  fail_busyloop:
>
Zhoujian (jay) Jan. 3, 2018, 2:19 p.m. UTC | #6
> -----Original Message-----
> From: Igor Mammedov [mailto:imammedo@redhat.com]
> Sent: Thursday, December 28, 2017 7:29 PM
> To: Michael S. Tsirkin <mst@redhat.com>
> Cc: Huangweidong (C) <weidong.huang@huawei.com>; wangxin (U)
> <wangxinxin.wang@huawei.com>; qemu-devel@nongnu.org; Liuzhe (Cloud Open
> Labs, NFV) <gary.liuzhe@huawei.com>; dgilbert@redhat.com; Gonglei (Arei)
> <arei.gonglei@huawei.com>; Zhoujian (jay) <jianjay.zhou@huawei.com>
> Subject: Re: [Qemu-devel] [PATCH v2 2/2] vhost: double check used memslots
> number
> 
> On Fri, 22 Dec 2017 23:15:09 +0200
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> 
> > On Fri, Dec 22, 2017 at 07:48:55PM +0100, Igor Mammedov wrote:
> > > On Fri, 15 Dec 2017 16:45:55 +0800
> > > Jay Zhou <jianjay.zhou@huawei.com> wrote:
> > >
> > > > If the VM already has N (N > 8) available memory slots for vhost-user,
> > > > the VM will crash in vhost_user_set_mem_table if we try
> > > > to hotplug the first vhost-user NIC.
> > > > This patch checks whether the memslots number is exceeded after
> > > > updating vhost_user_used_memslots.
> > > Can't understand commit message, pls rephrase (what is being fixed,
> > > and how it's fixed) also include reproducing steps for crash and
> > > maybe describe call flow/backtrace that triggers crash.
> > >
> > > PS:
> > > I wasn't able to reproduce crash
> > >
> > > >
> > > > Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
> > > > ---
> > > >  hw/virtio/vhost.c | 27 +++++++++++++++++++++++----
> > > >  1 file changed, 23 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c index
> > > > 59a32e9..e45f5e2 100644
> > > > --- a/hw/virtio/vhost.c
> > > > +++ b/hw/virtio/vhost.c
> > > > @@ -1234,6 +1234,18 @@ static void vhost_virtqueue_cleanup(struct
> vhost_virtqueue *vq)
> > > >      event_notifier_cleanup(&vq->masked_notifier);
> > > >  }
> > > >
> > > > +static bool vhost_dev_used_memslots_is_exceeded(struct vhost_dev
> > > > +*hdev) {
> > > > +    if (hdev->vhost_ops->vhost_get_used_memslots() >
> > > > +        hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
> > > > +        error_report("vhost backend memory slots limit is less"
> > > > +                " than current number of present memory slots");
> > > > +        return true;
> > > > +    }
> > > > +
> > > > +    return false;
> > > > +}
> > > > +
> > > >  int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
> > > >                     VhostBackendType backend_type, uint32_t
> > > > busyloop_timeout)  { @@ -1252,10 +1264,7 @@ int
> > > > vhost_dev_init(struct vhost_dev *hdev, void *opaque,
> > > >          goto fail;
> > > >      }
> > > >
> > > > -    if (hdev->vhost_ops->vhost_get_used_memslots() >
> > > > -        hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
> > > > -        error_report("vhost backend memory slots limit is less"
> > > > -                " than current number of present memory slots");
> > > > +    if (vhost_dev_used_memslots_is_exceeded(hdev)) {
> > > why do you keep this check?
> > > it seems to always be false
> > >
> > >
> > >
> > > >          r = -1;
> > > >          goto fail;
> > > >      }
> > > > @@ -1341,6 +1350,16 @@ int vhost_dev_init(struct vhost_dev *hdev,
> void *opaque,
> > > >      hdev->memory_changed = false;
> > > >      memory_listener_register(&hdev->memory_listener,
> &address_space_memory);
> > > >      QLIST_INSERT_HEAD(&vhost_devices, hdev, entry);
> > > > +
> > > > +    if (vhost_dev_used_memslots_is_exceeded(hdev)) {
> > > > +        r = -1;
> > > > +        if (busyloop_timeout) {
> > > > +            goto fail_busyloop;
> > > > +        } else {
> > > > +            goto fail;
> > > > +        }
> > > > +    }
> > > seems to be the right thing to do, since after registering the listener for
> > > the first time used_memslots will be updated to the actual value.
> > >
> > >
> > > I did some testing and without this hunk/patch
> > >
> > > on 'device_add  virtio-net-pci,netdev=net0' qemu prints:
> > >
> > > qemu-system-x86_64: vhost_set_mem_table failed: Argument list too
> > > long (7)
> > > qemu-system-x86_64: unable to start vhost net: 7: falling back on
> > > userspace virtio

Error code 7 is E2BIG, which means

    if (mem.nregions > max_mem_regions)
        return -E2BIG;

happened in the kernel.

> > >
> > > and network is operational in guest, but with this patch
> > >
> > > "netdev_add ...,vhost-on" prints:
> > >
> > > vhost backend memory slots limit is less than current number of
> > > present memory slots vhost-net requested but could not be
> > > initialized
> > >
> > > and following "device_add  virtio-net-pci,netdev=net0" prints:
> > >
> > > TUNSETOFFLOAD ioctl() failed: Bad file descriptor TUNSETOFFLOAD
> > > ioctl() failed: Bad file descriptor
> > >
> > > adapter is still hot-plugged but guest networking is broken (can't
> > > get IP address via DHCP)
> > >
> > > so the patch seems to introduce a regression, or something is broken elsewhere
> > > and this exposes the issue; not sure what qemu's reaction should be in
> > > this case, i.e. when netdev_add fails:
> > >     1: should we fail the follow-up device_add, or
> > >     2: make it fall back to userspace virtio
> > >
> > > I'd go for #2,
> > > Michael what's your take on it?
> >
> > OK, but there's a vhost force flag; if that is set we definitely should
> > fail device_add.
> >
> > Also, hotplug can follow device_add; it should be handled similarly.
> I was testing with vhost-kernel (as it doesn't need an extra environment to
> set up) and it's able to fall back to the virtio transport.
> 
> However, in the case of vhost-user, is there even an option to fall back to?

Using an error code (like vhost-kernel does) instead of asserting in
vhost_user_set_mem_table(), I have tested:
"netdev_add vhost-user,chardev=charnet0,id=hostnet0" succeeds, and the
following "device_add virtio-net-pci,netdev=hostnet0,id=net0,bus=pci.0" prints:

"qemu-system-x86_64: vhost_set_mem_table failed: Interrupted system call (4)
qemu-system-x86_64: unable to start vhost net: 4: falling back on userspace virtio"

or

"qemu-system-x86_64: vhost_set_mem_table failed: Resource temporarily unavailable (11)
qemu-system-x86_64: unable to start vhost net: 11: falling back on userspace virtio"

the adapter is still hot-plugged but guest networking is broken (can't get an IP
address via DHCP); does this mean it makes no sense for vhost-user to fall
back?
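
For reference, a sketch of the error-code variant described above (an assumed shape only; the real function also marshals the regions and fds): the assert is replaced by a reported error, so the caller can decide whether to fall back or fail.

    #include <errno.h>
    #include <stdio.h>

    #define VHOST_MEMORY_MAX_NREGIONS 8

    /* toy region-copy loop that reports an error instead of aborting */
    static int fill_regions(int nregions_with_fd)
    {
        int fd_num = 0;
        int i;

        for (i = 0; i < nregions_with_fd; i++) {
            if (fd_num >= VHOST_MEMORY_MAX_NREGIONS) {
                /* surface an error, like the vhost-kernel E2BIG path */
                fprintf(stderr, "vhost_set_mem_table failed: too many regions\n");
                return -E2BIG;
            }
            fd_num++;
        }
        return 0;
    }

    int main(void)
    {
        /* 9 regions: returns -E2BIG instead of killing the VM */
        return fill_regions(9) < 0 ? 1 : 0;
    }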

> Perhaps our only choice here is to fail backend creation cleanly, so no
> one would be able to add a frontend referring to a non-existing backend.

Not sure what to do.

> 
> 
> PS:
> even if we have to fail on error for vhost-user, this patch shouldn't
> change the current vhost-kernel behavior (the fallback should still work)

Does it mean vhost-kernel doesn't need to care about the value of used_memslots
(because it's able to fall back to userspace virtio)?

Is it enough to use an error code in vhost_user_set_mem_table() and
vhost_kernel_set_mem_table()?
  1. If yes, how about removing the check of used_memslots entirely?
  2. If no, is it enough to check used_memslots for vhost-user only after
    the memory listener is registered?


Regards,
Jay

> 
> >
> > > > +
> > > >      return 0;
> > > >
> > > >  fail_busyloop:
> >
Patch

diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 59a32e9..e45f5e2 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -1234,6 +1234,18 @@  static void vhost_virtqueue_cleanup(struct vhost_virtqueue *vq)
     event_notifier_cleanup(&vq->masked_notifier);
 }
 
+static bool vhost_dev_used_memslots_is_exceeded(struct vhost_dev *hdev)
+{
+    if (hdev->vhost_ops->vhost_get_used_memslots() >
+        hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
+        error_report("vhost backend memory slots limit is less"
+                " than current number of present memory slots");
+        return true;
+    }
+
+    return false;
+}
+
 int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
                    VhostBackendType backend_type, uint32_t busyloop_timeout)
 {
@@ -1252,10 +1264,7 @@  int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
         goto fail;
     }
 
-    if (hdev->vhost_ops->vhost_get_used_memslots() >
-        hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
-        error_report("vhost backend memory slots limit is less"
-                " than current number of present memory slots");
+    if (vhost_dev_used_memslots_is_exceeded(hdev)) {
         r = -1;
         goto fail;
     }
@@ -1341,6 +1350,16 @@  int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
     hdev->memory_changed = false;
     memory_listener_register(&hdev->memory_listener, &address_space_memory);
     QLIST_INSERT_HEAD(&vhost_devices, hdev, entry);
+
+    if (vhost_dev_used_memslots_is_exceeded(hdev)) {
+        r = -1;
+        if (busyloop_timeout) {
+            goto fail_busyloop;
+        } else {
+            goto fail;
+        }
+    }
+
     return 0;
 
 fail_busyloop: