[RFC,6/6] vdpa: remove net cvq migration blocker

Message ID 20230706191227.835526-7-eperezma@redhat.com (mailing list archive)
State New, archived
Series Enable vdpa net migration with features depending on CVQ

Commit Message

Eugenio Perez Martin July 6, 2023, 7:12 p.m. UTC
Now that we have added migration blockers for devices that do not support
all the needed features, remove the general blocker applied to all net
devices with CVQ.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 12 ------------
 1 file changed, 12 deletions(-)
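
(For context, a rough sketch of the shape such a per-feature blocker could
take; the feature bit, the device_features variable and the placement are
illustrative assumptions, not taken from the earlier patches in the series.)

if (!virtio_has_feature(device_features, VIRTIO_NET_F_CTRL_MAC_ADDR)) {
    /*
     * Illustrative only: without this feature the MAC cannot be restored
     * through CVQ at the destination, so migration stays blocked for this
     * device, while devices exposing all the needed features can migrate.
     */
    error_setg(&s->vhost_vdpa.migration_blocker,
               "vdpa device lacks VIRTIO_NET_F_CTRL_MAC_ADDR");
}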

Comments

Jason Wang July 10, 2023, 3:53 a.m. UTC | #1
On Fri, Jul 7, 2023 at 3:12 AM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> Now that we have added migration blockers for devices that do not support
> all the needed features, remove the general blocker applied to all net
> devices with CVQ.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>

I wonder what the difference would be compared to just starting cvq first
in vhost_net_start()?

Thanks

> ---
>  net/vhost-vdpa.c | 12 ------------
>  1 file changed, 12 deletions(-)
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index d5970e9f06..5432b50498 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -996,18 +996,6 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
>          s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
>          s->vhost_vdpa.shadow_vq_ops_opaque = s;
>          s->cvq_isolated = cvq_isolated;
> -
> -        /*
> -         * TODO: We cannot migrate devices with CVQ and no x-svq enabled as
> -         * there is no way to set the device state (MAC, MQ, etc) before
> -         * starting the datapath.
> -         *
> -         * Migration blocker ownership now belongs to s->vhost_vdpa.
> -         */
> -        if (!svq) {
> -            error_setg(&s->vhost_vdpa.migration_blocker,
> -                       "net vdpa cannot migrate with CVQ feature");
> -        }
>      }
>      ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
>      if (ret) {
> --
> 2.39.3
>
Eugenio Perez Martin July 10, 2023, 7:37 a.m. UTC | #2
On Mon, Jul 10, 2023 at 5:54 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Fri, Jul 7, 2023 at 3:12 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> >
> > Now that we have added migration blockers for devices that do not support
> > all the needed features, remove the general blocker applied to all net
> > devices with CVQ.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
>
> I wonder what the difference would be compared to just starting cvq first
> in vhost_net_start()?
>

That's interesting. It would complicate the for loop in
vhost_net_start, but I think it's less complicated than
should_start_op.

It would be something like moving from:

for (i = 0; i < nvhosts; i++) {
    if (i < data_queue_pairs) {
        peer = qemu_get_peer(ncs, i);
    } else {
        peer = qemu_get_peer(ncs, n->max_queue_pairs);
    }

    ...

    r = vhost_net_start_one(get_vhost_net(peer), dev);
    if (r < 0) {
        goto err_start;
    }
}

To:

for (i = 0; i < nvhosts; i++) {
    if (i == 0 && cvq) {
        peer = qemu_get_peer(ncs, n->max_queue_pairs);
    } else {
        peer = qemu_get_peer(ncs, i - cvq);
    }

    ...

    r = vhost_net_start_one(get_vhost_net(peer), dev);
    if (r < 0) {
        goto err_start;
    }
}

Is this what you have in mind?
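
For reference, a minimal standalone sketch (the concrete values and the
max_queue_pairs == data_queue_pairs assumption are illustrative, not taken
from the series) of the peer-index mapping the proposed loop would produce:
the control virtqueue peer at n->max_queue_pairs goes first, and the
remaining iterations map back to data queue pairs via i - cvq.

#include <stdio.h>

int main(void)
{
    int data_queue_pairs = 4, cvq = 1;       /* assumed example values */
    int max_queue_pairs = data_queue_pairs;  /* assumed equal for this sketch */
    int nvhosts = data_queue_pairs + cvq;

    for (int i = 0; i < nvhosts; i++) {
        /* same peer selection as the proposed loop above */
        int peer_index = (i == 0 && cvq) ? max_queue_pairs : i - cvq;

        printf("iteration %d -> peer index %d%s\n", i, peer_index,
               (i == 0 && cvq) ? " (CVQ, started first)" : "");
    }

    return 0;
}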

Thanks!

> Thanks
>
> > ---
> >  net/vhost-vdpa.c | 12 ------------
> >  1 file changed, 12 deletions(-)
> >
> > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > index d5970e9f06..5432b50498 100644
> > --- a/net/vhost-vdpa.c
> > +++ b/net/vhost-vdpa.c
> > @@ -996,18 +996,6 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> >          s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
> >          s->vhost_vdpa.shadow_vq_ops_opaque = s;
> >          s->cvq_isolated = cvq_isolated;
> > -
> > -        /*
> > -         * TODO: We cannot migrate devices with CVQ and no x-svq enabled as
> > -         * there is no way to set the device state (MAC, MQ, etc) before
> > -         * starting the datapath.
> > -         *
> > -         * Migration blocker ownership now belongs to s->vhost_vdpa.
> > -         */
> > -        if (!svq) {
> > -            error_setg(&s->vhost_vdpa.migration_blocker,
> > -                       "net vdpa cannot migrate with CVQ feature");
> > -        }
> >      }
> >      ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
> >      if (ret) {
> > --
> > 2.39.3
> >
>
Eugenio Perez Martin July 20, 2023, 8:17 a.m. UTC | #3
On Mon, Jul 10, 2023 at 9:37 AM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> On Mon, Jul 10, 2023 at 5:54 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Fri, Jul 7, 2023 at 3:12 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> > >
> > > Now that we have added migration blockers for devices that do not support
> > > all the needed features, remove the general blocker applied to all net
> > > devices with CVQ.
> > >
> > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> >
> > I wonder what the difference would be compared to just starting cvq first
> > in vhost_net_start()?
> >
>
> That's interesting. It would complicate the for loop in
> vhost_net_start, but I think it's less complicated than
> should_start_op.
>
> It would be something like moving from:
>
> for (i = 0; i < nvhosts; i++) {
>     if (i < data_queue_pairs) {
>         peer = qemu_get_peer(ncs, i);
>     } else {
>         peer = qemu_get_peer(ncs, n->max_queue_pairs);
>     }
>
>     ...
>
>     r = vhost_net_start_one(get_vhost_net(peer), dev);
>     if (r < 0) {
>         goto err_start;
>     }
> }
>
> To:
>
> for (i = 0; i < nvhosts; i++) {
>     if (i == 0 && cvq) {
>         peer = qemu_get_peer(ncs, n->max_queue_pairs);
>     } else {
>         peer = qemu_get_peer(ncs, i - cvq);
>     }
>
>     ...
>
>     r = vhost_net_start_one(get_vhost_net(peer), dev);
>     if (r < 0) {
>         goto err_start;
>     }
> }
>
> Is this what you have in mind?
>

Friendly ping.

Thanks!
Jason Wang July 20, 2023, 8:23 a.m. UTC | #4
On Thu, Jul 20, 2023 at 4:18 PM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> On Mon, Jul 10, 2023 at 9:37 AM Eugenio Perez Martin
> <eperezma@redhat.com> wrote:
> >
> > On Mon, Jul 10, 2023 at 5:54 AM Jason Wang <jasowang@redhat.com> wrote:
> > >
> > > On Fri, Jul 7, 2023 at 3:12 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> > > >
> > > > Now that we have added migration blockers for devices that do not support
> > > > all the needed features, remove the general blocker applied to all net
> > > > devices with CVQ.
> > > >
> > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > >
> > > I wonder what the difference would be compared to just starting cvq first
> > > in vhost_net_start()?
> > >
> >
> > That's interesting. It would complicate the for loop in
> > vhost_net_start, but I think it's less complicated than
> > should_start_op.
> >
> > It would be something like moving from:
> >
> > for (i = 0; i < nvhosts; i++) {
> >     if (i < data_queue_pairs) {
> >         peer = qemu_get_peer(ncs, i);
> >     } else {
> >         peer = qemu_get_peer(ncs, n->max_queue_pairs);
> >     }
> >
> >     ...
> >
> >     r = vhost_net_start_one(get_vhost_net(peer), dev);
> >     if (r < 0) {
> >         goto err_start;
> >     }
> > }
> >
> > To:
> >
> > for (i = 0; i < nvhosts; i++) {
> >     if (i == 0 && cvq) {
> >         peer = qemu_get_peer(ncs, n->max_queue_pairs);
> >     } else {
> >         peer = qemu_get_peer(ncs, i - cvq);
> >     }
> >
> >     ...
> >
> >     r = vhost_net_start_one(get_vhost_net(peer), dev);
> >     if (r < 0) {
> >         goto err_start;
> >     }
> > }
> >
> > Is this what you have in mind?
> >
>

Yes, something like this.

Thanks

> Friendly ping.
>
> Thanks!
>

Patch

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index d5970e9f06..5432b50498 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -996,18 +996,6 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
         s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
         s->vhost_vdpa.shadow_vq_ops_opaque = s;
         s->cvq_isolated = cvq_isolated;
-
-        /*
-         * TODO: We cannot migrate devices with CVQ and no x-svq enabled as
-         * there is no way to set the device state (MAC, MQ, etc) before
-         * starting the datapath.
-         *
-         * Migration blocker ownership now belongs to s->vhost_vdpa.
-         */
-        if (!svq) {
-            error_setg(&s->vhost_vdpa.migration_blocker,
-                       "net vdpa cannot migrate with CVQ feature");
-        }
     }
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
     if (ret) {