| Message ID | 20230706191227.835526-7-eperezma@redhat.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | Enable vdpa net migration with features depending on CVQ |
On Fri, Jul 7, 2023 at 3:12 AM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> Now that we have added migration blockers if the device does not support
> all the needed features, remove the general blocker applied to all net
> devices with CVQ.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>

I wonder what's the difference compared to just starting cvq first in
vhost_net_start()?

Thanks

> ---
>  net/vhost-vdpa.c | 12 ------------
>  1 file changed, 12 deletions(-)
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index d5970e9f06..5432b50498 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -996,18 +996,6 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
>          s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
>          s->vhost_vdpa.shadow_vq_ops_opaque = s;
>          s->cvq_isolated = cvq_isolated;
> -
> -        /*
> -         * TODO: We cannot migrate devices with CVQ and no x-svq enabled as
> -         * there is no way to set the device state (MAC, MQ, etc) before
> -         * starting the datapath.
> -         *
> -         * Migration blocker ownership now belongs to s->vhost_vdpa.
> -         */
> -        if (!svq) {
> -            error_setg(&s->vhost_vdpa.migration_blocker,
> -                       "net vdpa cannot migrate with CVQ feature");
> -        }
>      }
>      ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
>      if (ret) {
> --
> 2.39.3
>
On Mon, Jul 10, 2023 at 5:54 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Fri, Jul 7, 2023 at 3:12 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> >
> > Now that we have added migration blockers if the device does not support
> > all the needed features, remove the general blocker applied to all net
> > devices with CVQ.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
>
> I wonder what's the difference compared to just starting cvq first in
> vhost_net_start()?
>

That's interesting. It would complicate the for loop in
vhost_net_start, but I think it's less complicated than
should_start_op.

It would be something like moving from:

    for (i = 0; i < nvhosts; i++) {
        if (i < data_queue_pairs) {
            peer = qemu_get_peer(ncs, i);
        } else {
            peer = qemu_get_peer(ncs, n->max_queue_pairs);
        }

        ...

        r = vhost_net_start_one(get_vhost_net(peer), dev);
        if (r < 0) {
            goto err_start;
        }
    }

To:

    for (i = 0; i < nvhosts; i++) {
        if (i == 0 && cvq) {
            peer = qemu_get_peer(ncs, n->max_queue_pairs);
        } else {
            peer = qemu_get_peer(ncs, i - cvq);
        }

        ...

        r = vhost_net_start_one(get_vhost_net(peer), dev);
        if (r < 0) {
            goto err_start;
        }
    }

Is this what you have in mind?

Thanks!

> Thanks
>
> > ---
> >  net/vhost-vdpa.c | 12 ------------
> >  1 file changed, 12 deletions(-)
> >
> > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > index d5970e9f06..5432b50498 100644
> > --- a/net/vhost-vdpa.c
> > +++ b/net/vhost-vdpa.c
> > @@ -996,18 +996,6 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> >          s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
> >          s->vhost_vdpa.shadow_vq_ops_opaque = s;
> >          s->cvq_isolated = cvq_isolated;
> > -
> > -        /*
> > -         * TODO: We cannot migrate devices with CVQ and no x-svq enabled as
> > -         * there is no way to set the device state (MAC, MQ, etc) before
> > -         * starting the datapath.
> > -         *
> > -         * Migration blocker ownership now belongs to s->vhost_vdpa.
> > -         */
> > -        if (!svq) {
> > -            error_setg(&s->vhost_vdpa.migration_blocker,
> > -                       "net vdpa cannot migrate with CVQ feature");
> > -        }
> >      }
> >      ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
> >      if (ret) {
> > --
> > 2.39.3
> >
>
On Mon, Jul 10, 2023 at 9:37 AM Eugenio Perez Martin <eperezma@redhat.com> wrote:
>
> On Mon, Jul 10, 2023 at 5:54 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Fri, Jul 7, 2023 at 3:12 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> > >
> > > Now that we have added migration blockers if the device does not support
> > > all the needed features, remove the general blocker applied to all net
> > > devices with CVQ.
> > >
> > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> >
> > I wonder what's the difference compared to just starting cvq first in
> > vhost_net_start()?
> >
>
> That's interesting. It would complicate the for loop in
> vhost_net_start, but I think it's less complicated than
> should_start_op.
>
> It would be something like moving from:
>
>     for (i = 0; i < nvhosts; i++) {
>         if (i < data_queue_pairs) {
>             peer = qemu_get_peer(ncs, i);
>         } else {
>             peer = qemu_get_peer(ncs, n->max_queue_pairs);
>         }
>
>         ...
>
>         r = vhost_net_start_one(get_vhost_net(peer), dev);
>         if (r < 0) {
>             goto err_start;
>         }
>     }
>
> To:
>
>     for (i = 0; i < nvhosts; i++) {
>         if (i == 0 && cvq) {
>             peer = qemu_get_peer(ncs, n->max_queue_pairs);
>         } else {
>             peer = qemu_get_peer(ncs, i - cvq);
>         }
>
>         ...
>
>         r = vhost_net_start_one(get_vhost_net(peer), dev);
>         if (r < 0) {
>             goto err_start;
>         }
>     }
>
> Is this what you have in mind?
>

Friendly ping.

Thanks!
On Thu, Jul 20, 2023 at 4:18 PM Eugenio Perez Martin <eperezma@redhat.com> wrote:
>
> On Mon, Jul 10, 2023 at 9:37 AM Eugenio Perez Martin
> <eperezma@redhat.com> wrote:
> >
> > On Mon, Jul 10, 2023 at 5:54 AM Jason Wang <jasowang@redhat.com> wrote:
> > >
> > > On Fri, Jul 7, 2023 at 3:12 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> > > >
> > > > Now that we have added migration blockers if the device does not support
> > > > all the needed features, remove the general blocker applied to all net
> > > > devices with CVQ.
> > > >
> > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > >
> > > I wonder what's the difference compared to just starting cvq first in
> > > vhost_net_start()?
> > >
> >
> > That's interesting. It would complicate the for loop in
> > vhost_net_start, but I think it's less complicated than
> > should_start_op.
> >
> > It would be something like moving from:
> >
> >     for (i = 0; i < nvhosts; i++) {
> >         if (i < data_queue_pairs) {
> >             peer = qemu_get_peer(ncs, i);
> >         } else {
> >             peer = qemu_get_peer(ncs, n->max_queue_pairs);
> >         }
> >
> >         ...
> >
> >         r = vhost_net_start_one(get_vhost_net(peer), dev);
> >         if (r < 0) {
> >             goto err_start;
> >         }
> >     }
> >
> > To:
> >
> >     for (i = 0; i < nvhosts; i++) {
> >         if (i == 0 && cvq) {
> >             peer = qemu_get_peer(ncs, n->max_queue_pairs);
> >         } else {
> >             peer = qemu_get_peer(ncs, i - cvq);
> >         }
> >
> >         ...
> >
> >         r = vhost_net_start_one(get_vhost_net(peer), dev);
> >         if (r < 0) {
> >             goto err_start;
> >         }
> >     }
> >
> > Is this what you have in mind?
> >

Yes, something like this.

Thanks

> Friendly ping.
>
> Thanks!
>
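The effect of the agreed reordering is easy to check in isolation. The following is a minimal standalone sketch, not QEMU code: it reuses only the index arithmetic from the two loops quoted above, with made-up values for data_queue_pairs, max_queue_pairs, and cvq, and prints the order in which each loop variant would start the vhost peers.

```c
#include <stdio.h>

/*
 * Standalone sketch (not QEMU code): compare the peer start order of the
 * current loop in vhost_net_start() with the "CVQ first" variant from the
 * thread above. The values below are illustrative assumptions.
 */
int main(void)
{
    int data_queue_pairs = 3;   /* assumed: 3 data queue pairs */
    int max_queue_pairs = 3;    /* the CVQ peer sits at this index */
    int cvq = 1;                /* device has a control virtqueue */
    int nvhosts = data_queue_pairs + cvq;

    printf("current order:  ");
    for (int i = 0; i < nvhosts; i++) {
        /* original: data queue pairs 0..n-1 first, CVQ peer last */
        printf("%d ", i < data_queue_pairs ? i : max_queue_pairs);
    }

    printf("\nproposed order: ");
    for (int i = 0; i < nvhosts; i++) {
        /* proposed: CVQ peer first, then data queue pairs shifted by cvq */
        printf("%d ", (i == 0 && cvq) ? max_queue_pairs : i - cvq);
    }
    printf("\n");
    return 0;
}
```

With these assumed values the current loop starts peers 0 1 2 3 while the proposed one starts 3 0 1 2; that is, the CVQ peer comes up first, so device state (MAC, MQ, etc.) can be restored before any data queue starts, which is exactly the ordering problem the removed blocker was guarding against.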
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index d5970e9f06..5432b50498 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -996,18 +996,6 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
         s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
         s->vhost_vdpa.shadow_vq_ops_opaque = s;
         s->cvq_isolated = cvq_isolated;
-
-        /*
-         * TODO: We cannot migrate devices with CVQ and no x-svq enabled as
-         * there is no way to set the device state (MAC, MQ, etc) before
-         * starting the datapath.
-         *
-         * Migration blocker ownership now belongs to s->vhost_vdpa.
-         */
-        if (!svq) {
-            error_setg(&s->vhost_vdpa.migration_blocker,
-                       "net vdpa cannot migrate with CVQ feature");
-        }
     }
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
     if (ret) {
Now that we have added migration blockers if the device does not support
all the needed features, remove the general blocker applied to all net
devices with CVQ.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 12 ------------
 1 file changed, 12 deletions(-)
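As a rough illustration of what the commit message implies, the unconditional blocker removed above gives way to blockers set only when a needed feature is actually missing. The sketch below is hypothetical, not code from this series: has_needed_features() is an invented placeholder for the checks the earlier patches add, and only error_setg() and the migration_blocker field come from the quoted patch context.

```c
/*
 * Hypothetical sketch, not the series' actual code: set the per-device
 * blocker only when the device lacks the features needed to restore its
 * state (MAC, MQ, ...) through SVQ, instead of unconditionally for every
 * device with a CVQ. has_needed_features() is an invented placeholder.
 */
if (!svq && !has_needed_features(s)) {
    error_setg(&s->vhost_vdpa.migration_blocker,
               "vdpa device lacks features needed to migrate with CVQ");
}
```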