Message ID | 20230213191929.1547497-1-eperezma@redhat.com (mailing list archive)
---|---
State | New, archived
Series | vhost: accept VIRTIO_F_ORDER_PLATFORM as a valid SVQ feature
On Tue, Feb 14, 2023 at 3:19 AM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> VIRTIO_F_ORDER_PLATFORM indicates that memory accesses by the driver and
> the device are ordered in a way described by the platform. Since vDPA
> devices may be backed by hardware devices, let's allow
> VIRTIO_F_ORDER_PLATFORM.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>  hw/virtio/vhost-shadow-virtqueue.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> index 4307296358..6bb1998f12 100644
> --- a/hw/virtio/vhost-shadow-virtqueue.c
> +++ b/hw/virtio/vhost-shadow-virtqueue.c
> @@ -34,6 +34,7 @@ bool vhost_svq_valid_features(uint64_t features, Error **errp)
>          switch (b) {
>          case VIRTIO_F_ANY_LAYOUT:
>          case VIRTIO_RING_F_EVENT_IDX:
> +        case VIRTIO_F_ORDER_PLATFORM:

Do we need to add this bit to vdpa_feature_bits[] as well?

Thanks

>              continue;
>
>          case VIRTIO_F_ACCESS_PLATFORM:
> --
> 2.31.1
>
On Tue, Feb 14, 2023 at 7:31 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Tue, Feb 14, 2023 at 3:19 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> >
> > VIRTIO_F_ORDER_PLATFORM indicates that memory accesses by the driver and
> > the device are ordered in a way described by the platform. Since vDPA
> > devices may be backed by hardware devices, let's allow
> > VIRTIO_F_ORDER_PLATFORM.
> >
> > [...]
> >
> > +        case VIRTIO_F_ORDER_PLATFORM:
>
> Do we need to add this bit to vdpa_feature_bits[] as well?
>

If we want to pass it to the guest, yes we should. I'll send another
patch for it.

But I think that should be done on top / in parallel actually.

Open question: should all vDPA hardware devices offer it, or is this
only needed on specific archs?

Thanks!
On Tue, Feb 14, 2023 at 08:02:08AM +0100, Eugenio Perez Martin wrote:
> On Tue, Feb 14, 2023 at 7:31 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > [...]
> >
> > Do we need to add this bit to vdpa_feature_bits[] as well?
> >
>
> If we want to pass it to the guest, yes we should. I'll send another
> patch for it.
>
> But I think that should be done on top / in parallel actually.
>
> Open question: should all vDPA hardware devices offer it, or is this
> only needed on specific archs?
>
> Thanks!

I don't get what this is doing at all, frankly. vdpa has to pass
VIRTIO_F_ORDER_PLATFORM to the guest at all times, unless:
- it's an x86 host, where it kind of works anyway
- it's vduse, which frankly is so slow we can do VIRTIO_F_ORDER_PLATFORM
  anyway

In short, VIRTIO_F_ORDER_PLATFORM has nothing to do with the device and
everything to do with QEMU itself.

Yes, we can allow VIRTIO_F_ORDER_PLATFORM from the kernel, but given we
never did, at this point it would need a protocol feature bit. I don't
think it's worth it...
On Tue, Feb 14, 2023 at 8:51 AM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> [...]
>
> I don't get what this is doing at all, frankly. vdpa has to pass
> VIRTIO_F_ORDER_PLATFORM to the guest at all times, unless:
> - it's an x86 host, where it kind of works anyway
> - it's vduse, which frankly is so slow we can do VIRTIO_F_ORDER_PLATFORM
>   anyway

That was my understanding, adding vdpasim to the list of exceptions
(please correct me if I'm wrong).

> In short, VIRTIO_F_ORDER_PLATFORM has nothing to do with the device and
> everything to do with QEMU itself.
>

I have little experience outside of x86, so I may be wrong here. My
understanding is that this feature lets the guest optimize the barriers
around memory operations:
* If VIRTIO_F_ORDER_PLATFORM is not negotiated, the driver can use
  softer memory barriers that only protect ordering between different
  processors.
* If VIRTIO_F_ORDER_PLATFORM is negotiated, stronger ordering is needed
  that also protects transport (PCI) accesses.

This is backed up by the standard:

  This implies that the driver needs to use memory barriers suitable for
  devices described by the platform; e.g. for the PCI transport in the
  case of hardware PCI devices.

And by comments in the virtio drivers:

  For virtio_pci on SMP, we don't need to order with respect to MMIO
  accesses through relaxed memory I/O windows, so virt_mb() et al are
  sufficient.

  For using virtio to talk to real devices (eg. other heterogeneous
  CPUs) we do need real barriers.

So the sentence "VIRTIO_F_ORDER_PLATFORM has nothing to do with the
device and everything to do with QEMU itself" is actually the reverse:
it has everything to do with devices?

> Yes, we can allow VIRTIO_F_ORDER_PLATFORM from the kernel, but given we
> never did, at this point it would need a protocol feature bit. I don't
> think it's worth it...
>

With "from the kernel", do you mean in vhost-kernel or in the virtio
ring driver? The virtio ring driver already supports it.

I'm OK with leaving this for the future, but that means hw devices on
non-x86 platforms may not work correctly, right?

Thanks!
On Tue, Feb 14, 2023 at 09:36:01AM +0100, Eugenio Perez Martin wrote:
> [...]
>
> So the sentence "VIRTIO_F_ORDER_PLATFORM has nothing to do with the
> device and everything to do with QEMU itself" is actually the reverse:
> it has everything to do with devices?

The point is that this is not the device's decision.

> > Yes, we can allow VIRTIO_F_ORDER_PLATFORM from the kernel, but given we
> > never did, at this point it would need a protocol feature bit. I don't
> > think it's worth it...
>
> With "from the kernel", do you mean in vhost-kernel or in the virtio
> ring driver? The virtio ring driver already supports it.

vhost-kernel.

> I'm OK with leaving this for the future, but that means hw devices on
> non-x86 platforms may not work correctly, right?
>
> Thanks!

You need to pass this to the guest. My point is that there is no reason
to get it from the kernel driver. QEMU can figure out whether the flag
is needed itself.
On Wed, Mar 1, 2023 at 10:37 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> [...]
>
> You need to pass this to the guest. My point is that there is no reason
> to get it from the kernel driver. QEMU can figure out whether the flag
> is needed itself.
>

Ok, I can see now how the HW device does not have all the knowledge to
offer this flag or not. But I'm not sure how QEMU can know either.

If QEMU opens /dev/vhost-vdpa-N, how can it know? It has no way to tell
whether the device is sw or hw, as far as I know. Am I missing
something?

Thanks!
On Thu, Mar 02, 2023 at 12:30:52PM +0100, Eugenio Perez Martin wrote:
> > You need to pass this to the guest. My point is that there is no reason
> > to get it from the kernel driver. QEMU can figure out whether the flag
> > is needed itself.
> >
>
> Ok, I can see now how the HW device does not have all the knowledge to
> offer this flag or not. But I'm not sure how QEMU can know either.
>
> If QEMU opens /dev/vhost-vdpa-N, how can it know? It has no way to tell
> whether the device is sw or hw, as far as I know. Am I missing
> something?
>
> Thanks!

This is what I said earlier. You can safely assume vdpa needs this flag.
The only exception is vduse, and we don't care about performance there.
On Thu, Mar 2, 2023 at 12:43 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> [...]
>
> This is what I said earlier. You can safely assume vdpa needs this flag.
> The only exception is vduse, and we don't care about performance there.
>

Ok, now I get your point, thanks for explaining.

But I'm missing why it would be wrong to start offering it properly from
the kernel.

I didn't test vDPA on non-x86 / non-PCI platforms, but if it does not
work because of the lack of this feature flag, the right fix would be to
offer it, not to start assuming it in QEMU, wouldn't it?

I can see how "assume VIRTIO_F_ORDER_PLATFORM from QEMU" may need code
comments and extra explanations, while offering it properly from the
device is somewhat expected.

Thanks!
On Thu, Mar 02, 2023 at 03:47:48PM +0100, Eugenio Perez Martin wrote:
> [...]
>
> I didn't test vDPA on non-x86 / non-PCI platforms, but if it does not
> work because of the lack of this feature flag, the right fix would be to
> offer it, not to start assuming it in QEMU, wouldn't it?
>
> I can see how "assume VIRTIO_F_ORDER_PLATFORM from QEMU" may need code
> comments and extra explanations, while offering it properly from the
> device is somewhat expected.
>
> Thanks!

Does the kernel always expose it?
On Fri, Mar 3, 2023 at 12:31 AM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> [...]
>
> Does the kernel always expose it?
>

As far as I know, the only vdpa device exposing it is alibaba/eni_vdpa.
On Fri, Mar 03, 2023 at 10:08:17AM +0100, Eugenio Perez Martin wrote:
> [...]
>
> As far as I know, the only vdpa device exposing it is alibaba/eni_vdpa.

That is my point, then: QEMU should set it and ignore what the kernel
says.
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index 4307296358..6bb1998f12 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -34,6 +34,7 @@ bool vhost_svq_valid_features(uint64_t features, Error **errp)
         switch (b) {
         case VIRTIO_F_ANY_LAYOUT:
         case VIRTIO_RING_F_EVENT_IDX:
+        case VIRTIO_F_ORDER_PLATFORM:
             continue;

         case VIRTIO_F_ACCESS_PLATFORM:
VIRTIO_F_ORDER_PLATFORM indicates that memory accesses by the driver and
the device are ordered in a way described by the platform. Since vDPA
devices may be backed by hardware devices, let's allow
VIRTIO_F_ORDER_PLATFORM.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-shadow-virtqueue.c | 1 +
 1 file changed, 1 insertion(+)