
[v1,7/7] ui/spice: Create another texture with linear layout when gl=on is enabled

Message ID 20240120003013.1829757-8-vivek.kasireddy@intel.com (mailing list archive)
State New, archived
Series ui/spice: Enable gl=on option for non-local or remote clients

Commit Message

Kasireddy, Vivek Jan. 20, 2024, 12:30 a.m. UTC
Since most encoders (invoked by Spice) may not work with the tiled
memory associated with a texture, we need to create another texture
that has a linear memory layout and use that instead.

Note that there does not seem to be a direct way to tell the GL
implementation that a texture's backing memory needs to be linear.
Instead, we have to do it in a roundabout way: we first create a
tiled texture and obtain a dmabuf fd associated with it via EGL,
then create a memory object by importing that dmabuf fd, and finally
create a linear texture that uses the memory object as its storage.
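
To illustrate the roundabout path, here is a rough sketch (simplified
and untested; it assumes the GL_EXT_memory_object_fd extension is
available, and create_linear_texture() is a hypothetical helper, not
necessarily what surface_gl_create_texture_from_fd() in this series
does):

    /*
     * "fd" is a dmabuf fd previously exported from the tiled texture
     * via EGL (e.g. with egl_get_fd_for_texture()); "size", "width",
     * "height" and the GL_RGBA8 format must match the exported buffer.
     */
    static void create_linear_texture(int fd, GLuint64 size,
                                      GLsizei width, GLsizei height,
                                      GLuint *texture)
    {
        GLuint mem_obj = 0;

        /* Wrap the dmabuf fd in a GL memory object; on success, GL
         * takes ownership of fd and it must not be used afterwards. */
        glCreateMemoryObjectsEXT(1, &mem_obj);
        glImportMemoryFdEXT(mem_obj, size,
                            GL_HANDLE_TYPE_OPAQUE_FD_EXT, fd);

        /* Create a texture backed by the imported memory, explicitly
         * requesting a linear (non-tiled) layout. */
        glGenTextures(1, texture);
        glBindTexture(GL_TEXTURE_2D, *texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_TILING_EXT,
                        GL_LINEAR_TILING_EXT);
        glTexStorageMem2DEXT(GL_TEXTURE_2D, 1, GL_RGBA8,
                             width, height, mem_obj, 0);

        /* The memory object can be deleted once the texture storage
         * has been allocated from it. */
        glDeleteMemoryObjectsEXT(1, &mem_obj);
    }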

Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Frediano Ziglio <freddy77@gmail.com>
Cc: Dongwon Kim <dongwon.kim@intel.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
 ui/spice-display.c | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

Comments

Marc-André Lureau Jan. 22, 2024, 8:40 a.m. UTC | #1
Hi

On Sat, Jan 20, 2024 at 4:54 AM Vivek Kasireddy
<vivek.kasireddy@intel.com> wrote:
>
> Since most encoders (invoked by Spice) may not work with the tiled
> memory associated with a texture, we need to create another texture
> that has a linear memory layout and use that instead.
>
> Note that there does not seem to be a direct way to tell the GL
> implementation that a texture's backing memory needs to be linear.
> Instead, we have to do it in a roundabout way: we first create a
> tiled texture and obtain a dmabuf fd associated with it via EGL,
> then create a memory object by importing that dmabuf fd, and finally
> create a linear texture that uses the memory object as its storage.
>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
> Cc: Frediano Ziglio <freddy77@gmail.com>
> Cc: Dongwon Kim <dongwon.kim@intel.com>
> Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
> ---
>  ui/spice-display.c | 33 +++++++++++++++++++++++++++++++++
>  1 file changed, 33 insertions(+)
>
> diff --git a/ui/spice-display.c b/ui/spice-display.c
> index 08b4aec921..94cb378dbe 100644
> --- a/ui/spice-display.c
> +++ b/ui/spice-display.c
> @@ -893,6 +893,7 @@ static void spice_gl_switch(DisplayChangeListener *dcl,
>  {
>      SimpleSpiceDisplay *ssd = container_of(dcl, SimpleSpiceDisplay, dcl);
>      EGLint stride, fourcc;
> +    GLuint texture = 0;
>      int fd;
>
>      if (ssd->ds) {
> @@ -912,6 +913,38 @@ static void spice_gl_switch(DisplayChangeListener *dcl,
>              return;
>          }
>
> +        if (remote_client && surface_format(ssd->ds) != PIXMAN_r5g6b5) {

hmm

> +            /*
> +             * We really want to ensure that the memory layout of the texture
> +             * is linear; otherwise, the encoder's output may show corruption.
> +             */
> +            surface_gl_create_texture_from_fd(ssd->ds, fd, &texture);

What if the encoder actually supports tiled layout?

Shouldn't this conversion be done at the encoder level as necessary?

It's also strange to reuse an FD associated with a tiled texture for a
linear layout (I am uncomfortable with all this tbh)

> +
> +            /*
> +             * A successful return after glImportMemoryFdEXT() means that
> +             * the ownership of fd has been passed to GL. In other words,
> +             * the fd we got above should not be used anymore.
> +             */
> +            if (texture > 0) {
> +                fd = egl_get_fd_for_texture(texture,
> +                                            &stride, &fourcc,
> +                                            NULL);
> +                if (fd < 0) {

I suggest adding warnings or tracing, to help debug issues...
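
For example (an untested sketch, using the usual warn_report() helper
from qemu/error-report.h):

    if (fd < 0) {
        warn_report("spice: failed to export dmabuf fd for texture %u",
                    texture);
        ...
    }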

> +                    glDeleteTextures(1, &texture);
> +                    fd = egl_get_fd_for_texture(ssd->ds->texture,
> +                                                &stride, &fourcc,
> +                                                NULL);
> +                    if (fd < 0) {
> +                        surface_gl_destroy_texture(ssd->gls, ssd->ds);
> +                        return;
> +                    }
> +                } else {
> +                    surface_gl_destroy_texture(ssd->gls, ssd->ds);
> +                    ssd->ds->texture = texture;

Have you tried this series with virgl? (I doubt the renderer accepts
that the scanout texture is changed)

> +                }
> +            }
> +        }
> +
>          trace_qemu_spice_gl_surface(ssd->qxl.id,
>                                      surface_width(ssd->ds),
>                                      surface_height(ssd->ds),
> --
> 2.39.2
>
>
Kasireddy, Vivek Jan. 26, 2024, 7:36 a.m. UTC | #2
Hi Marc-Andre,

> 
> Hi
> 
> On Sat, Jan 20, 2024 at 4:54 AM Vivek Kasireddy
> <vivek.kasireddy@intel.com> wrote:
> >
> > Since most encoders (invoked by Spice) may not work with the tiled
> > memory associated with a texture, we need to create another texture
> > that has a linear memory layout and use that instead.
> >
> > Note that there does not seem to be a direct way to tell the GL
> > implementation that a texture's backing memory needs to be linear.
> > Instead, we have to do it in a roundabout way: we first create a
> > tiled texture and obtain a dmabuf fd associated with it via EGL,
> > then create a memory object by importing that dmabuf fd, and finally
> > create a linear texture that uses the memory object as its storage.
> >
> > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
> > Cc: Frediano Ziglio <freddy77@gmail.com>
> > Cc: Dongwon Kim <dongwon.kim@intel.com>
> > Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
> > ---
> >  ui/spice-display.c | 33 +++++++++++++++++++++++++++++++++
> >  1 file changed, 33 insertions(+)
> >
> > diff --git a/ui/spice-display.c b/ui/spice-display.c
> > index 08b4aec921..94cb378dbe 100644
> > --- a/ui/spice-display.c
> > +++ b/ui/spice-display.c
> > @@ -893,6 +893,7 @@ static void spice_gl_switch(DisplayChangeListener
> *dcl,
> >  {
> >      SimpleSpiceDisplay *ssd = container_of(dcl, SimpleSpiceDisplay, dcl);
> >      EGLint stride, fourcc;
> > +    GLuint texture = 0;
> >      int fd;
> >
> >      if (ssd->ds) {
> > @@ -912,6 +913,38 @@ static void spice_gl_switch(DisplayChangeListener
> *dcl,
> >              return;
> >          }
> >
> > +        if (remote_client && surface_format(ssd->ds) != PIXMAN_r5g6b5) {
> 
> hmm
> 
> > +            /*
> > +             * We really want to ensure that the memory layout of the texture
> > +             * is linear; otherwise, the encoder's output may show corruption.
> > +             */
> > +            surface_gl_create_texture_from_fd(ssd->ds, fd, &texture);
> 
> What if the encoder actually supports tiled layout?
Right, that would be true in most cases, as the render and encode hardware
are mostly compatible, and my patches are not required if Spice chooses a
hardware encoder. However, the choice of which encoder to use (hardware vs
software) is made dynamically and is internal to Spice at this point. Since
there is no easy way for Qemu to know whether the encoder chosen by Spice
supports any given tiling format, I chose to always use a linear layout, as
it is guaranteed to work with all types of encoders.

> 
> Shouldn't this conversion be done at the encoder level as necessary?
Yeah, that is probably the right place, but a software encoder like x264enc
is not going to know how to convert from the various tiled formats. That is
the specific case I am trying to address, given that it is not a problem for
hardware encoders. I guess we could add a Qemu ui/spice option to tweak this
behavior when launching the VM.

> 
> It's also strange to reuse an FD associated with a tiled texture for a
> linear layout (I am uncomfortable with all this tbh)
I have looked around, but there does not seem to be a way to request that
the GL implementation create a texture with a linear layout other than by
importing memory objects. As noted in the comments below, ownership of the
fd is taken over by the GL implementation as part of the import.

Also, I have started a conversation to figure out if there is any other way to
create linear textures more efficiently:
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27067
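
For reference, that ownership transfer also means a caller which still
needs the dmabuf fd after a successful import has to dup() it first.
A minimal, hypothetical sketch:

    int gl_fd = dup(fd);    /* keep the original fd usable */

    if (gl_fd >= 0) {
        glImportMemoryFdEXT(mem_obj, size,
                            GL_HANDLE_TYPE_OPAQUE_FD_EXT, gl_fd);
        /* gl_fd is now owned by GL; do not close() or reuse it here */
    }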

> 
> > +
> > +            /*
> > +             * A successful return after glImportMemoryFdEXT() means that
> > +             * the ownership of fd has been passed to GL. In other words,
> > +             * the fd we got above should not be used anymore.
> > +             */
> > +            if (texture > 0) {
> > +                fd = egl_get_fd_for_texture(texture,
> > +                                            &stride, &fourcc,
> > +                                            NULL);
> > +                if (fd < 0) {
> 
> I suggest adding warnings or tracing, to help debug issues...
Sure, will do that in v2.

> 
> > +                    glDeleteTextures(1, &texture);
> > +                    fd = egl_get_fd_for_texture(ssd->ds->texture,
> > +                                                &stride, &fourcc,
> > +                                                NULL);
> > +                    if (fd < 0) {
> > +                        surface_gl_destroy_texture(ssd->gls, ssd->ds);
> > +                        return;
> > +                    }
> > +                } else {
> > +                    surface_gl_destroy_texture(ssd->gls, ssd->ds);
> > +                    ssd->ds->texture = texture;
> 
> Have you tried this series with virgl? (I doubt the renderer accepts
> that the scanout texture is changed)
I have tried with virgl enabled, and it mostly works because my patches
don't affect virgl codepaths such as qemu_spice_gl_scanout_texture(),
which is where the texture/fd is set. My patches are only meant for the
virtio-vga device (with blob=false), where the guest desktop is rendered
using the llvmpipe DRI driver.

But your suspicion is right: if we are using virgl and Spice selects a
software encoder (such as x264enc), then the encoder's output will show
corruption, and we would not be able to fix it with the approach used in
this patch. As you implied, this is because virglrenderer owns the scanout
texture in that case, and it cannot be replaced inside the Qemu UI the way
we do in this patch. I am not sure how this problem can be solved if we
(the Qemu UI) cannot ask virgl or the GL implementation to create linear
textures.

Thanks,
Vivek

> 
> > +                }
> > +            }
> > +        }
> > +
> >          trace_qemu_spice_gl_surface(ssd->qxl.id,
> >                                      surface_width(ssd->ds),
> >                                      surface_height(ssd->ds),
> > --
> > 2.39.2
> >
> >
> 
> 
> --
> Marc-André Lureau

Patch

diff --git a/ui/spice-display.c b/ui/spice-display.c
index 08b4aec921..94cb378dbe 100644
--- a/ui/spice-display.c
+++ b/ui/spice-display.c
@@ -893,6 +893,7 @@  static void spice_gl_switch(DisplayChangeListener *dcl,
 {
     SimpleSpiceDisplay *ssd = container_of(dcl, SimpleSpiceDisplay, dcl);
     EGLint stride, fourcc;
+    GLuint texture = 0;
     int fd;
 
     if (ssd->ds) {
@@ -912,6 +913,38 @@  static void spice_gl_switch(DisplayChangeListener *dcl,
             return;
         }
 
+        if (remote_client && surface_format(ssd->ds) != PIXMAN_r5g6b5) {
+            /*
+             * We really want to ensure that the memory layout of the texture
+             * is linear; otherwise, the encoder's output may show corruption.
+             */
+            surface_gl_create_texture_from_fd(ssd->ds, fd, &texture);
+
+            /*
+             * A successful return after glImportMemoryFdEXT() means that
+             * the ownership of fd has been passed to GL. In other words,
+             * the fd we got above should not be used anymore.
+             */
+            if (texture > 0) {
+                fd = egl_get_fd_for_texture(texture,
+                                            &stride, &fourcc,
+                                            NULL);
+                if (fd < 0) {
+                    glDeleteTextures(1, &texture);
+                    fd = egl_get_fd_for_texture(ssd->ds->texture,
+                                                &stride, &fourcc,
+                                                NULL);
+                    if (fd < 0) {
+                        surface_gl_destroy_texture(ssd->gls, ssd->ds);
+                        return;
+                    }
+                } else {
+                    surface_gl_destroy_texture(ssd->gls, ssd->ds);
+                    ssd->ds->texture = texture;
+                }
+            }
+        }
+
         trace_qemu_spice_gl_surface(ssd->qxl.id,
                                     surface_width(ssd->ds),
                                     surface_height(ssd->ds),