[v7,7/9] drm/virtio: Improve DMA API usage for shmem BOs

Message ID 20220630200726.1884320-8-dmitry.osipenko@collabora.com (mailing list archive)
State New, archived
Series: VirtIO-GPU driver fixes and improvements

Commit Message

Dmitry Osipenko June 30, 2022, 8:07 p.m. UTC
The DRM API requires the DRM driver to be backed by a device that can be
used for generic DMA operations. The VirtIO-GPU device can't perform DMA
operations when it uses the PCI transport, because the PCI device driver
creates a virtual VirtIO-GPU device that isn't associated with the PCI
device. Use the PCI GPU device as the DRM device instead of the virtual
VirtIO-GPU device and drop the DMA-related hacks from the VirtIO-GPU
driver.
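
In other words, the DRM device is now allocated against the parent of
the virtio device rather than the virtio device itself:

  dev = drm_dev_alloc(&driver, vdev->dev.parent);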

Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.c    | 51 ++++++-----------------
 drivers/gpu/drm/virtio/virtgpu_drv.h    |  5 +--
 drivers/gpu/drm/virtio/virtgpu_kms.c    |  7 ++--
 drivers/gpu/drm/virtio/virtgpu_object.c | 55 +++++--------------------
 drivers/gpu/drm/virtio/virtgpu_vq.c     | 13 +++---
 5 files changed, 32 insertions(+), 99 deletions(-)

Comments

Gerd Hoffmann July 5, 2022, 1:53 p.m. UTC | #1
Hi,

> -	 * So for the moment keep things as-is, with a bulky comment
> -	 * for the next person who feels like removing this
> -	 * drm_dev_set_unique() quirk.

Dragons lurking here.  It's not the first attempt to ditch this, and so
far all of them have been rolled back due to regressions.  Specifically,
Xorg is notoriously picky if it doesn't find its expectations fulfilled.
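
For reference, the quirk in question manufactures a "pci:..." busid for
the DRM device (Xorg expects a "pci:..." unique rather than the virtio
"virtio0" one); it's this pair of lines in the patch below:

	snprintf(unique, sizeof(unique), "pci:%s", pname);
	return drm_dev_set_unique(dev, unique);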

Also note that pci is not the only virtio transport we have.

What kind of testing has this patch seen?

take care,
  Gerd
Dmitry Osipenko July 5, 2022, 2:27 p.m. UTC | #2
Hello Gerd,

On 7/5/22 16:53, Gerd Hoffmann wrote:
>   Hi,
> 
>> -	 * So for the moment keep things as-is, with a bulky comment
>> -	 * for the next person who feels like removing this
>> -	 * drm_dev_set_unique() quirk.
> 
> Dragons lurking here.  It's not the first attempt to ditch this, and so
> far all of them have been rolled back due to regressions.  Specifically,
> Xorg is notoriously picky if it doesn't find its expectations fulfilled.

I saw the previous attempt. Back then it was a mechanically created
patch that didn't get any testing.

> Also note that pci is not the only virtio transport we have.

VirtIO indeed has other transports, but only PCI is really supported in
the case of VirtIO-GPU, both in the kernel and in Qemu/crosvm, AFAICT.
Hence only the PCI transport was tested.

> What kind of testing has this patch seen?

Xorg and virgl work perfectly fine in Qemu (with and without IOMMU
enabled) and in crosvm (the ChromeOS virtual machine).
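
The vIOMMU case was enabled in Qemu roughly along these lines (the
exact machine options may differ):

qemu -M q35,kernel-irqchip=split \
  -device intel-iommu,intremap=on \
  -device virtio-gpu-pci,iommu_platform=on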
Gerd Hoffmann July 5, 2022, 3:45 p.m. UTC | #3
Hi,

> > Also note that pci is not the only virtio transport we have.
> 
> VirtIO indeed has other transports, but only PCI is really supported in
> the case of VirtIO-GPU, both in the kernel and in Qemu/crosvm, AFAICT.
> Hence only the PCI transport was tested.

qemu -M microvm \
  -global virtio-mmio.force-legacy=false \
  -device virtio-gpu-device

Gives you a functional virtio-gpu device on virtio-mmio.

aarch64 virt machines support both pci and mmio too.
s390x has virtio-gpu-ccw ...
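
Something along the lines of the following should work there, though I
haven't verified the exact invocation:

qemu-system-s390x -M s390-ccw-virtio \
  -device virtio-gpu-ccw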

take care,
  Gerd
Emil Velikov July 5, 2022, 4:03 p.m. UTC | #4
On 2022/07/05, Gerd Hoffmann wrote:
>   Hi,
> 
> > > Also note that pci is not the only virtio transport we have.
> > 
> > VirtIO indeed has other transports, but only PCI is really supported in
> > the case of VirtIO-GPU, both in the kernel and in Qemu/crosvm, AFAICT.
> > Hence only the PCI transport was tested.
> 
> qemu -M microvm \
>   -global virtio-mmio.force-legacy=false \
>   -device virtio-gpu-device
> 
> Gives you a functional virtio-gpu device on virtio-mmio.
> 
> aarch64 virt machines support both pci and mmio too.
> s390x has virtio-gpu-ccw ...
> 

As the last person who was there - the problem is indeed when using
virtio on top of mmio. If that's no longer supported by the kernel then
the hacky code-path can be dropped.

Even in that case, I would suggest keeping it a separate commit.

HTH
Emil
Dmitry Osipenko July 5, 2022, 5:02 p.m. UTC | #5
On 7/5/22 18:45, Gerd Hoffmann wrote:
>   Hi,
> 
>>> Also note that pci is not the only virtio transport we have.
>>
>> VirtIO indeed has other transports, but only PCI is really supported in
>> the case of VirtIO-GPU, both in the kernel and in Qemu/crosvm, AFAICT.
>> Hence only the PCI transport was tested.
> 
> qemu -M microvm \
>   -global virtio-mmio.force-legacy=false \
>   -device virtio-gpu-device
> 
> Gives you a functional virtio-gpu device on virtio-mmio.
> 
> aarch64 virt machines support both pci and mmio too.
> s390x has virtio-gpu-ccw ...

Gerd, thank you very much! It was indeed unclear to me how to test the
MMIO GPU, but your microvm variant works! I looked into trying
aarch64 in the past, but it was also unclear how to do that since there is
no DT support for the VirtIO-GPU, AFAICS.

I booted a kernel with this patchset applied and everything is okay; Xorg
works.

 [drm] Initialized virtio_gpu 0.1.0 0 for LNRO0005:01 on minor 0
 virtio-mmio LNRO0005:01: [drm] drm_plane_enable_fb_damage_clips() not called
 virtio-mmio LNRO0005:01: [drm] fb0: virtio_gpudrmfb frame buffer device

There is no virgl support because it's a virtio-gpu-device and not
virtio-gpu-device-gl that is PCI-only in Qemu. Hence everything seems good.

I'd appreciate it if you could give s390x a test. I've never touched s390x
and it will probably take some extra effort to get into it.
Emil Velikov July 5, 2022, 8:56 p.m. UTC | #6
On 2022/07/05, Dmitry Osipenko wrote:
> On 7/5/22 18:45, Gerd Hoffmann wrote:
> >   Hi,
> > 
> >>> Also note that pci is not the only virtio transport we have.
> >>
> >> VirtIO indeed has other transports, but only PCI is really supported in
> >> the case of VirtIO-GPU, both in the kernel and in Qemu/crosvm, AFAICT.
> >> Hence only the PCI transport was tested.
> > 
> > qemu -M microvm \
> >   -global virtio-mmio.force-legacy=false \
> >   -device virtio-gpu-device
> > 
> > Gives you a functional virtio-gpu device on virtio-mmio.
> > 
> > aarch64 virt machines support both pci and mmio too.
> > s390x has virtio-gpu-ccw ...
> 
> Gerd, thank you very much! It was indeed unclear to me how to test the
> MMIO GPU, but your microvm variant works! I looked into trying
> aarch64 in the past, but it was also unclear how to do that since there is
> no DT support for the VirtIO-GPU, AFAICS.
> 
> I booted a kernel with this patchset applied and everything is okay; Xorg
> works.
> 
>  [drm] Initialized virtio_gpu 0.1.0 0 for LNRO0005:01 on minor 0
>  virtio-mmio LNRO0005:01: [drm] drm_plane_enable_fb_damage_clips() not called
>  virtio-mmio LNRO0005:01: [drm] fb0: virtio_gpudrmfb frame buffer device
> 
> There is no virgl support because it's a virtio-gpu-device and not
> virtio-gpu-device-gl that is PCI-only in Qemu. Hence everything seems good.
> 
> I'd appreciate it if you could give s390x a test. I've never touched s390x
> and it will probably take some extra effort to get into it.
> 

Adding Laszlo Ersek, who debugged and tested this the last time.

Laszlo Ersek, do you have some tips for Dmitry? Xorg seems to be
working on his end with the drm_dev_set_unique(... "pci:...") call
removed.

Original patch can be found at:
https://lore.kernel.org/dri-devel/1380526d-17fb-6eb2-0fd5-5cddbdf0a92e@collabora.com/T/#mbc1a1bedc91d1855007188a725c5c75bbc771cf0

HTH
Emil
Rob Clark July 5, 2022, 9:39 p.m. UTC | #7
On Tue, Jul 5, 2022 at 10:02 AM Dmitry Osipenko
<dmitry.osipenko@collabora.com> wrote:
>
> On 7/5/22 18:45, Gerd Hoffmann wrote:
> >   Hi,
> >
> >>> Also note that pci is not the only virtio transport we have.
> >>
> >> VirtIO indeed has other transports, but only PCI is really supported in
> >> the case of VirtIO-GPU, both in the kernel and in Qemu/crosvm, AFAICT.
> >> Hence only the PCI transport was tested.
> >
> > qemu -M microvm \
> >   -global virtio-mmio.force-legacy=false \
> >   -device virtio-gpu-device
> >
> > Gives you a functional virtio-gpu device on virtio-mmio.
> >
> > aarch64 virt machines support both pci and mmio too.
> > s390x has virtio-gpu-ccw ...
>
> Gerd, thank you very much! It was indeed unclear to me how to test the
> MMIO GPU, but your microvm variant works! I looked into trying
> aarch64 in the past, but it was also unclear how to do that since there is
> no DT support for the VirtIO-GPU, AFAICS.

just a drive-by note, IME on aarch64 kernels, at least with crosvm,
virtgpu is also a pci device.. the non-pci things in the guest kernel
use dt, but devices on discoverable busses like pci don't need dt
nodes (which is true also in the non-vm case)

BR,
-R


> I booted a kernel with this patchset applied and everything is okay; Xorg
> works.
>
>  [drm] Initialized virtio_gpu 0.1.0 0 for LNRO0005:01 on minor 0
>  virtio-mmio LNRO0005:01: [drm] drm_plane_enable_fb_damage_clips() not called
>  virtio-mmio LNRO0005:01: [drm] fb0: virtio_gpudrmfb frame buffer device
>
> There is no virgl support because it's a virtio-gpu-device and not
> virtio-gpu-device-gl that is PCI-only in Qemu. Hence everything seems good.
>
> I'd appreciate it if you could give s390x a test. I've never touched s390x
> and it will probably take some extra effort to get into it.
>
> --
> Best regards,
> Dmitry
Dmitry Osipenko July 5, 2022, 11:08 p.m. UTC | #8
On 7/6/22 00:39, Rob Clark wrote:
> On Tue, Jul 5, 2022 at 10:02 AM Dmitry Osipenko
> <dmitry.osipenko@collabora.com> wrote:
>>
>> On 7/5/22 18:45, Gerd Hoffmann wrote:
>>>   Hi,
>>>
>>>>> Also note that pci is not the only virtio transport we have.
>>>>
>>>> VirtIO indeed has other transports, but only PCI is really supported in
>>>> the case of VirtIO-GPU, both in the kernel and in Qemu/crosvm, AFAICT.
>>>> Hence only the PCI transport was tested.
>>>
>>> qemu -M microvm \
>>>   -global virtio-mmio.force-legacy=false \
>>>   -device virtio-gpu-device
>>>
>>> Gives you a functional virtio-gpu device on virtio-mmio.
>>>
>>> aarch64 virt machines support both pci and mmio too.
>>> s390x has virtio-gpu-ccw ...
>>
>> Gerd, thank you very much! It was indeed unclear to me how to test the
>> MMIO GPU, but your microvm variant works! I looked into trying
>> aarch64 in the past, but it was also unclear how to do that since there is
>> no DT support for the VirtIO-GPU, AFAICS.
> 
> just a drive-by note, IME on aarch64 kernels, at least with crosvm,
> virtgpu is also a pci device.. the non-pci things in the guest kernel
> use dt, but devices on discoverable busses like pci don't need dt
> nodes (which is true also in the non-vm case)

Sure, I was only looking at how to test the MMIO GPU on aarch64. Since I
hadn't found a way to test MMIO back then, I concluded that the MMIO
case wasn't really well supported.
Gerd Hoffmann July 6, 2022, 7:13 a.m. UTC | #9
Hi,

> Gerd, thank you very much! It was indeed unclear to me how to test the
> MMIO GPU, but your microvm variant works! I looked into trying
> aarch64 in the past, but it was also unclear how to do that since there is
> no DT support for the VirtIO-GPU, AFAICS.

aarch64 uses acpi by default (can be disabled via 'qemu -no-acpi').
Not fully sure about arm(v7).

Even with DT it should work because DT only describes the virtio-mmio
'slots', not the actual virtio devices.

> There is no virgl support because it's a virtio-gpu-device and not
> virtio-gpu-device-gl that is PCI-only in Qemu. Hence everything seems good.

It's named 'virtio-gpu-gl-device'
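
i.e. for the microvm example above, presumably something like this
(virgl additionally needs a GL-capable display backend):

qemu -M microvm \
  -global virtio-mmio.force-legacy=false \
  -device virtio-gpu-gl-device \
  -display gtk,gl=on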

take care,
  Gerd
Dmitry Osipenko July 6, 2022, 7:22 a.m. UTC | #10
On 7/6/22 10:13, Gerd Hoffmann wrote:
>   Hi,
> 
>> Gerd, thank you very much! It was indeed unclear to me how to test the
>> MMIO GPU, but your microvm variant works! I looked into trying
>> aarch64 in the past, but it was also unclear how to do that since there is
>> no DT support for the VirtIO-GPU, AFAICS.
> 
> aarch64 uses acpi by default (can be disabled via 'qemu -no-acpi').
> Not fully sure about arm(v7).
> 
> Even with DT it should work because DT only describes the virtio-mmio
> 'slots', not the actual virtio devices.
> 
>> There is no virgl support because it's a virtio-gpu-device and not
>> virtio-gpu-device-gl that is PCI-only in Qemu. Hence everything seems good.
> 
> It's named 'virtio-gpu-gl-device'

Ah, thanks again! Just quickly tested virtio-gpu-gl-device and
everything works too for MMIO GPU on microvm, including virgl and Xorg
(glamor).

[drm] features: +virgl +edid -resource_blob -host_visible
[drm] features: -context_init
[drm] number of scanouts: 1
[drm] number of cap sets: 2
[drm] cap set 0: id 1, max-version 1, max-size 308
[drm] cap set 1: id 2, max-version 2, max-size 696
[drm] Initialized virtio_gpu 0.1.0 0 for LNRO0005:01 on minor 0
virtio-mmio LNRO0005:01: [drm] drm_plane_enable_fb_damage_clips() not called
Laszlo Ersek July 6, 2022, 10:30 a.m. UTC | #11
Hi Emil,

On 07/05/22 22:56, Emil Velikov wrote:
> On 2022/07/05, Dmitry Osipenko wrote:
>> On 7/5/22 18:45, Gerd Hoffmann wrote:
>>>   Hi,
>>>
>>>>> Also note that pci is not the only virtio transport we have.
>>>>
>>>> VirtIO indeed has other transports, but only PCI is really supported in
>>>> the case of VirtIO-GPU, both in the kernel and in Qemu/crosvm, AFAICT.
>>>> Hence only the PCI transport was tested.
>>>
>>> qemu -M microvm \
>>>   -global virtio-mmio.force-legacy=false \
>>>   -device virtio-gpu-device
>>>
>>> Gives you a functional virtio-gpu device on virtio-mmio.
>>>
>>> aarch64 virt machines support both pci and mmio too.
>>> s390x has virtio-gpu-ccw ...
>>
>> Gerd, thank you very much! It was indeed unclear to me how to test the
>> MMIO GPU, but your microvm variant works! I looked into trying
>> aarch64 in the past, but it was also unclear how to do that since there is
>> no DT support for the VirtIO-GPU, AFAICS.
>>
>> I booted a kernel with this patchset applied and everything is okay; Xorg
>> works.
>>
>>  [drm] Initialized virtio_gpu 0.1.0 0 for LNRO0005:01 on minor 0
>>  virtio-mmio LNRO0005:01: [drm] drm_plane_enable_fb_damage_clips() not called
>>  virtio-mmio LNRO0005:01: [drm] fb0: virtio_gpudrmfb frame buffer device
>>
>> There is no virgl support because it's a virtio-gpu-device and not
>> virtio-gpu-device-gl that is PCI-only in Qemu. Hence everything seems good.
>>
>> I'd appreciate it if you could give s390x a test. I've never touched s390x
>> and it will probably take some extra effort to get into it.
>>
> 
> Adding Laszlo Ersek, who debugged and tested this the last time.
> 
> Laszlo Ersek, do you have some tips for Dmitry? Xorg seems to be
> working on his end with the drm_dev_set_unique(... "pci:...") call
> removed.
> 
> Original patch can be found at:
> https://lore.kernel.org/dri-devel/1380526d-17fb-6eb2-0fd5-5cddbdf0a92e@collabora.com/T/#mbc1a1bedc91d1855007188a725c5c75bbc771cf0

thanks for recalling this, but... I've moved to different projects, and
I'm already scraping the bottom of the barrel for every chunk of time I
can find :(

Laszlo
Gerd Hoffmann July 19, 2022, 10:31 a.m. UTC | #12
On Wed, Jul 06, 2022 at 10:22:52AM +0300, Dmitry Osipenko wrote:
> On 7/6/22 10:13, Gerd Hoffmann wrote:
> >   Hi,
> > 
> >> Gerd, thank you very much! It was indeed unclear to me how to test the
> >> MMIO GPU, but your microvm variant works! I looked into trying
> >> aarch64 in the past, but it was also unclear how to do that since there is
> >> no DT support for the VirtIO-GPU, AFAICS.
> > 
> > aarch64 uses acpi by default (can be disabled via 'qemu -no-acpi').
> > Not fully sure about arm(v7).
> > 
> > Even with DT it should work because DT only describes the virtio-mmio
> > 'slots', not the actual virtio devices.
> > 
> >> There is no virgl support because it's a virtio-gpu-device and not
> >> virtio-gpu-device-gl that is PCI-only in Qemu. Hence everything seems good.
> > 
> > It's named 'virtio-gpu-gl-device'
> 
> Ah, thanks again! Just quickly tested virtio-gpu-gl-device and
> everything works too for MMIO GPU on microvm, including virgl and Xorg
> (glamor).
> 
> [drm] features: +virgl +edid -resource_blob -host_visible
> [drm] features: -context_init
> [drm] number of scanouts: 1
> [drm] number of cap sets: 2
> [drm] cap set 0: id 1, max-version 1, max-size 308
> [drm] cap set 1: id 2, max-version 2, max-size 696
> [drm] Initialized virtio_gpu 0.1.0 0 for LNRO0005:01 on minor 0
> virtio-mmio LNRO0005:01: [drm] drm_plane_enable_fb_damage_clips() not called

Cool.  Haven't found the time to try s390x, so I'm taking that as a good
enough test that we don't have any unnoticed dependencies on pci.

Queued up.  I'll go over a few more pending patches, and assuming no
issues show up in testing this should land in drm-misc-next in a few
hours.

take care,
  Gerd
Dmitry Osipenko July 19, 2022, 11:30 a.m. UTC | #13
On 7/19/22 13:31, Gerd Hoffmann wrote:
> On Wed, Jul 06, 2022 at 10:22:52AM +0300, Dmitry Osipenko wrote:
>> On 7/6/22 10:13, Gerd Hoffmann wrote:
>>>   Hi,
>>>
>>>> Gerd, thank you very much! It was indeed unclear to me how to test the
>>>> MMIO GPU, but your microvm variant works! I looked into trying
>>>> aarch64 in the past, but it was also unclear how to do that since there is
>>>> no DT support for the VirtIO-GPU, AFAICS.
>>>
>>> aarch64 uses acpi by default (can be disabled via 'qemu -no-acpi').
>>> Not fully sure about arm(v7).
>>>
>>> Even with DT it should work because DT only describes the virtio-mmio
>>> 'slots', not the actual virtio devices.
>>>
>>>> There is no virgl support because it's a virtio-gpu-device and not
>>>> virtio-gpu-device-gl that is PCI-only in Qemu. Hence everything seems good.
>>>
>>> It's named 'virtio-gpu-gl-device'
>>
>> Ah, thanks again! Just quickly tested virtio-gpu-gl-device and
>> everything works too for MMIO GPU on microvm, including virgl and Xorg
>> (glamor).
>>
>> [drm] features: +virgl +edid -resource_blob -host_visible
>> [drm] features: -context_init
>> [drm] number of scanouts: 1
>> [drm] number of cap sets: 2
>> [drm] cap set 0: id 1, max-version 1, max-size 308
>> [drm] cap set 1: id 2, max-version 2, max-size 696
>> [drm] Initialized virtio_gpu 0.1.0 0 for LNRO0005:01 on minor 0
>> virtio-mmio LNRO0005:01: [drm] drm_plane_enable_fb_damage_clips() not called
> 
> Cool.  Haven't found the time to try s390x, so I'm taking that as a good
> enough test that we don't have any unnoticed dependencies on pci.
> 
> Queued up.  I'll go over a few more pending patches, and assuming no
> issues show up in testing this should land in drm-misc-next in a few
> hours.

Great, thank you.

Patch

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
index 5f25a8d15464..0141b7df97ec 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
@@ -46,12 +46,11 @@  static int virtio_gpu_modeset = -1;
 MODULE_PARM_DESC(modeset, "Disable/Enable modesetting");
 module_param_named(modeset, virtio_gpu_modeset, int, 0400);
 
-static int virtio_gpu_pci_quirk(struct drm_device *dev, struct virtio_device *vdev)
+static int virtio_gpu_pci_quirk(struct drm_device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(vdev->dev.parent);
+	struct pci_dev *pdev = to_pci_dev(dev->dev);
 	const char *pname = dev_name(&pdev->dev);
 	bool vga = (pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA;
-	char unique[20];
 	int ret;
 
 	DRM_INFO("pci: %s detected at %s\n",
@@ -63,39 +62,7 @@  static int virtio_gpu_pci_quirk(struct drm_device *dev, struct virtio_device *vd
 			return ret;
 	}
 
-	/*
-	 * Normally the drm_dev_set_unique() call is done by core DRM.
-	 * The following comment covers, why virtio cannot rely on it.
-	 *
-	 * Unlike the other virtual GPU drivers, virtio abstracts the
-	 * underlying bus type by using struct virtio_device.
-	 *
-	 * Hence the dev_is_pci() check, used in core DRM, will fail
-	 * and the unique returned will be the virtio_device "virtio0",
-	 * while a "pci:..." one is required.
-	 *
-	 * A few other ideas were considered:
-	 * - Extend the dev_is_pci() check [in drm_set_busid] to
-	 *   consider virtio.
-	 *   Seems like a bigger hack than what we have already.
-	 *
-	 * - Point drm_device::dev to the parent of the virtio_device
-	 *   Semantic changes:
-	 *   * Using the wrong device for i2c, framebuffer_alloc and
-	 *     prime import.
-	 *   Visual changes:
-	 *   * Helpers such as DRM_DEV_ERROR, dev_info, drm_printer,
-	 *     will print the wrong information.
-	 *
-	 * We could address the latter issues, by introducing
-	 * drm_device::bus_dev, ... which would be used solely for this.
-	 *
-	 * So for the moment keep things as-is, with a bulky comment
-	 * for the next person who feels like removing this
-	 * drm_dev_set_unique() quirk.
-	 */
-	snprintf(unique, sizeof(unique), "pci:%s", pname);
-	return drm_dev_set_unique(dev, unique);
+	return 0;
 }
 
 static int virtio_gpu_probe(struct virtio_device *vdev)
@@ -109,18 +76,24 @@  static int virtio_gpu_probe(struct virtio_device *vdev)
 	if (virtio_gpu_modeset == 0)
 		return -EINVAL;
 
-	dev = drm_dev_alloc(&driver, &vdev->dev);
+	/*
+	 * The virtio-gpu device is a virtual device that doesn't have DMA
+	 * ops assigned to it, nor a DMA mask set, etc. Its parent is the
+	 * actual GPU device, and we want to use it as the DRM device in
+	 * order to benefit from the generic DRM APIs.
+	 */
+	dev = drm_dev_alloc(&driver, vdev->dev.parent);
 	if (IS_ERR(dev))
 		return PTR_ERR(dev);
 	vdev->priv = dev;
 
 	if (!strcmp(vdev->dev.parent->bus->name, "pci")) {
-		ret = virtio_gpu_pci_quirk(dev, vdev);
+		ret = virtio_gpu_pci_quirk(dev);
 		if (ret)
 			goto err_free;
 	}
 
-	ret = virtio_gpu_init(dev);
+	ret = virtio_gpu_init(vdev, dev);
 	if (ret)
 		goto err_free;
 
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index f80664cf98d0..9b98470593b0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -101,8 +101,6 @@  struct virtio_gpu_object {
 
 struct virtio_gpu_object_shmem {
 	struct virtio_gpu_object base;
-	struct sg_table *pages;
-	uint32_t mapped;
 };
 
 struct virtio_gpu_object_vram {
@@ -215,7 +213,6 @@  struct virtio_gpu_drv_cap_cache {
 };
 
 struct virtio_gpu_device {
-	struct device *dev;
 	struct drm_device *ddev;
 
 	struct virtio_device *vdev;
@@ -283,7 +280,7 @@  extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS];
 void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file);
 
 /* virtgpu_kms.c */
-int virtio_gpu_init(struct drm_device *dev);
+int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev);
 void virtio_gpu_deinit(struct drm_device *dev);
 void virtio_gpu_release(struct drm_device *dev);
 int virtio_gpu_driver_open(struct drm_device *dev, struct drm_file *file);
diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index 3313b92db531..0d1e3eb61bee 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -110,7 +110,7 @@  static void virtio_gpu_get_capsets(struct virtio_gpu_device *vgdev,
 	vgdev->num_capsets = num_capsets;
 }
 
-int virtio_gpu_init(struct drm_device *dev)
+int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
 {
 	static vq_callback_t *callbacks[] = {
 		virtio_gpu_ctrl_ack, virtio_gpu_cursor_ack
@@ -123,7 +123,7 @@  int virtio_gpu_init(struct drm_device *dev)
 	u32 num_scanouts, num_capsets;
 	int ret = 0;
 
-	if (!virtio_has_feature(dev_to_virtio(dev->dev), VIRTIO_F_VERSION_1))
+	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
 		return -ENODEV;
 
 	vgdev = kzalloc(sizeof(struct virtio_gpu_device), GFP_KERNEL);
@@ -132,8 +132,7 @@  int virtio_gpu_init(struct drm_device *dev)
 
 	vgdev->ddev = dev;
 	dev->dev_private = vgdev;
-	vgdev->vdev = dev_to_virtio(dev->dev);
-	vgdev->dev = dev->dev;
+	vgdev->vdev = vdev;
 
 	spin_lock_init(&vgdev->display_info_lock);
 	spin_lock_init(&vgdev->resource_export_lock);
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 62b4d075cfac..8d7728181de0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -67,21 +67,6 @@  void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
 
 	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
 	if (virtio_gpu_is_shmem(bo)) {
-		struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
-
-		if (shmem->pages) {
-			if (shmem->mapped) {
-				dma_unmap_sgtable(vgdev->vdev->dev.parent,
-					     shmem->pages, DMA_TO_DEVICE, 0);
-				shmem->mapped = 0;
-			}
-
-			sg_free_table(shmem->pages);
-			kfree(shmem->pages);
-			shmem->pages = NULL;
-			drm_gem_shmem_unpin(&bo->base);
-		}
-
 		drm_gem_shmem_free(&bo->base);
 	} else if (virtio_gpu_is_vram(bo)) {
 		struct virtio_gpu_object_vram *vram = to_virtio_gpu_vram(bo);
@@ -153,36 +138,18 @@  static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 					unsigned int *nents)
 {
 	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
-	struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
 	struct scatterlist *sg;
-	int si, ret;
+	struct sg_table *pages;
+	int si;
 
-	ret = drm_gem_shmem_pin(&bo->base);
-	if (ret < 0)
-		return -EINVAL;
-
-	/*
-	 * virtio_gpu uses drm_gem_shmem_get_sg_table instead of
-	 * drm_gem_shmem_get_pages_sgt because virtio has it's own set of
-	 * dma-ops. This is discouraged for other drivers, but should be fine
-	 * since virtio_gpu doesn't support dma-buf import from other devices.
-	 */
-	shmem->pages = drm_gem_shmem_get_sg_table(&bo->base);
-	if (IS_ERR(shmem->pages)) {
-		drm_gem_shmem_unpin(&bo->base);
-		shmem->pages = NULL;
-		return PTR_ERR(shmem->pages);
-	}
+	pages = drm_gem_shmem_get_pages_sgt(&bo->base);
+	if (IS_ERR(pages))
+		return PTR_ERR(pages);
 
-	if (use_dma_api) {
-		ret = dma_map_sgtable(vgdev->vdev->dev.parent,
-				      shmem->pages, DMA_TO_DEVICE, 0);
-		if (ret)
-			return ret;
-		*nents = shmem->mapped = shmem->pages->nents;
-	} else {
-		*nents = shmem->pages->orig_nents;
-	}
+	if (use_dma_api)
+		*nents = pages->nents;
+	else
+		*nents = pages->orig_nents;
 
 	*ents = kvmalloc_array(*nents,
 			       sizeof(struct virtio_gpu_mem_entry),
@@ -193,13 +160,13 @@  static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 	}
 
 	if (use_dma_api) {
-		for_each_sgtable_dma_sg(shmem->pages, sg, si) {
+		for_each_sgtable_dma_sg(pages, sg, si) {
 			(*ents)[si].addr = cpu_to_le64(sg_dma_address(sg));
 			(*ents)[si].length = cpu_to_le32(sg_dma_len(sg));
 			(*ents)[si].padding = 0;
 		}
 	} else {
-		for_each_sgtable_sg(shmem->pages, sg, si) {
+		for_each_sgtable_sg(pages, sg, si) {
 			(*ents)[si].addr = cpu_to_le64(sg_phys(sg));
 			(*ents)[si].length = cpu_to_le32(sg->length);
 			(*ents)[si].padding = 0;
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 1262fd0b3bef..ee84256946ab 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -595,11 +595,10 @@  void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
 	struct virtio_gpu_transfer_to_host_2d *cmd_p;
 	struct virtio_gpu_vbuffer *vbuf;
 	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
-	struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
 
 	if (virtio_gpu_is_shmem(bo) && use_dma_api)
-		dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
-					    shmem->pages, DMA_TO_DEVICE);
+		dma_sync_sgtable_for_device(&vgdev->vdev->dev,
+					    bo->base.sgt, DMA_TO_DEVICE);
 
 	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
 	memset(cmd_p, 0, sizeof(*cmd_p));
@@ -1019,11 +1018,9 @@  void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
 	struct virtio_gpu_vbuffer *vbuf;
 	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
 
-	if (virtio_gpu_is_shmem(bo) && use_dma_api) {
-		struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
-		dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
-					    shmem->pages, DMA_TO_DEVICE);
-	}
+	if (virtio_gpu_is_shmem(bo) && use_dma_api)
+		dma_sync_sgtable_for_device(&vgdev->vdev->dev,
+					    bo->base.sgt, DMA_TO_DEVICE);
 
 	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
 	memset(cmd_p, 0, sizeof(*cmd_p));