
[3/3] drm/vmwgfx: Use coherent memory if there are dma mapping size restrictions

Message ID 20191114105645.41578-4-thomas_os@shipmail.org (mailing list archive)
State New, archived
Series drm/vmwgfx: Clean- and fix up DMA mode selection

Commit Message

Thomas Hellström (Intel) Nov. 14, 2019, 10:56 a.m. UTC
From: Thomas Hellstrom <thellstrom@vmware.com>

We're gradually moving towards using DMA coherent memory in most
situations, although TTM's interaction with the DMA layer is still a
work in progress. Meanwhile, use coherent memory when there are DMA
mapping size restrictions, meaning there is a chance that streaming
DMA mappings of large buffer objects may fail.
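
As a minimal sketch of the check this patch relies on (the helper name
below is hypothetical; the patch itself open-codes the comparison in
vmw_dma_select_mode()):

#include <linux/dma-mapping.h>

/*
 * Hypothetical helper, for illustration only: dma_max_mapping_size()
 * returns SIZE_MAX when the DMA layer imposes no per-mapping size limit.
 * Any smaller value (typically when SWIOTLB is active) means a streaming
 * mapping of a large buffer object could fail, so coherent memory is the
 * safer choice.
 */
static bool vmw_dma_size_restricted(struct device *dev)
{
	return dma_max_mapping_size(dev) != SIZE_MAX;
}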

Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
---
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

Comments

Thomas Hellström (Intel) Nov. 14, 2019, 12:42 p.m. UTC | #1
On 11/14/19 1:40 PM, Christoph Hellwig wrote:
> On Thu, Nov 14, 2019 at 11:56:45AM +0100, Thomas Hellström (VMware) wrote:
>> From: Thomas Hellstrom <thellstrom@vmware.com>
>>
>> We're gradually moving towards using DMA coherent memory in most
>> situations, although TTM's interaction with the DMA layer is still a
>> work in progress. Meanwhile, use coherent memory when there are DMA
>> mapping size restrictions, meaning there is a chance that streaming
>> DMA mappings of large buffer objects may fail.
> Unfortunately that dma mapping size check really is completely
> broken.  For example the sparc32 iommus have mapping size limitations
> (which we just haven't wired up yet), but will never bounce buffer.
>
> Let me cook up a real API for you instead.  dma_addressing_limited()
> is fundamentally the right call for this, we just need to make it
> handle the corner cases you mentioned in reply to the last version of
> your patch.

Sounds great!

Thanks,

Thomas
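
For reference, a rough sketch of what the check could become once
dma_addressing_limited() handles those corner cases (an assumption about
the follow-up API, not code from this series):

	/*
	 * Assumed future form: dma_addressing_limited() returns true when
	 * the device's DMA mask cannot address all of system memory, which
	 * would also cover the SWIOTLB bounce-buffering case the current
	 * dma_max_mapping_size() test is standing in for.
	 */
	if (dma_addressing_limited(dev_priv->dev->dev) || vmw_force_coherent)
		dev_priv->map_mode = vmw_dma_alloc_coherent;
	else if (vmw_restrict_iommu)
		dev_priv->map_mode = vmw_dma_map_bind;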

Patch

diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
index 8d479a411cdd..24f8d88d4b28 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
@@ -565,7 +565,15 @@  static int vmw_dma_select_mode(struct vmw_private *dev_priv)
 		[vmw_dma_map_populate] = "Caching DMA mappings.",
 		[vmw_dma_map_bind] = "Giving up DMA mappings early."};
 
-	if (vmw_force_coherent)
+	/*
+	 * dma_max_mapping_size() != SIZE_MAX means something is going
+	 * on in the dma layer that the dma_map_bind or dma_map_populate modes
+	 * are not working well with, or haven't been tested with.
+	 * This typically happens when the SWIOTLB is active. Fall back to
+	 * coherent memory in those cases.
+	 */
+	if (dma_max_mapping_size(dev_priv->dev->dev) != SIZE_MAX ||
+	    vmw_force_coherent)
 		dev_priv->map_mode = vmw_dma_alloc_coherent;
 	else if (vmw_restrict_iommu)
 		dev_priv->map_mode = vmw_dma_map_bind;
@@ -668,10 +676,8 @@  static int vmw_driver_load(struct drm_device *dev, unsigned long chipset)
 		dev_priv->capabilities2 = vmw_read(dev_priv, SVGA_REG_CAP2);
 	}
 
-
-	ret = vmw_dma_select_mode(dev_priv);
-	if (unlikely(ret != 0)) {
-		DRM_INFO("Restricting capabilities due to IOMMU setup.\n");
+	if (vmw_dma_masks(dev_priv) || vmw_dma_select_mode(dev_priv)) {
+		DRM_WARN("Refusing DMA due to lack of DMA support.");
 		refuse_dma = true;
 	}
 
@@ -740,10 +746,6 @@  static int vmw_driver_load(struct drm_device *dev, unsigned long chipset)
 	if (dev_priv->capabilities & SVGA_CAP_CAP2_REGISTER)
 		vmw_print_capabilities2(dev_priv->capabilities2);
 
-	ret = vmw_dma_masks(dev_priv);
-	if (unlikely(ret != 0))
-		goto out_err0;
-
 	dma_set_max_seg_size(dev->dev, min_t(unsigned int, U32_MAX & PAGE_MASK,
 					     SCATTERLIST_MAX_SEGMENT));