Message ID | 20200822175254.1105377-1-robdclark@gmail.com
---|---
State | Accepted
Commit | e1bf29e022fb48eabe3c3db9ab981ed56307c69b
Series | drm/msm: drop cache sync hack
On Sat, Aug 22, 2020 at 10:52:54AM -0700, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
>
> Now that it isn't causing problems to use dma_map/unmap, we can drop the
> hack of using dma_sync in certain cases.

Great to see! What did solve the problems?
On Mon, Aug 24, 2020 at 11:52 PM Christoph Hellwig <hch@infradead.org> wrote:
>
> On Sat, Aug 22, 2020 at 10:52:54AM -0700, Rob Clark wrote:
> > From: Rob Clark <robdclark@chromium.org>
> >
> > Now that it isn't causing problems to use dma_map/unmap, we can drop the
> > hack of using dma_sync in certain cases.
>
> Great to see! What did solve the problems?

should be 0e764a01015dfebff8a8ffd297d74663772e248a ("iommu/arm-smmu: Allow
client devices to select direct mapping")

I still need to confirm whether qcom_iommu needs a similar thing, but I
think it is ok as the iommu phandle link is down one level on the 'mdp'
device, rather than attached to the toplevel drm device.

BR,
-R
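For context on the change Rob points at: the general idea of that arm-smmu commit is to give certain client devices an identity (direct-mapped) default IOMMU domain, so the DMA API stops installing its own IOMMU mappings behind the GPU driver's back and dma_map_sg()/dma_unmap_sg() reduce to cache maintenance. Below is a rough sketch of that mechanism only, with illustrative names and compatible strings rather than the exact code from commit 0e764a01015d:

```c
/*
 * Illustrative sketch -- not the exact code from commit 0e764a01015d.
 * An of_device_id table lists client devices that should get an
 * IOMMU_DOMAIN_IDENTITY (direct-mapped) default domain, selected via
 * a def_domain_type() hook.
 */
#include <linux/iommu.h>
#include <linux/of_device.h>

static const struct of_device_id example_smmu_client_of_match[] = {
	{ .compatible = "qcom,mdss" },		/* example entries only */
	{ .compatible = "qcom,adreno" },
	{ }
};

static int example_smmu_def_domain_type(struct device *dev)
{
	const struct of_device_id *match =
		of_match_device(example_smmu_client_of_match, dev);

	/* Identity domain => the DMA API uses direct mapping for this client */
	return match ? IOMMU_DOMAIN_IDENTITY : 0;
}
```

In the real driver this hook is wired into the SMMU implementation ops; whether qcom_iommu needs equivalent treatment is exactly the open question in Rob's reply above.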
```diff
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index b2f49152b4d4..3cb7aeb93fd3 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -52,26 +52,16 @@ static void sync_for_device(struct msm_gem_object *msm_obj)
 {
 	struct device *dev = msm_obj->base.dev->dev;
 
-	if (get_dma_ops(dev) && IS_ENABLED(CONFIG_ARM64)) {
-		dma_sync_sg_for_device(dev, msm_obj->sgt->sgl,
-			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
-	} else {
-		dma_map_sg(dev, msm_obj->sgt->sgl,
-			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
-	}
+	dma_map_sg(dev, msm_obj->sgt->sgl,
+			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
 }
 
 static void sync_for_cpu(struct msm_gem_object *msm_obj)
 {
 	struct device *dev = msm_obj->base.dev->dev;
 
-	if (get_dma_ops(dev) && IS_ENABLED(CONFIG_ARM64)) {
-		dma_sync_sg_for_cpu(dev, msm_obj->sgt->sgl,
-			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
-	} else {
-		dma_unmap_sg(dev, msm_obj->sgt->sgl,
-			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
-	}
+	dma_unmap_sg(dev, msm_obj->sgt->sgl,
+			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
 }
 
 /* allocate pages from VRAM carveout, used when no IOMMU: */
```
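As a usage note (hypothetical, not taken from msm_gem.c): with the hack gone the two helpers are a plain map/unmap pair, so a CPU access window on a buffer whose scatterlist is currently mapped for the device looks roughly like this:

```c
/*
 * Hypothetical sketch -- 'example_obj' and 'example_cpu_fill' are
 * illustrative stand-ins, not code from msm_gem.c.  It shows the
 * pairing the patch leaves behind: unmap before CPU access, map
 * again before handing the buffer back to the device.  Assumes the
 * sg table is currently mapped for the device.
 */
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/string.h>

struct example_obj {
	struct device *dev;
	struct sg_table *sgt;
};

static int example_cpu_fill(struct example_obj *obj, void *vaddr, size_t len)
{
	/* Return the pages to the CPU (CPU cache invalidate/clean). */
	dma_unmap_sg(obj->dev, obj->sgt->sgl, obj->sgt->nents,
		     DMA_BIDIRECTIONAL);

	memset(vaddr, 0, len);		/* CPU writes via a kernel mapping */

	/* Hand the pages back to the device (flushes CPU caches). */
	if (dma_map_sg(obj->dev, obj->sgt->sgl, obj->sgt->nents,
		       DMA_BIDIRECTIONAL) == 0)
		return -ENOMEM;		/* dma_map_sg() returns 0 on failure */

	return 0;
}
```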