Message ID | 20250320172956.168358-11-matthew.auld@intel.com (mailing list archive)
---|---
State | New, archived |
Series | Replace xe_hmm with gpusvm
On Thu, 2025-03-20 at 17:29 +0000, Matthew Auld wrote:
> If we are only reading the memory then from the device pov the direction
> can be DMA_TO_DEVICE. This aligns with the xe-userptr code. Using the
> most restrictive data direction to represent the access is normally a
> good idea.
>
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>

Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>

> ---
>  drivers/gpu/drm/drm_gpusvm.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
> index 48993cef4a74..7f1cf5492bba 100644
> --- a/drivers/gpu/drm/drm_gpusvm.c
> +++ b/drivers/gpu/drm/drm_gpusvm.c
> @@ -1355,6 +1355,8 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
>  	int err = 0;
>  	struct dev_pagemap *pagemap;
>  	struct drm_pagemap *dpagemap;
> +	enum dma_data_direction dma_dir = ctx->read_only ? DMA_TO_DEVICE :
> +							    DMA_BIDIRECTIONAL;
>
>  retry:
>  	hmm_range.notifier_seq = mmu_interval_read_begin(notifier);
> @@ -1459,7 +1461,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
>  				dpagemap->ops->device_map(dpagemap,
>  							  gpusvm->drm->dev,
>  							  page, order,
> -							  DMA_BIDIRECTIONAL);
> +							  dma_dir);
>  			if (dma_mapping_error(gpusvm->drm->dev,
>  					      range->dma_addr[j].addr)) {
>  				err = -EFAULT;
> @@ -1478,7 +1480,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
>  			addr = dma_map_page(gpusvm->drm->dev,
>  					    page, 0,
>  					    PAGE_SIZE << order,
> -					    DMA_BIDIRECTIONAL);
> +					    dma_dir);
>  			if (dma_mapping_error(gpusvm->drm->dev, addr)) {
>  				err = -EFAULT;
>  				goto err_unmap;
> @@ -1486,7 +1488,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
>
>  			range->dma_addr[j] = drm_pagemap_device_addr_encode
>  				(addr, DRM_INTERCONNECT_SYSTEM, order,
> -				 DMA_BIDIRECTIONAL);
> +				 dma_dir);
>  		}
>  		i += 1 << order;
>  		num_dma_mapped = i;
On Thu, Mar 20, 2025 at 05:29:59PM +0000, Matthew Auld wrote:
> If we are only reading the memory then from the device pov the direction
> can be DMA_TO_DEVICE. This aligns with the xe-userptr code. Using the
> most restrictive data direction to represent the access is normally a
> good idea.
>
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>

Reviewed-by: Matthew Brost <matthew.brost@intel.com>

> ---
>  drivers/gpu/drm/drm_gpusvm.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
> index 48993cef4a74..7f1cf5492bba 100644
> --- a/drivers/gpu/drm/drm_gpusvm.c
> +++ b/drivers/gpu/drm/drm_gpusvm.c
> @@ -1355,6 +1355,8 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
>  	int err = 0;
>  	struct dev_pagemap *pagemap;
>  	struct drm_pagemap *dpagemap;
> +	enum dma_data_direction dma_dir = ctx->read_only ? DMA_TO_DEVICE :
> +							    DMA_BIDIRECTIONAL;
>
>  retry:
>  	hmm_range.notifier_seq = mmu_interval_read_begin(notifier);
> @@ -1459,7 +1461,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
>  				dpagemap->ops->device_map(dpagemap,
>  							  gpusvm->drm->dev,
>  							  page, order,
> -							  DMA_BIDIRECTIONAL);
> +							  dma_dir);
>  			if (dma_mapping_error(gpusvm->drm->dev,
>  					      range->dma_addr[j].addr)) {
>  				err = -EFAULT;
> @@ -1478,7 +1480,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
>  			addr = dma_map_page(gpusvm->drm->dev,
>  					    page, 0,
>  					    PAGE_SIZE << order,
> -					    DMA_BIDIRECTIONAL);
> +					    dma_dir);
>  			if (dma_mapping_error(gpusvm->drm->dev, addr)) {
>  				err = -EFAULT;
>  				goto err_unmap;
> @@ -1486,7 +1488,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
>
>  			range->dma_addr[j] = drm_pagemap_device_addr_encode
>  				(addr, DRM_INTERCONNECT_SYSTEM, order,
> -				 DMA_BIDIRECTIONAL);
> +				 dma_dir);
>  		}
>  		i += 1 << order;
>  		num_dma_mapped = i;
> --
> 2.48.1
>
diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
index 48993cef4a74..7f1cf5492bba 100644
--- a/drivers/gpu/drm/drm_gpusvm.c
+++ b/drivers/gpu/drm/drm_gpusvm.c
@@ -1355,6 +1355,8 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
 	int err = 0;
 	struct dev_pagemap *pagemap;
 	struct drm_pagemap *dpagemap;
+	enum dma_data_direction dma_dir = ctx->read_only ? DMA_TO_DEVICE :
+							    DMA_BIDIRECTIONAL;
 
 retry:
 	hmm_range.notifier_seq = mmu_interval_read_begin(notifier);
@@ -1459,7 +1461,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
 				dpagemap->ops->device_map(dpagemap,
 							  gpusvm->drm->dev,
 							  page, order,
-							  DMA_BIDIRECTIONAL);
+							  dma_dir);
 			if (dma_mapping_error(gpusvm->drm->dev,
 					      range->dma_addr[j].addr)) {
 				err = -EFAULT;
@@ -1478,7 +1480,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
 			addr = dma_map_page(gpusvm->drm->dev,
 					    page, 0,
 					    PAGE_SIZE << order,
-					    DMA_BIDIRECTIONAL);
+					    dma_dir);
 			if (dma_mapping_error(gpusvm->drm->dev, addr)) {
 				err = -EFAULT;
 				goto err_unmap;
@@ -1486,7 +1488,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
 
 			range->dma_addr[j] = drm_pagemap_device_addr_encode
 				(addr, DRM_INTERCONNECT_SYSTEM, order,
-				 DMA_BIDIRECTIONAL);
+				 dma_dir);
 		}
 		i += 1 << order;
 		num_dma_mapped = i;
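All three hunks feed the same dma_dir value into the mapping path. As a rough, self-contained sketch of the pattern (not the drm_gpusvm code itself; the helper names below are invented for illustration): pick the most restrictive direction once from the read-only flag, then pass that same value to dma_map_page() and to the matching dma_unmap_page(), since the DMA API requires map and unmap to agree on size and direction.

#include <linux/dma-mapping.h>
#include <linux/mm.h>

static dma_addr_t sketch_map_one_page(struct device *dev, struct page *page,
				      unsigned int order, bool read_only)
{
	/* Device only reads a read-only mapping, so DMA_TO_DEVICE suffices. */
	enum dma_data_direction dir = read_only ? DMA_TO_DEVICE :
						  DMA_BIDIRECTIONAL;

	return dma_map_page(dev, page, 0, PAGE_SIZE << order, dir);
}

static void sketch_unmap_one_page(struct device *dev, dma_addr_t addr,
				  unsigned int order, bool read_only)
{
	enum dma_data_direction dir = read_only ? DMA_TO_DEVICE :
						  DMA_BIDIRECTIONAL;

	/* Unmap must use the same size and direction as the map call. */
	dma_unmap_page(dev, addr, PAGE_SIZE << order, dir);
}

In the patch itself the chosen direction is also passed to drm_pagemap_device_addr_encode(), presumably so the unmap path can recover the same value later.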
If we are only reading the memory then from the device pov the direction
can be DMA_TO_DEVICE. This aligns with the xe-userptr code. Using the
most restrictive data direction to represent the access is normally a
good idea.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/drm_gpusvm.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
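For context, dma_data_direction values are named relative to system memory: DMA_TO_DEVICE means data only flows from memory to the device (the device reads), DMA_FROM_DEVICE means the device only writes memory, and DMA_BIDIRECTIONAL allows both, which is why a mapping the device will only read can use DMA_TO_DEVICE. A caller opts into the restrictive direction through the read_only flag the patch consults; the snippet below is a hypothetical caller sketch, assuming the drm_gpusvm_ctx / drm_gpusvm_range_get_pages interface from this series, with the VM_WRITE-based derivation invented for illustration.

#include <drm/drm_gpusvm.h>
#include <linux/mm.h>

/*
 * Hypothetical fault-handler snippet: request read-only pages when the
 * CPU VMA has no write permission, so drm_gpusvm can map them with
 * DMA_TO_DEVICE rather than DMA_BIDIRECTIONAL.
 */
static int fault_map_range(struct drm_gpusvm *gpusvm,
			   struct drm_gpusvm_range *range,
			   struct vm_area_struct *vma)
{
	struct drm_gpusvm_ctx ctx = {
		.read_only = !(vma->vm_flags & VM_WRITE),
	};

	return drm_gpusvm_range_get_pages(gpusvm, range, &ctx);
}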