Message ID | 20221027152723.381060-1-matthew.auld@intel.com (mailing list archive)
---|---
State | New, archived
Series | [1/2] drm/i915/dmabuf: fix sg_table handling in map_dma_buf
>-----Original Message-----
>From: Intel-gfx <intel-gfx-bounces@lists.freedesktop.org> On Behalf Of
>Matthew Auld
>Sent: Thursday, October 27, 2022 11:27 AM
>To: intel-gfx@lists.freedesktop.org
>Subject: [Intel-gfx] [PATCH 1/2] drm/i915/dmabuf: fix sg_table handling in
>map_dma_buf
>
>We need to iterate over the original entries here for the sg_table,
>pulling out the struct page for each one, to be remapped. However
>currently this incorrectly iterates over the final dma mapped entries,
>which is likely just one gigantic sg entry if the iommu is enabled,
>leading to us only mapping the first struct page (and any physically
>contiguous pages following it), even if there is potentially lots more
>data to follow.
>
>Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/7306
>Signed-off-by: Matthew Auld <matthew.auld@intel.com>
>Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
>Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
>Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
>---
> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
>diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>index 07eee1c09aaf..05ebbdfd3b3b 100644
>--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>@@ -40,13 +40,13 @@ static struct sg_table *i915_gem_map_dma_buf(struct
>dma_buf_attachment *attachme
> 		goto err;
> 	}
>
>-	ret = sg_alloc_table(st, obj->mm.pages->nents, GFP_KERNEL);
>+	ret = sg_alloc_table(st, obj->mm.pages->orig_nents, GFP_KERNEL);
> 	if (ret)
> 		goto err_free;
>
> 	src = obj->mm.pages->sgl;
> 	dst = st->sgl;
>-	for (i = 0; i < obj->mm.pages->nents; i++) {
>+	for (i = 0; i < obj->mm.pages->orig_nents; i++) {

This really should use the for_each_sg() macro.
I proposed a clean up patch a while back that looked like this:

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index e2cdc2640c08..ccc5d46aa749 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -7,6 +7,7 @@
 #include <linux/dma-buf.h>
 #include <linux/highmem.h>
 #include <linux/dma-resv.h>
+#include <linux/scatterlist.h>

 #include "gem/i915_gem_dmabuf.h"
 #include "i915_drv.h"
@@ -41,12 +42,10 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attach,
 	if (ret)
 		goto err_free;

-	src = obj->mm.pages->sgl;
 	dst = sgt->sgl;
-	for (i = 0; i < obj->mm.pages->nents; i++) {
+	for_each_sg(obj->mm.pages->sgl, src, obj->mm.pages->nents, i) {
 		sg_set_page(dst, sg_page(src), src->length, 0);
 		dst = sg_next(dst);
-		src = sg_next(src);
 	}

If you are updating the for loop, this might be a reasonable update as well.

Mike

> 	sg_set_page(dst, sg_page(src), src->length, 0);
> 	dst = sg_next(dst);
> 	src = sg_next(src);
>--
>2.37.3
On 28/10/2022 14:55, Ruhl, Michael J wrote:
>> -----Original Message-----
>> From: Intel-gfx <intel-gfx-bounces@lists.freedesktop.org> On Behalf Of
>> Matthew Auld
>> Sent: Thursday, October 27, 2022 11:27 AM
>> To: intel-gfx@lists.freedesktop.org
>> Subject: [Intel-gfx] [PATCH 1/2] drm/i915/dmabuf: fix sg_table handling in
>> map_dma_buf
>>
>> We need to iterate over the original entries here for the sg_table,
>> pulling out the struct page for each one, to be remapped. However
>> currently this incorrectly iterates over the final dma mapped entries,
>> which is likely just one gigantic sg entry if the iommu is enabled,
>> leading to us only mapping the first struct page (and any physically
>> contiguous pages following it), even if there is potentially lots more
>> data to follow.
>>
>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/7306
>> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
>> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
>> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
>> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
>> ---
>>  drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>> b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>> index 07eee1c09aaf..05ebbdfd3b3b 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>> @@ -40,13 +40,13 @@ static struct sg_table *i915_gem_map_dma_buf(struct
>> dma_buf_attachment *attachme
>>  		goto err;
>>  	}
>>
>> -	ret = sg_alloc_table(st, obj->mm.pages->nents, GFP_KERNEL);
>> +	ret = sg_alloc_table(st, obj->mm.pages->orig_nents, GFP_KERNEL);
>>  	if (ret)
>>  		goto err_free;
>>
>>  	src = obj->mm.pages->sgl;
>>  	dst = st->sgl;
>> -	for (i = 0; i < obj->mm.pages->nents; i++) {
>> +	for (i = 0; i < obj->mm.pages->orig_nents; i++) {
>
> This really should use the for_each_sg() macro.
>
> I proposed a clean up patch a while back that looked like this:
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> index e2cdc2640c08..ccc5d46aa749 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> @@ -7,6 +7,7 @@
>  #include <linux/dma-buf.h>
>  #include <linux/highmem.h>
>  #include <linux/dma-resv.h>
> +#include <linux/scatterlist.h>
>
>  #include "gem/i915_gem_dmabuf.h"
>  #include "i915_drv.h"
> @@ -41,12 +42,10 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attach,
>  	if (ret)
>  		goto err_free;
>
> -	src = obj->mm.pages->sgl;
>  	dst = sgt->sgl;
> -	for (i = 0; i < obj->mm.pages->nents; i++) {
> +	for_each_sg(obj->mm.pages->sgl, src, obj->mm.pages->nents, i) {
>  		sg_set_page(dst, sg_page(src), src->length, 0);
>  		dst = sg_next(dst);
> -		src = sg_next(src);
>  	}
>
> If you are updating the for loop, this might be a reasonable update as well.

Ok, but such cleanups should normally be a separate patch. I'll grab your series and bolt that onto this one.

>
> Mike
>
>>  	sg_set_page(dst, sg_page(src), src->length, 0);
>>  	dst = sg_next(dst);
>>  	src = sg_next(src);
>> --
>> 2.37.3
>
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 07eee1c09aaf..05ebbdfd3b3b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -40,13 +40,13 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
 		goto err;
 	}
 
-	ret = sg_alloc_table(st, obj->mm.pages->nents, GFP_KERNEL);
+	ret = sg_alloc_table(st, obj->mm.pages->orig_nents, GFP_KERNEL);
 	if (ret)
 		goto err_free;
 
 	src = obj->mm.pages->sgl;
 	dst = st->sgl;
-	for (i = 0; i < obj->mm.pages->nents; i++) {
+	for (i = 0; i < obj->mm.pages->orig_nents; i++) {
 		sg_set_page(dst, sg_page(src), src->length, 0);
 		dst = sg_next(dst);
 		src = sg_next(src);
We need to iterate over the original entries here for the sg_table,
pulling out the struct page for each one, to be remapped. However
currently this incorrectly iterates over the final dma mapped entries,
which is likely just one gigantic sg entry if the iommu is enabled,
leading to us only mapping the first struct page (and any physically
contiguous pages following it), even if there is potentially lots more
data to follow.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/7306
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)