Message ID | 20230919194855.347582-1-willy@infradead.org (mailing list archive) |
---|---|
State | New, archived |
Series | i915: Limit the length of an sg list to the requested length |
On 19.09.2023 21:48, Matthew Wilcox (Oracle) wrote:
> The folio conversion changed the behaviour of shmem_sg_alloc_table() to
> put the entire length of the last folio into the sg list, even if the sg
> list should have been shorter. gen8_ggtt_insert_entries() relied on the
> list being the right langth and would overrun the end of the page tables.

s/langth/length/, I can fix it on applying.

> Other functions may also have been affected.
>
> Clamp the length of the last entry in the sg list to be the expected
> length.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Fixes: 0b62af28f249 ("i915: convert shmem_sg_free_table() to use a folio_batch")
> Cc: stable@vger.kernel.org # 6.5.x
> Link: https://gitlab.freedesktop.org/drm/intel/-/issues/9256
> Link: https://lore.kernel.org/lkml/6287208.lOV4Wx5bFT@natalenko.name/
> Reported-by: Oleksandr Natalenko <oleksandr@natalenko.name>
> Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>

Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com>

Regards
Andrzej

> ---
>  drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 11 +++++++----
>  1 file changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
> index 8f1633c3fb93..73a4a4eb29e0 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
> @@ -100,6 +100,7 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
>  	st->nents = 0;
>  	for (i = 0; i < page_count; i++) {
>  		struct folio *folio;
> +		unsigned long nr_pages;
>  		const unsigned int shrink[] = {
>  			I915_SHRINK_BOUND | I915_SHRINK_UNBOUND,
>  			0,
> @@ -150,6 +151,8 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
>  			}
>  		} while (1);
>
> +		nr_pages = min_t(unsigned long,
> +				 folio_nr_pages(folio), page_count - i);
>  		if (!i ||
>  		    sg->length >= max_segment ||
>  		    folio_pfn(folio) != next_pfn) {
> @@ -157,13 +160,13 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
>  				sg = sg_next(sg);
>
>  			st->nents++;
> -			sg_set_folio(sg, folio, folio_size(folio), 0);
> +			sg_set_folio(sg, folio, nr_pages * PAGE_SIZE, 0);
>  		} else {
>  			/* XXX: could overflow? */
> -			sg->length += folio_size(folio);
> +			sg->length += nr_pages * PAGE_SIZE;
>  		}
> -		next_pfn = folio_pfn(folio) + folio_nr_pages(folio);
> -		i += folio_nr_pages(folio) - 1;
> +		next_pfn = folio_pfn(folio) + nr_pages;
> +		i += nr_pages - 1;
>
>  		/* Check that the i965g/gm workaround works. */
>  		GEM_BUG_ON(gfp & __GFP_DMA32 && next_pfn >= 0x00100000UL);