Message ID | 20231005135648.2317298-1-willy@infradead.org (mailing list archive) |
---|---|
State | New, archived |
Series | drm: Do not overrun array in drm_gem_get_pages() |
On Thursday, 5 October 2023 at 15:56:47 CEST, Matthew Wilcox (Oracle) wrote:
> If the shared memory object is larger than the DRM object that it backs,
> we can overrun the page array. Limit the number of pages we install
> from each folio to prevent this.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Reported-by: Oleksandr Natalenko <oleksandr@natalenko.name>
> Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
> Link: https://lore.kernel.org/lkml/13360591.uLZWGnKmhe@natalenko.name/
> Fixes: 3291e09a4638 ("drm: convert drm_gem_put_pages() to use a folio_batch")
> Cc: stable@vger.kernel.org # 6.5.x
> ---
>  drivers/gpu/drm/drm_gem.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 6129b89bb366..44a948b80ee1 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -540,7 +540,7 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
>  	struct page **pages;
>  	struct folio *folio;
>  	struct folio_batch fbatch;
> -	int i, j, npages;
> +	long i, j, npages;
>
>  	if (WARN_ON(!obj->filp))
>  		return ERR_PTR(-EINVAL);
> @@ -564,11 +564,13 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
>
>  	i = 0;
>  	while (i < npages) {
> +		long nr;
>  		folio = shmem_read_folio_gfp(mapping, i,
>  				mapping_gfp_mask(mapping));
>  		if (IS_ERR(folio))
>  			goto fail;
> -		for (j = 0; j < folio_nr_pages(folio); j++, i++)
> +		nr = min(npages - i, folio_nr_pages(folio));
> +		for (j = 0; j < nr; j++, i++)
>  			pages[i] = folio_file_page(folio, i);
>
>  		/* Make sure shmem keeps __GFP_DMA32 allocated pages in the

Gentle ping. It would be nice to have this picked up so that it gets into the stable kernel sooner rather than later. Thanks.
On Thu, 05 Oct 2023 14:56:47 +0100, Matthew Wilcox (Oracle) wrote:
> If the shared memory object is larger than the DRM object that it backs,
> we can overrun the page array. Limit the number of pages we install
> from each folio to prevent this.

Applied to drm/drm-misc (drm-misc-fixes). Thanks!

Maxime
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 6129b89bb366..44a948b80ee1 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -540,7 +540,7 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
 	struct page **pages;
 	struct folio *folio;
 	struct folio_batch fbatch;
-	int i, j, npages;
+	long i, j, npages;
 
 	if (WARN_ON(!obj->filp))
 		return ERR_PTR(-EINVAL);
@@ -564,11 +564,13 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
 
 	i = 0;
 	while (i < npages) {
+		long nr;
 		folio = shmem_read_folio_gfp(mapping, i,
 				mapping_gfp_mask(mapping));
 		if (IS_ERR(folio))
 			goto fail;
-		for (j = 0; j < folio_nr_pages(folio); j++, i++)
+		nr = min(npages - i, folio_nr_pages(folio));
+		for (j = 0; j < nr; j++, i++)
 			pages[i] = folio_file_page(folio, i);
 
 		/* Make sure shmem keeps __GFP_DMA32 allocated pages in the