
drm: Do not overrun array in drm_gem_get_pages()

Message ID 20231005135648.2317298-1-willy@infradead.org (mailing list archive)
State New, archived
Series drm: Do not overrun array in drm_gem_get_pages()

Commit Message

Matthew Wilcox (Oracle) Oct. 5, 2023, 1:56 p.m. UTC
If the shared memory object is larger than the DRM object that it backs,
we can overrun the page array.  Limit the number of pages we install
from each folio to prevent this.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reported-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Link: https://lore.kernel.org/lkml/13360591.uLZWGnKmhe@natalenko.name/
Fixes: 3291e09a4638 ("drm: convert drm_gem_put_pages() to use a folio_batch")
Cc: stable@vger.kernel.org # 6.5.x
---
 drivers/gpu/drm/drm_gem.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
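
For context, here is a minimal userspace sketch of the clamping this patch introduces. The page counts (a 5-page object backed by an 8-page folio) are hypothetical, chosen only to show how the min() bound stops the fill loop at the array boundary:

    #include <stdio.h>

    /* Hypothetical sizes, not taken from the report: a GEM object
     * spanning 5 pages whose shmem mapping hands back an order-3
     * (8-page) folio. */
    #define NPAGES      5L
    #define FOLIO_PAGES 8L

    static long min_long(long a, long b)
    {
            return a < b ? a : b;
    }

    int main(void)
    {
            long i = 0;

            while (i < NPAGES) {
                    /* The clamp from the patch: never install more
                     * pages than remain in the pages[] array. */
                    long nr = min_long(NPAGES - i, FOLIO_PAGES);

                    printf("install pages[%ld..%ld]\n", i, i + nr - 1);
                    i += nr;
            }
            /* Without the clamp, the inner loop would run to
             * FOLIO_PAGES and write pages[5..7] past the array. */
            return 0;
    }

Running it prints "install pages[0..4]": the fill stops exactly at the array boundary after one folio.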

Comments

Oleksandr Natalenko Oct. 12, 2023, 8:01 a.m. UTC | #1
On Thursday, 5 October 2023 15:56:47 CEST Matthew Wilcox (Oracle) wrote:
> If the shared memory object is larger than the DRM object that it backs,
> we can overrun the page array.  Limit the number of pages we install
> from each folio to prevent this.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Reported-by: Oleksandr Natalenko <oleksandr@natalenko.name>
> Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
> Link: https://lore.kernel.org/lkml/13360591.uLZWGnKmhe@natalenko.name/
> Fixes: 3291e09a4638 ("drm: convert drm_gem_put_pages() to use a folio_batch")
> Cc: stable@vger.kernel.org # 6.5.x
> ---
>  drivers/gpu/drm/drm_gem.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 6129b89bb366..44a948b80ee1 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -540,7 +540,7 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
>  	struct page **pages;
>  	struct folio *folio;
>  	struct folio_batch fbatch;
> -	int i, j, npages;
> +	long i, j, npages;
>  
>  	if (WARN_ON(!obj->filp))
>  		return ERR_PTR(-EINVAL);
> @@ -564,11 +564,13 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
>  
>  	i = 0;
>  	while (i < npages) {
> +		long nr;
>  		folio = shmem_read_folio_gfp(mapping, i,
>  				mapping_gfp_mask(mapping));
>  		if (IS_ERR(folio))
>  			goto fail;
> -		for (j = 0; j < folio_nr_pages(folio); j++, i++)
> +		nr = min(npages - i, folio_nr_pages(folio));
> +		for (j = 0; j < nr; j++, i++)
>  			pages[i] = folio_file_page(folio, i);
>  
>  		/* Make sure shmem keeps __GFP_DMA32 allocated pages in the
> 

Gentle ping. It would be nice to have this picked up so that it gets into the stable kernel sooner rather than later.

Thanks.
Maxime Ripard Oct. 12, 2023, 8:45 a.m. UTC | #2
On Thu, 05 Oct 2023 14:56:47 +0100, Matthew Wilcox (Oracle) wrote:
> If the shared memory object is larger than the DRM object that it backs,
> we can overrun the page array.  Limit the number of pages we install
> from each folio to prevent this.
> 

Applied to drm/drm-misc (drm-misc-fixes).

Thanks!
Maxime

Patch

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 6129b89bb366..44a948b80ee1 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -540,7 +540,7 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
 	struct page **pages;
 	struct folio *folio;
 	struct folio_batch fbatch;
-	int i, j, npages;
+	long i, j, npages;
 
 	if (WARN_ON(!obj->filp))
 		return ERR_PTR(-EINVAL);
@@ -564,11 +564,13 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
 
 	i = 0;
 	while (i < npages) {
+		long nr;
 		folio = shmem_read_folio_gfp(mapping, i,
 				mapping_gfp_mask(mapping));
 		if (IS_ERR(folio))
 			goto fail;
-		for (j = 0; j < folio_nr_pages(folio); j++, i++)
+		nr = min(npages - i, folio_nr_pages(folio));
+		for (j = 0; j < nr; j++, i++)
 			pages[i] = folio_file_page(folio, i);
 
 		/* Make sure shmem keeps __GFP_DMA32 allocated pages in the
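
One detail worth noting: the counters change from int to long because folio_nr_pages() returns long and the kernel's min() macro insists that its two operands have matching types. A simplified sketch of the shape of that check (the real macro in include/linux/minmax.h is more elaborate):

    /* Classic type-checked min(); comparing &_x and &_y makes the
     * compiler warn about "comparison of distinct pointer types"
     * whenever x and y differ, which the kernel treats as a build
     * failure. */
    #define min(x, y) ({                    \
            typeof(x) _x = (x);             \
            typeof(y) _y = (y);             \
            (void)(&_x == &_y);             \
            _x < _y ? _x : _y; })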