
drm/dma-helpers: Don't change vma flags

Message ID 20221123172802.1049424-1-daniel.vetter@ffwll.ch (mailing list archive)
State New, archived
Series drm/dma-helpers: Don't change vma flags

Commit Message

Daniel Vetter Nov. 23, 2022, 5:28 p.m. UTC
This code was added in b65e64f7ccd4 ("drm/cma: Use
dma_mmap_writecombine() to mmap buffer"), but that commit does not
explain why it's needed.

It should be entirely unnecessary, because remap_pfn_range(), which is
what the various dma_mmap functions are built on top of, already
adjusts the vma flags unconditionally:

https://elixir.bootlin.com/linux/v6.1-rc6/source/mm/memory.c#L2518
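
For reference, the flag handling behind remap_pfn_range() at that line
is, roughly paraphrased rather than quoted verbatim:

	/* the core marks the VMA as a raw PFN mapping and pins its size,
	 * regardless of what the caller set up beforehand:
	 */
	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;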

More importantly, it unconditionally sets VM_PFNMAP, so clearing that
flag here does not make much sense.

Patch motivated by discussions around enforcing VM_PFNMAP semantics for
all dma-buf users, where Thomas asked why the dma helpers will work with
that dma_buf_mmap() contract.

References: https://lore.kernel.org/dri-devel/5c3c8d4f-2c06-9210-b00a-4d0ff6f6fbb7@suse.de/
Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
---
 drivers/gpu/drm/drm_gem_dma_helper.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

Comments

Robin Murphy Nov. 24, 2022, 11:11 a.m. UTC | #1
On 2022-11-23 17:28, Daniel Vetter wrote:
> This code was added in b65e64f7ccd4 ("drm/cma: Use
> dma_mmap_writecombine() to mmap buffer"), but does not explain why
> it's needed.
> 
> It should be entirely unnecessary, because remap_pfn_range(), which is
> what the various dma_mmap functions are built on top of, does already
> unconditionally adjust the vma flags:

Not all dma_mmap_*() implementations use remap_pfn_range() though. For
instance, on ARM one set of DMA ops uses vm_map_pages(), and AFAICS not
all the relevant drivers would set VM_MIXEDMAP to avoid hitting the
BUG_ON(vma->vm_flags & VM_PFNMAP) in there.

Robin.

> https://elixir.bootlin.com/linux/v6.1-rc6/source/mm/memory.c#L2518
> 
> More importantly, it does unconditionally set VM_PFNMAP, so clearing
> that does not make much sense.
> 
> Patch motivated by discussions around enforcing VM_PFNMAP semantics for
> all dma-buf users, where Thomas asked why dma helpers will work with
> that dma_buf_mmap() contract.
> 
> References: https://lore.kernel.org/dri-devel/5c3c8d4f-2c06-9210-b00a-4d0ff6f6fbb7@suse.de/
> Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
> Cc: Dave Airlie <airlied@redhat.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Maxime Ripard <mripard@kernel.org>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: "Christian König" <christian.koenig@amd.com>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> ---
>   drivers/gpu/drm/drm_gem_dma_helper.c | 7 ++-----
>   1 file changed, 2 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_dma_helper.c b/drivers/gpu/drm/drm_gem_dma_helper.c
> index 1e658c448366..637a5cc62457 100644
> --- a/drivers/gpu/drm/drm_gem_dma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_dma_helper.c
> @@ -525,13 +525,10 @@ int drm_gem_dma_mmap(struct drm_gem_dma_object *dma_obj, struct vm_area_struct *
>   	int ret;
>   
>   	/*
> -	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
> -	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
> -	 * the whole buffer.
> +	 * Set the vm_pgoff (used as a fake buffer offset by DRM) to 0 as we
> +	 * want to map the whole buffer.
>   	 */
>   	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
> -	vma->vm_flags &= ~VM_PFNMAP;
> -	vma->vm_flags |= VM_DONTEXPAND;
>   
>   	if (dma_obj->map_noncoherent) {
>   		vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
Daniel Vetter Nov. 24, 2022, 5:24 p.m. UTC | #2
On Thu, Nov 24, 2022 at 11:11:21AM +0000, Robin Murphy wrote:
> On 2022-11-23 17:28, Daniel Vetter wrote:
> > This code was added in b65e64f7ccd4 ("drm/cma: Use
> > dma_mmap_writecombine() to mmap buffer"), but does not explain why
> > it's needed.
> > 
> > It should be entirely unnecessary, because remap_pfn_range(), which is
> > what the various dma_mmap functions are built on top of, does already
> > unconditionally adjust the vma flags:
> 
> Not all dma_mmap_*() implementations use remap_pfn_range() though. For
> instance on ARM where one set of DMA ops uses vm_map_pages(), but AFAICS not
> all the relevant drivers would set VM_MIXEDMAP to prevent reaching the
> BUG_ON(vma->vm_flags & VM_PFNMAP) in there.

Uh, a dma_mmap which does not use VM_PFNMAP has a pretty good chance of
being busted, since that allows get_user_pages() on these memory
allocations. And I'm really not sure that's a bright idea ...
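
(For context: get_user_pages() refuses PFN mappings, so VM_PFNMAP is what
keeps GUP away from this memory in the first place. The relevant check in
mm/gup.c's check_vma_flags() is roughly the following, from memory, so
please double-check:

	if (vm_flags & (VM_IO | VM_PFNMAP))
		return -EFAULT;

so a mapping without VM_PFNMAP loses that protection.)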

Can you please point me at these dma-ops so that I can try and understand
what they're trying to do?
-Daniel

> 
> Robin.
> 
> > https://elixir.bootlin.com/linux/v6.1-rc6/source/mm/memory.c#L2518
> > 
> > More importantly, it does unconditionally set VM_PFNMAP, so clearing
> > that does not make much sense.
> > 
> > Patch motivated by discussions around enforcing VM_PFNMAP semantics for
> > all dma-buf users, where Thomas asked why dma helpers will work with
> > that dma_buf_mmap() contract.
> > 
> > References: https://lore.kernel.org/dri-devel/5c3c8d4f-2c06-9210-b00a-4d0ff6f6fbb7@suse.de/
> > Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
> > Cc: Dave Airlie <airlied@redhat.com>
> > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > Cc: Maxime Ripard <mripard@kernel.org>
> > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > Cc: Sumit Semwal <sumit.semwal@linaro.org>
> > Cc: "Christian König" <christian.koenig@amd.com>
> > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > ---
> >   drivers/gpu/drm/drm_gem_dma_helper.c | 7 ++-----
> >   1 file changed, 2 insertions(+), 5 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/drm_gem_dma_helper.c b/drivers/gpu/drm/drm_gem_dma_helper.c
> > index 1e658c448366..637a5cc62457 100644
> > --- a/drivers/gpu/drm/drm_gem_dma_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_dma_helper.c
> > @@ -525,13 +525,10 @@ int drm_gem_dma_mmap(struct drm_gem_dma_object *dma_obj, struct vm_area_struct *
> >   	int ret;
> >   	/*
> > -	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
> > -	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
> > -	 * the whole buffer.
> > +	 * Set the vm_pgoff (used as a fake buffer offset by DRM) to 0 as we
> > +	 * want to map the whole buffer.
> >   	 */
> >   	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
> > -	vma->vm_flags &= ~VM_PFNMAP;
> > -	vma->vm_flags |= VM_DONTEXPAND;
> >   	if (dma_obj->map_noncoherent) {
> >   		vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
Robin Murphy Nov. 24, 2022, 6:12 p.m. UTC | #3
On 2022-11-24 17:24, Daniel Vetter wrote:
> On Thu, Nov 24, 2022 at 11:11:21AM +0000, Robin Murphy wrote:
>> On 2022-11-23 17:28, Daniel Vetter wrote:
>>> This code was added in b65e64f7ccd4 ("drm/cma: Use
>>> dma_mmap_writecombine() to mmap buffer"), but does not explain why
>>> it's needed.
>>>
>>> It should be entirely unnecessary, because remap_pfn_range(), which is
>>> what the various dma_mmap functions are built on top of, does already
>>> unconditionally adjust the vma flags:
>>
>> Not all dma_mmap_*() implementations use remap_pfn_range() though. For
>> instance on ARM where one set of DMA ops uses vm_map_pages(), but AFAICS not
>> all the relevant drivers would set VM_MIXEDMAP to prevent reaching the
>> BUG_ON(vma->vm_flags & VM_PFNMAP) in there.
> 
> Uh a dma_mmap which does not use VM_PFNMAP has pretty good chances of
> being busted, since that allows get_user_pages on these memory
> allocations. And I'm really not sure that's a bright idea ...
> 
> Can you please point me at these dma-ops so that I can try and understand
> what they're trying to do?

See arm_iommu_mmap_attrs(), but also one of the paths in 
iommu_dma_mmap() for both arm64 and x86. These aren't using 
remap_pfn_range() because they're mapping a potentially-disjoint set of 
arbitrary pages, not a single physically-contiguous range. And for the 
avoidance of doubt, yes, in those cases they will always be real kernel 
pages. dma_mmap_attrs() can be relied upon to do the right thing for 
whatever dma_alloc_attrs() did; what isn't reliable is trying to 
second-guess from outside exactly what that might be.
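
As a purely illustrative sketch (the example_* helpers here are
hypothetical, not the actual dma-ops implementations), the two mapping
styles being contrasted look something like:

#include <linux/mm.h>

/* Contiguous case: a single remap_pfn_range() call; the core marks the
 * VMA VM_IO | VM_PFNMAP. */
static int example_mmap_contig(struct vm_area_struct *vma, unsigned long pfn)
{
	return remap_pfn_range(vma, vma->vm_start, pfn,
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot);
}

/* Discontiguous case: vm_map_pages() inserts each struct page with
 * vm_insert_page(), which requires the VMA *not* to be VM_PFNMAP and
 * marks it VM_MIXEDMAP instead. */
static int example_mmap_pages(struct vm_area_struct *vma,
			      struct page **pages, unsigned long count)
{
	return vm_map_pages(vma, pages, count);
}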

I forgot to mention also that removing the VM_DONTEXPAND line will 
seemingly just reintroduce the annoying warning spam for which we added 
it in the first place (and 59f39bfa6553 does document this same reasoning).

Thanks,
Robin.

> -Daniel
> 
>>
>> Robin.
>>
>>> https://elixir.bootlin.com/linux/v6.1-rc6/source/mm/memory.c#L2518
>>>
>>> More importantly, it does unconditionally set VM_PFNMAP, so clearing
>>> that does not make much sense.
>>>
>>> Patch motivated by discussions around enforcing VM_PFNMAP semantics for
>>> all dma-buf users, where Thomas asked why dma helpers will work with
>>> that dma_buf_mmap() contract.
>>>
>>> References: https://lore.kernel.org/dri-devel/5c3c8d4f-2c06-9210-b00a-4d0ff6f6fbb7@suse.de/
>>> Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
>>> Cc: Dave Airlie <airlied@redhat.com>
>>> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>>> Cc: Maxime Ripard <mripard@kernel.org>
>>> Cc: Thomas Zimmermann <tzimmermann@suse.de>
>>> Cc: Sumit Semwal <sumit.semwal@linaro.org>
>>> Cc: "Christian K�nig" <christian.koenig@amd.com>
>>> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
>>> ---
>>>    drivers/gpu/drm/drm_gem_dma_helper.c | 7 ++-----
>>>    1 file changed, 2 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/drm_gem_dma_helper.c b/drivers/gpu/drm/drm_gem_dma_helper.c
>>> index 1e658c448366..637a5cc62457 100644
>>> --- a/drivers/gpu/drm/drm_gem_dma_helper.c
>>> +++ b/drivers/gpu/drm/drm_gem_dma_helper.c
>>> @@ -525,13 +525,10 @@ int drm_gem_dma_mmap(struct drm_gem_dma_object *dma_obj, struct vm_area_struct *
>>>    	int ret;
>>>    	/*
>>> -	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
>>> -	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
>>> -	 * the whole buffer.
>>> +	 * Set the vm_pgoff (used as a fake buffer offset by DRM) to 0 as we
>>> +	 * want to map the whole buffer.
>>>    	 */
>>>    	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
>>> -	vma->vm_flags &= ~VM_PFNMAP;
>>> -	vma->vm_flags |= VM_DONTEXPAND;
>>>    	if (dma_obj->map_noncoherent) {
>>>    		vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
>
Laurent Pinchart Nov. 24, 2022, 7:50 p.m. UTC | #4
On Thu, Nov 24, 2022 at 06:24:52PM +0100, Daniel Vetter wrote:
> On Thu, Nov 24, 2022 at 11:11:21AM +0000, Robin Murphy wrote:
> > On 2022-11-23 17:28, Daniel Vetter wrote:
> > > This code was added in b65e64f7ccd4 ("drm/cma: Use
> > > dma_mmap_writecombine() to mmap buffer"), but does not explain why
> > > it's needed.
> > > 
> > > It should be entirely unnecessary, because remap_pfn_range(), which is
> > > what the various dma_mmap functions are built on top of, does already
> > > unconditionally adjust the vma flags:
> > 
> > Not all dma_mmap_*() implementations use remap_pfn_range() though. For
> > instance on ARM where one set of DMA ops uses vm_map_pages(), but AFAICS not
> > all the relevant drivers would set VM_MIXEDMAP to prevent reaching the
> > BUG_ON(vma->vm_flags & VM_PFNMAP) in there.
> 
> Uh a dma_mmap which does not use VM_PFNMAP has pretty good chances of
> being busted, since that allows get_user_pages on these memory
> allocations. And I'm really not sure that's a bright idea ...
> 
> Can you please point me at these dma-ops so that I can try and understand
> what they're trying to do?

I've taken a trip back in time to v3.14, when b65e64f7ccd4 was merged,
and checked the x86, arm and arm64 dma mapping implementations. They all
call remap_pfn_range() except for arm_iommu_mmap_attrs(), which uses
vm_insert_page(). It was implemented as

int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
			struct page *page)
{
	if (addr < vma->vm_start || addr >= vma->vm_end)
		return -EFAULT;
	if (!page_count(page))
		return -EINVAL;
	if (!(vma->vm_flags & VM_MIXEDMAP)) {
		BUG_ON(down_read_trylock(&vma->vm_mm->mmap_sem));
		BUG_ON(vma->vm_flags & VM_PFNMAP);
		vma->vm_flags |= VM_MIXEDMAP;
	}
	return insert_page(vma, addr, page, vma->vm_page_prot);
}

I don't recall if I just hit that BUG_ON() back then, or if I actually
understood what I was doing :-)

Today in mainline arm_iommu_mmap_attrs() uses vm_map_pages(), which
calls vm_insert_page(), and the BUG_ON() is still there.

> > > https://elixir.bootlin.com/linux/v6.1-rc6/source/mm/memory.c#L2518
> > > 
> > > More importantly, it does unconditionally set VM_PFNMAP, so clearing
> > > that does not make much sense.
> > > 
> > > Patch motivated by discussions around enforcing VM_PFNMAP semantics for
> > > all dma-buf users, where Thomas asked why dma helpers will work with
> > > that dma_buf_mmap() contract.
> > > 
> > > References: https://lore.kernel.org/dri-devel/5c3c8d4f-2c06-9210-b00a-4d0ff6f6fbb7@suse.de/
> > > Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
> > > Cc: Dave Airlie <airlied@redhat.com>
> > > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > > Cc: Maxime Ripard <mripard@kernel.org>
> > > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > > Cc: Sumit Semwal <sumit.semwal@linaro.org>
> > > Cc: "Christian König" <christian.koenig@amd.com>
> > > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > > ---
> > >   drivers/gpu/drm/drm_gem_dma_helper.c | 7 ++-----
> > >   1 file changed, 2 insertions(+), 5 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/drm_gem_dma_helper.c b/drivers/gpu/drm/drm_gem_dma_helper.c
> > > index 1e658c448366..637a5cc62457 100644
> > > --- a/drivers/gpu/drm/drm_gem_dma_helper.c
> > > +++ b/drivers/gpu/drm/drm_gem_dma_helper.c
> > > @@ -525,13 +525,10 @@ int drm_gem_dma_mmap(struct drm_gem_dma_object *dma_obj, struct vm_area_struct *
> > >   	int ret;
> > >   	/*
> > > -	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
> > > -	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
> > > -	 * the whole buffer.
> > > +	 * Set the vm_pgoff (used as a fake buffer offset by DRM) to 0 as we
> > > +	 * want to map the whole buffer.
> > >   	 */
> > >   	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
> > > -	vma->vm_flags &= ~VM_PFNMAP;
> > > -	vma->vm_flags |= VM_DONTEXPAND;
> > >   	if (dma_obj->map_noncoherent) {
> > >   		vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);

Patch

diff --git a/drivers/gpu/drm/drm_gem_dma_helper.c b/drivers/gpu/drm/drm_gem_dma_helper.c
index 1e658c448366..637a5cc62457 100644
--- a/drivers/gpu/drm/drm_gem_dma_helper.c
+++ b/drivers/gpu/drm/drm_gem_dma_helper.c
@@ -525,13 +525,10 @@  int drm_gem_dma_mmap(struct drm_gem_dma_object *dma_obj, struct vm_area_struct *
 	int ret;
 
 	/*
-	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
-	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
-	 * the whole buffer.
+	 * Set the vm_pgoff (used as a fake buffer offset by DRM) to 0 as we
+	 * want to map the whole buffer.
 	 */
 	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
-	vma->vm_flags &= ~VM_PFNMAP;
-	vma->vm_flags |= VM_DONTEXPAND;
 
 	if (dma_obj->map_noncoherent) {
 		vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);