Message ID | 20210713205153.1896059-2-daniel.vetter@ffwll.ch (mailing list archive)
---|---
State | New, archived
Series | shmem helpers for vgem
Hi

On 13.07.21 at 22:51, Daniel Vetter wrote:
> tldr; DMA buffers aren't normal memory, expecting that you can use
> them like that (like calling get_user_pages works, or that they're
> accounted like any other normal memory) cannot be guaranteed.
>
> Since some userspace only runs on integrated devices, where all
> buffers are actually all resident system memory, there's a huge
> temptation to assume that a struct page is always present and usable
> like for any other pagecache-backed mmap. This has the potential to
> result in a uapi nightmare.
>
> To close this gap, require that DMA buffer mmaps are VM_PFNMAP, which
> blocks get_user_pages and all the other struct page based
> infrastructure for everyone. In spirit this is the uapi counterpart to
> the kernel-internal CONFIG_DMABUF_DEBUG.
>
> Motivated by a recent patch which wanted to switch the system dma-buf
> heap to vm_insert_page instead of vm_insert_pfn.
>
> v2:
>
> Jason brought up that we also want to guarantee that all ptes have the
> pte_special flag set, to catch fast get_user_pages (on architectures
> that support this). Allowing VM_MIXEDMAP (like VM_SPECIAL does) would
> still allow vm_insert_page, but limiting to VM_PFNMAP will catch that.
>
> From auditing the various functions to insert pfn pte entries
> (vm_insert_pfn_prot, remap_pfn_range and all its callers like
> dma_mmap_wc) it looks like VM_PFNMAP is already required anyway, so
> this should be the correct flag to check for.
>
> References: https://lore.kernel.org/lkml/CAKMK7uHi+mG0z0HUmNt13QCCvutuRVjpcR0NjRL12k-WbWzkRg@mail.gmail.com/
> Acked-by: Christian König <christian.koenig@amd.com>
> Cc: Jason Gunthorpe <jgg@ziepe.ca>
> Cc: Suren Baghdasaryan <surenb@google.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: John Stultz <john.stultz@linaro.org>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: "Christian König" <christian.koenig@amd.com>
> Cc: linux-media@vger.kernel.org
> Cc: linaro-mm-sig@lists.linaro.org
> --
> Resending this so I can test the next two patches for vgem/shmem in
> intel-gfx-ci. Last round failed somehow, but I can't repro that at all
> locally here.
>
> No immediate plans to merge this patch here since ttm isn't addressed
> yet (and there we have the hugepte issue, for which I don't think we
> have a clear consensus yet).
> -Daniel
> ---
>  drivers/dma-buf/dma-buf.c | 15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index 510b42771974..65cbd7f0f16a 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -130,6 +130,7 @@ static struct file_system_type dma_buf_fs_type = {
>  static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
>  {
>  	struct dma_buf *dmabuf;
> +	int ret;
>
>  	if (!is_dma_buf_file(file))
>  		return -EINVAL;
> @@ -145,7 +146,11 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
>  	    dmabuf->size >> PAGE_SHIFT)
>  		return -EINVAL;
>
> -	return dmabuf->ops->mmap(dmabuf, vma);
> +	ret = dmabuf->ops->mmap(dmabuf, vma);
> +
> +	WARN_ON(!(vma->vm_flags & VM_PFNMAP));

Maybe change this to WARN_ON_ONCE(), so it doesn't fill up the kernel
log. Same comment below.
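For reference, a quick sketch of how the return path would read with the
rate-limited variant suggested above (not part of the posted patch, just
an illustration of the proposed tweak):

	ret = dmabuf->ops->mmap(dmabuf, vma);

	/* Warn only once so a misbehaving exporter can't flood the log. */
	WARN_ON_ONCE(!(vma->vm_flags & VM_PFNMAP));

	return ret;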
For either version

Acked-by: Thomas Zimmermann <tzimmermann@suse.de>

Best regards
Thomas

> +
> +	return ret;
>  }
>
>  static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
> @@ -1276,6 +1281,8 @@ EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
>  int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
>  		 unsigned long pgoff)
>  {
> +	int ret;
> +
>  	if (WARN_ON(!dmabuf || !vma))
>  		return -EINVAL;
>
> @@ -1296,7 +1303,11 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
>  	vma_set_file(vma, dmabuf->file);
>  	vma->vm_pgoff = pgoff;
>
> -	return dmabuf->ops->mmap(dmabuf, vma);
> +	ret = dmabuf->ops->mmap(dmabuf, vma);
> +
> +	WARN_ON(!(vma->vm_flags & VM_PFNMAP));
> +
> +	return ret;
>  }
>  EXPORT_SYMBOL_GPL(dma_buf_mmap);
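As an aside for exporter authors: the simplest way to satisfy the new
check is to build the mmap callback on one of the remap_pfn_range()
helpers the commit message mentions. A minimal sketch, assuming a
hypothetical buffer type (my_heap_buffer and its fields are made up for
illustration, not taken from any real driver):

#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/mm.h>

struct my_heap_buffer {
	struct device *dev;	/* device the buffer was allocated for */
	void *cpu_addr;		/* kernel address from dma_alloc_wc() */
	dma_addr_t dma_addr;	/* bus address of the allocation */
	size_t size;		/* allocation size in bytes */
};

static int my_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
{
	struct my_heap_buffer *buf = dmabuf->priv;

	/*
	 * dma_mmap_wc() validates the requested range and typically ends
	 * up in remap_pfn_range(), which sets VM_PFNMAP on the vma and
	 * installs pte_special ptes, so both the WARN_ON() above and
	 * fast get_user_pages() are covered.
	 */
	return dma_mmap_wc(buf->dev, vma, buf->cpu_addr, buf->dma_addr,
			   buf->size);
}

An exporter that instead inserts pages by hand with vm_insert_page()
(and thus VM_MIXEDMAP) would trip the warning, which is exactly the
class of mapping the patch wants to rule out.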