| Message ID | 20191206082426.2958-3-thomas_os@shipmail.org (mailing list archive) |
| --- | --- |
| State | New, archived |
| Series | mm, drm/ttm: Fix pte insertion with customized protection |
On Fri 06-12-19 09:24:26, Thomas Hellström (VMware) wrote:
[...]
> @@ -283,11 +282,26 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
> 			pfn = page_to_pfn(page);
> 		}
>
> +		/*
> +		 * Note that the value of @prot at this point may differ from
> +		 * the value of @vma->vm_page_prot in the caching- and
> +		 * encryption bits. This is because the exact location of the
> +		 * data may not be known at mmap() time and may also change
> +		 * at arbitrary times while the data is mmap'ed.
> +		 * This is ok as long as @vma->vm_page_prot is not used by
> +		 * the core vm to set caching- and encryption bits.
> +		 * This is ensured by core vm using pte_modify() to modify
> +		 * page table entry protection bits (that function preserves
> +		 * old caching- and encryption bits), and the @fault
> +		 * callback being the only function that creates new
> +		 * page table entries.
> +		 */

While this is a very valuable piece of information I believe we need to
document this in the generic code where everybody will find it.
vmf_insert_mixed_prot sounds like a good place to me. So being explicit
about VM_MIXEDMAP. Also a reference from vm_page_prot to this function
would be really helpful.

Thanks!
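For reference, the kind of kernel-doc Michal is asking for might look
roughly like the following, placed at the vmf_insert_mixed_prot()
definition in mm/memory.c. This is only a sketch of possible wording
based on the comment in the patch, not actual in-tree text:

/**
 * vmf_insert_mixed_prot - insert a single pfn into a VM_MIXEDMAP vma
 *                         with a caller-supplied page protection
 * @vma: user vma to map to
 * @addr: target user address of this page
 * @pfn: source kernel pfn
 * @pgprot: pgprot flags for the inserted page table entry
 *
 * This is exactly like vmf_insert_mixed(), except that it allows drivers
 * to override the pgprot on a per-page basis. The caching- and encryption
 * bits of @pgprot may differ from those of @vma->vm_page_prot, because
 * the exact placement of the data backing the mapping may not be known
 * at mmap() time and may change while the data is mapped. This is safe
 * as long as the core vm never uses @vma->vm_page_prot to construct new
 * page table entries for the vma: existing entries are changed with
 * pte_modify(), which preserves the old caching- and encryption bits,
 * and only the vma's @fault callback creates new entries.
 */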
Hi Michal,

On Fri, 2019-12-06 at 11:30 +0100, Michal Hocko wrote:
> On Fri 06-12-19 09:24:26, Thomas Hellström (VMware) wrote:
> [...]
> > @@ -283,11 +282,26 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
> > 			pfn = page_to_pfn(page);
> > 		}
> >
> > +		/*
> > +		 * Note that the value of @prot at this point may differ from
> > +		 * the value of @vma->vm_page_prot in the caching- and
> > +		 * encryption bits. This is because the exact location of the
> > +		 * data may not be known at mmap() time and may also change
> > +		 * at arbitrary times while the data is mmap'ed.
> > +		 * This is ok as long as @vma->vm_page_prot is not used by
> > +		 * the core vm to set caching- and encryption bits.
> > +		 * This is ensured by core vm using pte_modify() to modify
> > +		 * page table entry protection bits (that function preserves
> > +		 * old caching- and encryption bits), and the @fault
> > +		 * callback being the only function that creates new
> > +		 * page table entries.
> > +		 */
>
> While this is a very valuable piece of information I believe we need to
> document this in the generic code where everybody will find it.
> vmf_insert_mixed_prot sounds like a good place to me. So being explicit
> about VM_MIXEDMAP. Also a reference from vm_page_prot to this function
> would be really helpful.
>
> Thanks!

Just to make sure I understand correctly. You'd prefer this (or
similar) text to be present at the vmf_insert_mixed_prot() and
vmf_insert_pfn_prot() definitions for MIXEDMAP and PFNMAP respectively,
and a pointer from vm_page_prot to that text. Is that correct?

Thanks,
Thomas
On Fri 06-12-19 14:16:10, Thomas Hellstrom wrote:
> Hi Michal,
>
> On Fri, 2019-12-06 at 11:30 +0100, Michal Hocko wrote:
> > On Fri 06-12-19 09:24:26, Thomas Hellström (VMware) wrote:
> > [...]
> > > @@ -283,11 +282,26 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
> > > 			pfn = page_to_pfn(page);
> > > 		}
> > >
> > > +		/*
> > > +		 * Note that the value of @prot at this point may differ from
> > > +		 * the value of @vma->vm_page_prot in the caching- and
> > > +		 * encryption bits. This is because the exact location of the
> > > +		 * data may not be known at mmap() time and may also change
> > > +		 * at arbitrary times while the data is mmap'ed.
> > > +		 * This is ok as long as @vma->vm_page_prot is not used by
> > > +		 * the core vm to set caching- and encryption bits.
> > > +		 * This is ensured by core vm using pte_modify() to modify
> > > +		 * page table entry protection bits (that function preserves
> > > +		 * old caching- and encryption bits), and the @fault
> > > +		 * callback being the only function that creates new
> > > +		 * page table entries.
> > > +		 */
> >
> > While this is a very valuable piece of information I believe we need to
> > document this in the generic code where everybody will find it.
> > vmf_insert_mixed_prot sounds like a good place to me. So being explicit
> > about VM_MIXEDMAP. Also a reference from vm_page_prot to this function
> > would be really helpful.
> >
> > Thanks!
>
> Just to make sure I understand correctly. You'd prefer this (or
> similar) text to be present at the vmf_insert_mixed_prot() and
> vmf_insert_pfn_prot() definitions for MIXEDMAP and PFNMAP respectively,
> and a pointer from vm_page_prot to that text. Is that correct?

Yes. You can keep whatever is specific to TTM here but the rest should
be somewhere visible to users of the interface and a note at
vm_page_prot should help anybody touching the generic/core code to not
break those expectations.

Thanks!
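The matching pointer from vm_page_prot could then be a short note at its
declaration in include/linux/mm_types.h, along these lines (again only a
sketch of possible wording, shown as an excerpt of struct
vm_area_struct):

	/*
	 * Access permissions of this VMA.
	 *
	 * Drivers may install page table entries whose caching- and
	 * encryption bits differ from vm_page_prot; see
	 * vmf_insert_mixed_prot() and vmf_insert_pfn_prot(). Core code
	 * must therefore not use vm_page_prot to rewrite the protection
	 * of existing entries; pte_modify() preserves those bits.
	 */
	pgprot_t vm_page_prot;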
diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index e6495ca2630b..35d0a0e7aacc 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -173,7 +173,6 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
 				    pgoff_t num_prefault)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	struct vm_area_struct cvma = *vma;
 	struct ttm_buffer_object *bo = vma->vm_private_data;
 	struct ttm_bo_device *bdev = bo->bdev;
 	unsigned long page_offset;
@@ -244,7 +243,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
 		goto out_io_unlock;
 	}

-	cvma.vm_page_prot = ttm_io_prot(bo->mem.placement, prot);
+	prot = ttm_io_prot(bo->mem.placement, prot);
 	if (!bo->mem.bus.is_iomem) {
 		struct ttm_operation_ctx ctx = {
 			.interruptible = false,
@@ -260,7 +259,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
 		}
 	} else {
 		/* Iomem should not be marked encrypted */
-		cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
+		prot = pgprot_decrypted(prot);
 	}

 	/*
@@ -283,11 +282,26 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
 			pfn = page_to_pfn(page);
 		}

+		/*
+		 * Note that the value of @prot at this point may differ from
+		 * the value of @vma->vm_page_prot in the caching- and
+		 * encryption bits. This is because the exact location of the
+		 * data may not be known at mmap() time and may also change
+		 * at arbitrary times while the data is mmap'ed.
+		 * This is ok as long as @vma->vm_page_prot is not used by
+		 * the core vm to set caching- and encryption bits.
+		 * This is ensured by core vm using pte_modify() to modify
+		 * page table entry protection bits (that function preserves
+		 * old caching- and encryption bits), and the @fault
+		 * callback being the only function that creates new
+		 * page table entries.
+		 */
 		if (vma->vm_flags & VM_MIXEDMAP)
-			ret = vmf_insert_mixed(&cvma, address,
-					__pfn_to_pfn_t(pfn, PFN_DEV));
+			ret = vmf_insert_mixed_prot(vma, address,
+						    __pfn_to_pfn_t(pfn, PFN_DEV),
+						    prot);
 		else
-			ret = vmf_insert_pfn(&cvma, address, pfn);
+			ret = vmf_insert_pfn_prot(vma, address, pfn, prot);

 		/* Never error on prefaulted PTEs */
 		if (unlikely((ret & VM_FAULT_ERROR))) {
@@ -319,7 +333,7 @@ vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
 	if (ret)
 		return ret;

-	prot = vm_get_page_prot(vma->vm_flags);
+	prot = vma->vm_page_prot;
 	ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT);
 	if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
 		return ret;
diff --git a/mm/memory.c b/mm/memory.c
index b9e7f1d56b1c..4c26c27afb0a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1769,6 +1769,7 @@ vm_fault_t vmf_insert_mixed_prot(struct vm_area_struct *vma, unsigned long addr,
 {
 	return __vm_insert_mixed(vma, addr, pfn, pgprot, false);
 }
+EXPORT_SYMBOL(vmf_insert_mixed_prot);

 vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
 		pfn_t pfn)
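The pattern the reworked interface enables, choosing the page protection
at fault time instead of relying on vma->vm_page_prot, looks roughly like
the following driver fault handler. This is a sketch only: my_object,
my_obj_pfn() and my_obj_is_iomem() are hypothetical names, not part of
any real driver.

#include <linux/mm.h>
#include <linux/pfn_t.h>

/* Hypothetical fault handler using the per-page protection helpers. */
static vm_fault_t my_drv_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct my_object *obj = vma->vm_private_data;
	unsigned long pfn = my_obj_pfn(obj, vmf->pgoff);
	/* Start from the vma's protection to keep the generic bits. */
	pgprot_t prot = vma->vm_page_prot;

	/* Override caching depending on where the data currently lives. */
	if (my_obj_is_iomem(obj))
		prot = pgprot_writecombine(prot);

	if (vma->vm_flags & VM_MIXEDMAP)
		return vmf_insert_mixed_prot(vma, vmf->address,
					     __pfn_to_pfn_t(pfn, PFN_DEV),
					     prot);
	return vmf_insert_pfn_prot(vma, vmf->address, pfn, prot);
}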