| Message ID | 20240325223339.169350-5-vishal.moola@gmail.com |
|---|---|
| State | New |
| Series | Define struct vm_fault in handle_mm_fault() |
On Mon, Mar 25, 2024 at 03:33:38PM -0700, Vishal Moola (Oracle) wrote:
> Hugetlb calculates addresses and page offsets differently from the rest of
> mm. In order to pass struct vm_fault through the fault pathway we will let
> hugetlb_fault() and __handle_mm_fault() set those variables themselves
> instead.

I don't think this is a great idea. I'd rather not do patch 5 than do
patch 4+5. If you look at the history, commits 742d33729a0df11 and
5857c9209ce58f show that drivers got into the bad habit of changing
address & pgoff, so they got made const to prevent that.

So can we make hugetlbfs OK with using addresses & pgoffsets that aren't
aligned to HPAGE boundaries? Worth playing with for a bit to see how
deep that assumption runs.
On Mon, Mar 25, 2024 at 7:38 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Mon, Mar 25, 2024 at 03:33:38PM -0700, Vishal Moola (Oracle) wrote:
> > Hugetlb calculates addresses and page offsets differently from the rest of
> > mm. In order to pass struct vm_fault through the fault pathway we will let
> > hugetlb_fault() and __handle_mm_fault() set those variables themselves
> > instead.
>
> I don't think this is a great idea. I'd rather not do patch 5 than do
> patch 4+5. If you look at the history, commits 742d33729a0df11 and
> 5857c9209ce58f show that drivers got into the bad habit of changing
> address & pgoff, so they got made const to prevent that.
>
> So can we make hugetlbfs OK with using addresses & pgoffsets that aren't
> aligned to HPAGE boundaries? Worth playing with for a bit to see how
> deep that assumption runs.

Hmmm, I'll take a look. I don't think there should be too many issues
with that.
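For reference, the alignment assumption being discussed comes from how the two
fault paths derive the masked address and page offset today. The snippet below
is a condensed sketch, not verbatim kernel code, of the relevant setup in
__handle_mm_fault() and hugetlb_fault(): the generic path masks to a base-page
boundary and indexes in PAGE_SIZE units, while hugetlb masks to the huge-page
boundary and indexes in huge-page units.

        /* Generic path (__handle_mm_fault): base-page granularity. */
        struct vm_fault vmf = {
                .vma            = vma,
                .real_address   = address,
                .address        = address & PAGE_MASK,             /* masked to PAGE_SIZE boundary */
                .pgoff          = linear_page_index(vma, address), /* index in PAGE_SIZE units */
                .flags          = flags,
        };

        /* hugetlb path (hugetlb_fault): huge-page granularity. */
        struct hstate *h = hstate_vma(vma);
        unsigned long haddr = address & huge_page_mask(h);         /* masked to HPAGE boundary */
        pgoff_t idx = vma_hugecache_offset(h, vma, haddr);         /* index in huge-page units */

The question above is whether the hugetlb side could tolerate the generic,
PAGE_MASK-based values rather than requiring the huge-page-aligned ones.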
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f5a97dec5169..c6874aa7b7f0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -507,10 +507,11 @@ struct vm_fault {
 	const struct {
 		struct vm_area_struct *vma;	/* Target VMA */
 		gfp_t gfp_mask;			/* gfp mask to be used for allocations */
-		pgoff_t pgoff;			/* Logical page offset based on vma */
-		unsigned long address;		/* Faulting virtual address - masked */
 		unsigned long real_address;	/* Faulting virtual address - unmasked */
 	};
+	unsigned long address;		/* Faulting virtual address - masked */
+	pgoff_t pgoff;			/* Logical page offset based on vma */
+
 	enum fault_flag flags;		/* FAULT_FLAG_xxx flags
 					 * XXX: should really be 'const' */
 	pmd_t *pmd;			/* Pointer to pmd entry matching
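The reason these two fields have to leave the anonymous struct at all is that
the block is const-qualified: its members can only be set when the vm_fault is
initialized, and any later assignment is rejected by the compiler. A small
standalone illustration of that pattern follows; the names are made up and
this is not kernel code.

        /*
         * Members inside the const anonymous struct can only be set via the
         * initializer; assigning to them afterwards fails to compile. That is
         * why 'address' and 'pgoff' must move out of the block before
         * hugetlb_fault() can write them.
         */
        struct demo_fault {
                const struct {
                        unsigned long fixed;    /* settable only at initialization */
                };
                unsigned long writable;         /* may be assigned after init */
        };

        static void demo(void)
        {
                struct demo_fault f = { .fixed = 1, .writable = 2 };

                f.writable = 3;         /* fine */
                /* f.fixed = 4; */      /* would not compile: member of const object */
        }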
Hugetlb calculates addresses and page offsets differently from the rest of
mm. In order to pass struct vm_fault through the fault pathway we will let
hugetlb_fault() and __handle_mm_fault() set those variables themselves
instead.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
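With the two fields writable, the plan in the commit message, letting
hugetlb_fault() and __handle_mm_fault() set them, presumably lands in a later
patch of the series that is not shown in this message. The following is a
purely hypothetical sketch of the hugetlb side, assuming handle_mm_fault()
passes a partially filled struct vm_fault down; the signature change and the
assignments are illustrative, not taken from a posted patch.

        vm_fault_t hugetlb_fault(struct vm_fault *vmf)
        {
                struct hstate *h = hstate_vma(vmf->vma);

                /* Re-derive the masked address and offset at huge-page granularity. */
                vmf->address = vmf->real_address & huge_page_mask(h);
                vmf->pgoff = vma_hugecache_offset(h, vmf->vma, vmf->address);

                /* ... the existing hugetlb fault handling would continue here ... */
                return 0;
        }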