Message ID: 20230809061105.3369958-2-fengwei.yin@intel.com
State: New
Series: support large folio for mlock
On 09/08/2023 07:11, Yin Fengwei wrote:
> It will be used to check whether the folio is mapped to specific
> VMA and whether the mapping address of folio is in the range.
>
> Also a helper function folio_within_vma() to check whether folio
> is in the range of vma based on folio_in_range().
>
> Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
> ---
>  mm/internal.h | 35 +++++++++++++++++++++++++++++++++++
>  1 file changed, 35 insertions(+)
>
> diff --git a/mm/internal.h b/mm/internal.h
> index 154da4f0d557..5d1b71010fd2 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -585,6 +585,41 @@ extern long faultin_vma_page_range(struct vm_area_struct *vma,
>  				bool write, int *locked);
>  extern bool mlock_future_ok(struct mm_struct *mm, unsigned long flags,
>  				unsigned long bytes);
> +
> +static inline bool
> +folio_in_range(struct folio *folio, struct vm_area_struct *vma,
> +		unsigned long start, unsigned long end)

I still think it would be beneficial to have a comment block describing the
requirements and behaviour of the function:

 - folio must have at least 1 page that is mapped in vma
 - the result tells you if the folio lies within the range, but it does not
   tell you that all of its pages are actually _mapped_ (e.g. they may not
   have been faulted in yet).
 - I think [start, end) is intended to intersect with the vma too? (although
   I'm pretty sure the logic works if it doesn't?)

> +{
> +	pgoff_t pgoff, addr;
> +	unsigned long vma_pglen = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
> +
> +	VM_WARN_ON_FOLIO(folio_test_ksm(folio), folio);
> +	if (start > end)
> +		return false;
> +
> +	if (start < vma->vm_start)
> +		start = vma->vm_start;
> +
> +	if (end > vma->vm_end)
> +		end = vma->vm_end;
> +
> +	pgoff = folio_pgoff(folio);
> +
> +	/* if folio start address is not in vma range */
> +	if (!in_range(pgoff, vma->vm_pgoff, vma_pglen))
> +		return false;
> +
> +	addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
> +
> +	return !(addr < start || end - addr < folio_size(folio));
> +}
> +
> +static inline bool
> +folio_within_vma(struct folio *folio, struct vm_area_struct *vma)

why call this *within* but call the folio_in_range() *in*? Feels cleaner to
use the same word for both.

> +{
> +	return folio_in_range(folio, vma, vma->vm_start, vma->vm_end);
> +}
> +
>  /*
>   * mlock_vma_folio() and munlock_vma_folio():
>   *  should be called with vma's mmap_lock held for read or write,
On 8/10/23 03:34, Ryan Roberts wrote:
> On 09/08/2023 07:11, Yin Fengwei wrote:
>> It will be used to check whether the folio is mapped to specific
>> VMA and whether the mapping address of folio is in the range.
>>
>> Also a helper function folio_within_vma() to check whether folio
>> is in the range of vma based on folio_in_range().
>>
>> Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
>> ---
>>  mm/internal.h | 35 +++++++++++++++++++++++++++++++++++
>>  1 file changed, 35 insertions(+)
>>
>> diff --git a/mm/internal.h b/mm/internal.h
>> index 154da4f0d557..5d1b71010fd2 100644
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -585,6 +585,41 @@ extern long faultin_vma_page_range(struct vm_area_struct *vma,
>>  				bool write, int *locked);
>>  extern bool mlock_future_ok(struct mm_struct *mm, unsigned long flags,
>>  				unsigned long bytes);
>> +
>> +static inline bool
>> +folio_in_range(struct folio *folio, struct vm_area_struct *vma,
>> +		unsigned long start, unsigned long end)
>
> I still think it would be beneficial to have a comment block describing the
> requirements and behaviour of the function:

Definitely. Thanks a lot for reminding me again.

>
>  - folio must have at least 1 page that is mapped in vma

This is typical usage.

>  - the result tells you if the folio lies within the range, but it does not
>    tell you that all of its pages are actually _mapped_ (e.g. they may not
>    have been faulted in yet).

Exactly. Something like the following:

  This function can't tell whether the folio is fully mapped in the range,
  as it doesn't check whether all pages of the folio are associated with
  the page table of the VMA. The caller needs to do a page table check if
  it cares about full mapping.

  Typical usage (mlock or madvise) is: the caller calls this function
  first to check whether the folio is in the range. Then, once it knows at
  least 1 page of the folio is associated with the page table of the VMA,
  it checks the page table to know whether the folio is fully mapped to
  the range.

>
>  - I think [start, end) is intended to intersect with the vma too? (although
>    I'm pretty sure the logic works if it doesn't?)
>
>> +{
>> +	pgoff_t pgoff, addr;
>> +	unsigned long vma_pglen = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
>> +
>> +	VM_WARN_ON_FOLIO(folio_test_ksm(folio), folio);
>> +	if (start > end)
>> +		return false;
>> +
>> +	if (start < vma->vm_start)
>> +		start = vma->vm_start;
>> +
>> +	if (end > vma->vm_end)
>> +		end = vma->vm_end;
>> +
>> +	pgoff = folio_pgoff(folio);
>> +
>> +	/* if folio start address is not in vma range */
>> +	if (!in_range(pgoff, vma->vm_pgoff, vma_pglen))
>> +		return false;
>> +
>> +	addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
>> +
>> +	return !(addr < start || end - addr < folio_size(folio));
>> +}
>> +
>> +static inline bool
>> +folio_within_vma(struct folio *folio, struct vm_area_struct *vma)
>
> why call this *within* but call the folio_in_range() *in*? Feels cleaner to
> use the same word for both.

Good point. I will change folio_in_range() to folio_within_range().


Regards
Yin, Fengwei

>
>> +{
>> +	return folio_in_range(folio, vma, vma->vm_start, vma->vm_end);
>> +}
>> +
>>  /*
>>   * mlock_vma_folio() and munlock_vma_folio():
>>   *  should be called with vma's mmap_lock held for read or write,
>
diff --git a/mm/internal.h b/mm/internal.h
index 154da4f0d557..5d1b71010fd2 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -585,6 +585,41 @@ extern long faultin_vma_page_range(struct vm_area_struct *vma,
 				bool write, int *locked);
 extern bool mlock_future_ok(struct mm_struct *mm, unsigned long flags,
 				unsigned long bytes);
+
+static inline bool
+folio_in_range(struct folio *folio, struct vm_area_struct *vma,
+		unsigned long start, unsigned long end)
+{
+	pgoff_t pgoff, addr;
+	unsigned long vma_pglen = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
+
+	VM_WARN_ON_FOLIO(folio_test_ksm(folio), folio);
+	if (start > end)
+		return false;
+
+	if (start < vma->vm_start)
+		start = vma->vm_start;
+
+	if (end > vma->vm_end)
+		end = vma->vm_end;
+
+	pgoff = folio_pgoff(folio);
+
+	/* if folio start address is not in vma range */
+	if (!in_range(pgoff, vma->vm_pgoff, vma_pglen))
+		return false;
+
+	addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
+
+	return !(addr < start || end - addr < folio_size(folio));
+}
+
+static inline bool
+folio_within_vma(struct folio *folio, struct vm_area_struct *vma)
+{
+	return folio_in_range(folio, vma, vma->vm_start, vma->vm_end);
+}
+
 /*
  * mlock_vma_folio() and munlock_vma_folio():
  *  should be called with vma's mmap_lock held for read or write,
It will be used to check whether the folio is mapped to a specific
VMA and whether the mapping address of the folio is in the range.

Also add a helper function folio_within_vma() to check whether a folio
is in the range of a vma, based on folio_in_range().

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
---
 mm/internal.h | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)