| Message ID | 20230721094043.2506691-4-fengwei.yin@intel.com (mailing list archive) |
|---|---|
| State | New |
| Series | fix large folio for madvise_cold_or_pageout() |
On Fri, Jul 21, 2023 at 3:41 AM Yin Fengwei <fengwei.yin@intel.com> wrote:
>
> It will be used to check whether the folio is mapped to specific
> VMA and whether the mapping address of folio is in the range.
>
> Also a helper function folio_within_vma() to check whether folio
> is in the range of vma based on folio_in_range().
>
> Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
> ---
>  mm/internal.h | 32 ++++++++++++++++++++++++++++++++
>  1 file changed, 32 insertions(+)
>
> diff --git a/mm/internal.h b/mm/internal.h
> index 483add0bfb28..c7dd15d8de3e 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -585,6 +585,38 @@ extern long faultin_vma_page_range(struct vm_area_struct *vma,
>                                     bool write, int *locked);
>  extern bool mlock_future_ok(struct mm_struct *mm, unsigned long flags,
>                              unsigned long bytes);
> +
> +static inline bool
> +folio_in_range(struct folio *folio, struct vm_area_struct *vma,
> +               unsigned long start, unsigned long end)
> +{
> +       pgoff_t pgoff, addr;
> +       unsigned long vma_pglen = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
> +
> +       VM_WARN_ON_FOLIO(folio_test_ksm(folio), folio);
> +       if (start < vma->vm_start)
> +               start = vma->vm_start;
> +
> +       if (end > vma->vm_end)
> +               end = vma->vm_end;
> +
> +       pgoff = folio_pgoff(folio);
> +
> +       /* if folio start address is not in vma range */
> +       if (pgoff < vma->vm_pgoff || pgoff > vma->vm_pgoff + vma_pglen)
> +               return false;
> +
> +       addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
> +
> +       return ((addr >= start) && (addr + folio_size(folio) <= end));

Not sure how much we care, but better to avoid (addr + folio_size()),
since it might wrap to 0 on 32-bit systems.

Reusing some helpers from mm/internal.h, e.g., vma_pgoff_address(),
would be great, if it's possible (I'm not sure it is).
```diff
diff --git a/mm/internal.h b/mm/internal.h
index 483add0bfb28..c7dd15d8de3e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -585,6 +585,38 @@ extern long faultin_vma_page_range(struct vm_area_struct *vma,
 				   bool write, int *locked);
 extern bool mlock_future_ok(struct mm_struct *mm, unsigned long flags,
 			    unsigned long bytes);
+
+static inline bool
+folio_in_range(struct folio *folio, struct vm_area_struct *vma,
+		unsigned long start, unsigned long end)
+{
+	pgoff_t pgoff, addr;
+	unsigned long vma_pglen = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
+
+	VM_WARN_ON_FOLIO(folio_test_ksm(folio), folio);
+	if (start < vma->vm_start)
+		start = vma->vm_start;
+
+	if (end > vma->vm_end)
+		end = vma->vm_end;
+
+	pgoff = folio_pgoff(folio);
+
+	/* if folio start address is not in vma range */
+	if (pgoff < vma->vm_pgoff || pgoff > vma->vm_pgoff + vma_pglen)
+		return false;
+
+	addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
+
+	return ((addr >= start) && (addr + folio_size(folio) <= end));
+}
+
+static inline bool
+folio_within_vma(struct folio *folio, struct vm_area_struct *vma)
+{
+	return folio_in_range(folio, vma, vma->vm_start, vma->vm_end);
+}
+
 /*
  * mlock_vma_folio() and munlock_vma_folio():
  * should be called with vma's mmap_lock held for read or write,
```
It will be used to check whether the folio is mapped to specific
VMA and whether the mapping address of folio is in the range.

Also a helper function folio_within_vma() to check whether folio
is in the range of vma based on folio_in_range().

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
---
 mm/internal.h | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)