Message ID: 20191002134730.40985-3-thomas_os@shipmail.org (mailing list archive)
State: New, archived
Series: Emulated coherent graphics memory take 2
On Wed, Oct 2, 2019 at 6:47 AM Thomas Hellström (VMware)
<thomas_os@shipmail.org> wrote:
>
> From: Thomas Hellstrom <thellstrom@vmware.com>
>
> For users that want to traverse all page table entries pointing into a
> region of a struct address_space mapping, introduce a walk_page_mapping()
> function.

This looks non-offensive to me. My main reaction was "oh, really good
that we split up the walker ops from the mm_walk structure, this would
have been much uglier before that" due to the added vma entry/exit ops.

            Linus
On Wed, Oct 02, 2019 at 03:47:25PM +0200, Thomas Hellström (VMware) wrote:
> From: Thomas Hellstrom <thellstrom@vmware.com>
>
> For users that want to traverse all page table entries pointing into a
> region of a struct address_space mapping, introduce a walk_page_mapping()
> function.
>
> The walk_page_mapping() function will initially be used for dirty-
> tracking in virtual graphics drivers.
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Rik van Riel <riel@surriel.com>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Huang Ying <ying.huang@intel.com>
> Cc: Jérôme Glisse <jglisse@redhat.com>
> Cc: Kirill A. Shutemov <kirill@shutemov.name>
> Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
> ---
>  include/linux/pagewalk.h |  9 ++++
>  mm/pagewalk.c            | 99 +++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 107 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
> index bddd9759bab9..6ec82e92c87f 100644
> --- a/include/linux/pagewalk.h
> +++ b/include/linux/pagewalk.h
> @@ -24,6 +24,9 @@ struct mm_walk;
>   *			"do page table walk over the current vma", returning
>   *			a negative value means "abort current page table walk
>   *			right now" and returning 1 means "skip the current vma"
> + * @pre_vma:		if set, called before starting walk on a non-null vma.
> + * @post_vma:		if set, called after a walk on a non-null vma, provided
> + *			that @pre_vma and the vma walk succeeded.
>   */
>  struct mm_walk_ops {
>  	int (*pud_entry)(pud_t *pud, unsigned long addr,
> @@ -39,6 +42,9 @@ struct mm_walk_ops {
>  			 struct mm_walk *walk);
>  	int (*test_walk)(unsigned long addr, unsigned long next,
>  			 struct mm_walk *walk);
> +	int (*pre_vma)(unsigned long start, unsigned long end,
> +		       struct mm_walk *walk);
> +	void (*post_vma)(struct mm_walk *walk);
>  };
>
>  /**
> @@ -62,5 +68,8 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
>  		    void *private);
>  int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
>  		  void *private);
> +int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
> +		      pgoff_t nr, const struct mm_walk_ops *ops,
> +		      void *private);
>
>  #endif /* _LINUX_PAGEWALK_H */
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index d48c2a986ea3..658d1e5ec428 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -253,13 +253,23 @@ static int __walk_page_range(unsigned long start, unsigned long end,
>  {
>  	int err = 0;
>  	struct vm_area_struct *vma = walk->vma;
> +	const struct mm_walk_ops *ops = walk->ops;
> +
> +	if (vma && ops->pre_vma) {
> +		err = ops->pre_vma(start, end, walk);
> +		if (err)
> +			return err;
> +	}
>
>  	if (vma && is_vm_hugetlb_page(vma)) {
> -		if (walk->ops->hugetlb_entry)
> +		if (ops->hugetlb_entry)
>  			err = walk_hugetlb_range(start, end, walk);
>  	} else
>  		err = walk_pgd_range(start, end, walk);
>
> +	if (vma && ops->post_vma)
> +		ops->post_vma(walk);
> +
>  	return err;
>  }
>
> @@ -285,11 +295,17 @@ static int __walk_page_range(unsigned long start, unsigned long end,
>   *  - <0 : failed to handle the current entry, and return to the caller
>   *         with error code.
>   *
> + *
>   * Before starting to walk page table, some callers want to check whether
>   * they really want to walk over the current vma, typically by checking
>   * its vm_flags. walk_page_test() and @ops->test_walk() are used for this
>   * purpose.
>   *
> + * If operations need to be staged before and committed after a vma is walked,
> + * there are two callbacks, pre_vma() and post_vma(). Note that post_vma(),
> + * since it is intended to handle commit-type operations, can't return any
> + * errors.
> + *
>   * struct mm_walk keeps current values of some common data like vma and pmd,
>   * which are useful for the access from callbacks. If you want to pass some
>   * caller-specific data to callbacks, @private should be helpful.
> @@ -376,3 +392,84 @@ int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
>  		return err;
>  	return __walk_page_range(vma->vm_start, vma->vm_end, &walk);
>  }
> +
> +/**
> + * walk_page_mapping - walk all memory areas mapped into a struct address_space.
> + * @mapping: Pointer to the struct address_space
> + * @first_index: First page offset in the address_space
> + * @nr: Number of incremental page offsets to cover
> + * @ops: operation to call during the walk
> + * @private: private data for callbacks' usage
> + *
> + * This function walks all memory areas mapped into a struct address_space.
> + * The walk is limited to only the given page-size index range, but if
> + * the index boundaries cross a huge page-table entry, that entry will be
> + * included.
> + *
> + * Also see walk_page_range() for additional information.
> + *
> + * Locking:
> + * This function can't require that the struct mm_struct::mmap_sem is held,
> + * since @mapping may be mapped by multiple processes. Instead
> + * @mapping->i_mmap_rwsem must be held. This might have implications in the
> + * callbacks, and it's up to the caller to ensure that the
> + * struct mm_struct::mmap_sem is not needed.
> + *
> + * Also this means that a caller can't rely on the struct
> + * vm_area_struct::vm_flags to be constant across a call,
> + * except for immutable flags. Callers requiring this shouldn't use
> + * this function.
> + *
> + * If @mapping allows faulting of huge pmds and puds, it is desirable
> + * that its huge_fault() handler blocks while this function is running on
> + * @mapping. Otherwise a race may occur where the huge entry is split when
> + * it was intended to be handled in a huge entry callback. This requires an
> + * external lock, for example that @mapping->i_mmap_rwsem is held in
> + * write mode in the huge_fault() handlers.

Em. No. We have ptl for this. It's the only lock required (plus mmap_sem
on read) to split PMD entry into PTE table. And it can happen not only
from fault path.

If you care about splitting compound page under you, take a pin or lock a
page. It will block split_huge_page().

Suggestion to block fault path is not viable (and it will not happen
magically just because of this comment).

> + */
> +int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
> +		      pgoff_t nr, const struct mm_walk_ops *ops,
> +		      void *private)
> +{
> +	struct mm_walk walk = {
> +		.ops = ops,
> +		.private = private,
> +	};
> +	struct vm_area_struct *vma;
> +	pgoff_t vba, vea, cba, cea;
> +	unsigned long start_addr, end_addr;
> +	int err = 0;
> +
> +	lockdep_assert_held(&mapping->i_mmap_rwsem);
> +	vma_interval_tree_foreach(vma, &mapping->i_mmap, first_index,
> +				  first_index + nr - 1) {
> +		/* Clip to the vma */
> +		vba = vma->vm_pgoff;
> +		vea = vba + vma_pages(vma);
> +		cba = first_index;
> +		cba = max(cba, vba);
> +		cea = first_index + nr;
> +		cea = min(cea, vea);
> +
> +		start_addr = ((cba - vba) << PAGE_SHIFT) + vma->vm_start;
> +		end_addr = ((cea - vba) << PAGE_SHIFT) + vma->vm_start;
> +		if (start_addr >= end_addr)
> +			continue;
> +
> +		walk.vma = vma;
> +		walk.mm = vma->vm_mm;
> +
> +		err = walk_page_test(vma->vm_start, vma->vm_end, &walk);
> +		if (err > 0) {
> +			err = 0;
> +			break;
> +		} else if (err < 0)
> +			break;
> +
> +		err = __walk_page_range(start_addr, end_addr, &walk);
> +		if (err)
> +			break;
> +	}
> +
> +	return err;
> +}
> --
> 2.20.1
>
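To make the index clipping in walk_page_mapping() above concrete, here is a
small userspace re-implementation of the arithmetic with made-up numbers;
PAGE_SHIFT, the vma layout and the addresses are illustrative assumptions,
not values from the patch:

/* Illustrative sketch of the vba/vea/cba/cea clipping, assuming 4 KiB pages. */
#include <stdio.h>

#define PAGE_SHIFT 12

int main(void)
{
	unsigned long vm_start = 0x700000000000UL;	/* hypothetical vma start */
	unsigned long vm_pgoff = 16, vma_pages = 8;	/* vma maps file pages [16, 24) */
	unsigned long first_index = 20, nr = 16;	/* caller asks for pages [20, 36) */

	unsigned long vba = vm_pgoff;			/* 16 */
	unsigned long vea = vba + vma_pages;		/* 24 */
	unsigned long cba = first_index > vba ? first_index : vba;	/* 20 */
	unsigned long cea = first_index + nr < vea ? first_index + nr : vea; /* 24 */

	unsigned long start_addr = ((cba - vba) << PAGE_SHIFT) + vm_start;
	unsigned long end_addr = ((cea - vba) << PAGE_SHIFT) + vm_start;

	/* Prints a 4-page range: the overlap of [20, 36) with [16, 24). */
	printf("walk [%#lx, %#lx)\n", start_addr, end_addr);
	return 0;
}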
On 10/3/19 1:17 PM, Kirill A. Shutemov wrote:
> On Wed, Oct 02, 2019 at 03:47:25PM +0200, Thomas Hellström (VMware) wrote:
>> From: Thomas Hellstrom <thellstrom@vmware.com>
>>
>> For users that want to traverse all page table entries pointing into a
>> region of a struct address_space mapping, introduce a walk_page_mapping()
>> function.
>>
>> The walk_page_mapping() function will initially be used for dirty-
>> tracking in virtual graphics drivers.
>>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Matthew Wilcox <willy@infradead.org>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Rik van Riel <riel@surriel.com>
>> Cc: Minchan Kim <minchan@kernel.org>
>> Cc: Michal Hocko <mhocko@suse.com>
>> Cc: Huang Ying <ying.huang@intel.com>
>> Cc: Jérôme Glisse <jglisse@redhat.com>
>> Cc: Kirill A. Shutemov <kirill@shutemov.name>
>> Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
>> ---
>>   include/linux/pagewalk.h |  9 ++++
>>   mm/pagewalk.c            | 99 +++++++++++++++++++++++++++++++++++++++-
>>   2 files changed, 107 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
>> index bddd9759bab9..6ec82e92c87f 100644
>> --- a/include/linux/pagewalk.h
>> +++ b/include/linux/pagewalk.h
>> @@ -24,6 +24,9 @@ struct mm_walk;
>>    *			"do page table walk over the current vma", returning
>>    *			a negative value means "abort current page table walk
>>    *			right now" and returning 1 means "skip the current vma"
>> + * @pre_vma:		if set, called before starting walk on a non-null vma.
>> + * @post_vma:		if set, called after a walk on a non-null vma, provided
>> + *			that @pre_vma and the vma walk succeeded.
>>    */
>>   struct mm_walk_ops {
>>   	int (*pud_entry)(pud_t *pud, unsigned long addr,
>> @@ -39,6 +42,9 @@ struct mm_walk_ops {
>>   			 struct mm_walk *walk);
>>   	int (*test_walk)(unsigned long addr, unsigned long next,
>>   			 struct mm_walk *walk);
>> +	int (*pre_vma)(unsigned long start, unsigned long end,
>> +		       struct mm_walk *walk);
>> +	void (*post_vma)(struct mm_walk *walk);
>>   };
>>
>>   /**
>> @@ -62,5 +68,8 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
>>   		    void *private);
>>   int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
>>   		  void *private);
>> +int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
>> +		      pgoff_t nr, const struct mm_walk_ops *ops,
>> +		      void *private);
>>
>>   #endif /* _LINUX_PAGEWALK_H */
>> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
>> index d48c2a986ea3..658d1e5ec428 100644
>> --- a/mm/pagewalk.c
>> +++ b/mm/pagewalk.c
>> @@ -253,13 +253,23 @@ static int __walk_page_range(unsigned long start, unsigned long end,
>>   {
>>   	int err = 0;
>>   	struct vm_area_struct *vma = walk->vma;
>> +	const struct mm_walk_ops *ops = walk->ops;
>> +
>> +	if (vma && ops->pre_vma) {
>> +		err = ops->pre_vma(start, end, walk);
>> +		if (err)
>> +			return err;
>> +	}
>>
>>   	if (vma && is_vm_hugetlb_page(vma)) {
>> -		if (walk->ops->hugetlb_entry)
>> +		if (ops->hugetlb_entry)
>>   			err = walk_hugetlb_range(start, end, walk);
>>   	} else
>>   		err = walk_pgd_range(start, end, walk);
>>
>> +	if (vma && ops->post_vma)
>> +		ops->post_vma(walk);
>> +
>>   	return err;
>>   }
>>
>> @@ -285,11 +295,17 @@ static int __walk_page_range(unsigned long start, unsigned long end,
>>    *  - <0 : failed to handle the current entry, and return to the caller
>>    *         with error code.
>>    *
>> + *
>>    * Before starting to walk page table, some callers want to check whether
>>    * they really want to walk over the current vma, typically by checking
>>    * its vm_flags. walk_page_test() and @ops->test_walk() are used for this
>>    * purpose.
>>    *
>> + * If operations need to be staged before and committed after a vma is walked,
>> + * there are two callbacks, pre_vma() and post_vma(). Note that post_vma(),
>> + * since it is intended to handle commit-type operations, can't return any
>> + * errors.
>> + *
>>    * struct mm_walk keeps current values of some common data like vma and pmd,
>>    * which are useful for the access from callbacks. If you want to pass some
>>    * caller-specific data to callbacks, @private should be helpful.
>> @@ -376,3 +392,84 @@ int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
>>   		return err;
>>   	return __walk_page_range(vma->vm_start, vma->vm_end, &walk);
>>   }
>> +
>> +/**
>> + * walk_page_mapping - walk all memory areas mapped into a struct address_space.
>> + * @mapping: Pointer to the struct address_space
>> + * @first_index: First page offset in the address_space
>> + * @nr: Number of incremental page offsets to cover
>> + * @ops: operation to call during the walk
>> + * @private: private data for callbacks' usage
>> + *
>> + * This function walks all memory areas mapped into a struct address_space.
>> + * The walk is limited to only the given page-size index range, but if
>> + * the index boundaries cross a huge page-table entry, that entry will be
>> + * included.
>> + *
>> + * Also see walk_page_range() for additional information.
>> + *
>> + * Locking:
>> + * This function can't require that the struct mm_struct::mmap_sem is held,
>> + * since @mapping may be mapped by multiple processes. Instead
>> + * @mapping->i_mmap_rwsem must be held. This might have implications in the
>> + * callbacks, and it's up to the caller to ensure that the
>> + * struct mm_struct::mmap_sem is not needed.
>> + *
>> + * Also this means that a caller can't rely on the struct
>> + * vm_area_struct::vm_flags to be constant across a call,
>> + * except for immutable flags. Callers requiring this shouldn't use
>> + * this function.
>> + *
>> + * If @mapping allows faulting of huge pmds and puds, it is desirable
>> + * that its huge_fault() handler blocks while this function is running on
>> + * @mapping. Otherwise a race may occur where the huge entry is split when
>> + * it was intended to be handled in a huge entry callback. This requires an
>> + * external lock, for example that @mapping->i_mmap_rwsem is held in
>> + * write mode in the huge_fault() handlers.
> Em. No. We have ptl for this. It's the only lock required (plus mmap_sem
> on read) to split PMD entry into PTE table. And it can happen not only
> from fault path.
>
> If you care about splitting compound page under you, take a pin or lock a
> page. It will block split_huge_page().
>
> Suggestion to block fault path is not viable (and it will not happen
> magically just because of this comment).
>
I was specifically thinking of this:

https://elixir.bootlin.com/linux/latest/source/mm/pagewalk.c#L103

If a huge pud is concurrently faulted in here, it will immediately get split
without getting processed in pud_entry(). An external lock would protect
against that, but that's perhaps a bug in the pagewalk code? For pmds the
situation is not the same since when pte_entry is used, all pmds will
unconditionally get split.

There's a similar, more scary race in

https://elixir.bootlin.com/linux/latest/source/mm/memory.c#L3931

It looks like if a concurrent thread faults in a huge pud just after the
test for pud_none in that pmd_alloc, things might go pretty bad.

/Thomas
On Thu, Oct 03, 2019 at 01:32:45PM +0200, Thomas Hellström (VMware) wrote:
> > > + * If @mapping allows faulting of huge pmds and puds, it is desirable
> > > + * that its huge_fault() handler blocks while this function is running on
> > > + * @mapping. Otherwise a race may occur where the huge entry is split when
> > > + * it was intended to be handled in a huge entry callback. This requires an
> > > + * external lock, for example that @mapping->i_mmap_rwsem is held in
> > > + * write mode in the huge_fault() handlers.
> > Em. No. We have ptl for this. It's the only lock required (plus mmap_sem
> > on read) to split PMD entry into PTE table. And it can happen not only
> > from fault path.
> >
> > If you care about splitting compound page under you, take a pin or lock a
> > page. It will block split_huge_page().
> >
> > Suggestion to block fault path is not viable (and it will not happen
> > magically just because of this comment).
> >
> I was specifically thinking of this:
>
> https://elixir.bootlin.com/linux/latest/source/mm/pagewalk.c#L103
>
> If a huge pud is concurrently faulted in here, it will immediately get split
> without getting processed in pud_entry(). An external lock would protect
> against that, but that's perhaps a bug in the pagewalk code? For pmds the
> situation is not the same since when pte_entry is used, all pmds will
> unconditionally get split.

I *think* it should be fixed with something like this (there's no
pud_trans_unstable() yet):

diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index d48c2a986ea3..221a3b945f42 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -102,10 +102,11 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 					break;
 				continue;
 			}
+		} else {
+			split_huge_pud(walk->vma, pud, addr);
 		}
 
-		split_huge_pud(walk->vma, pud, addr);
-		if (pud_none(*pud))
+		if (pud_none(*pud) || pud_trans_unstable(*pud))
 			goto again;
 
 		if (ops->pmd_entry || ops->pte_entry)

Or better yet converted to what we do on pmd level.

Honestly, all the code around PUD THP is missing a lot of groundwork.
Rushing it upstream for DAX was not the right move.

> There's a similar, more scary race in
>
> https://elixir.bootlin.com/linux/latest/source/mm/memory.c#L3931
>
> It looks like if a concurrent thread faults in a huge pud just after the
> test for pud_none in that pmd_alloc, things might go pretty bad.

Hm? It will fail the next pmd_none() check under ptl. Do you have a
particular racing scenario?
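As background for the diff above: a pud_trans_unstable() helper did not
exist in the tree at this point, so everything below is an assumption, a
minimal sketch of what such a helper might look like if modeled on the
existing pmd_trans_unstable() pattern. The names pud_trans_huge(),
pud_devmap() and pud_clear_bad() do exist in this era's kernel; the helper
itself is hypothetical:

/* Hypothetical sketch only, modeled on pmd_trans_unstable(). Without the
 * relevant ptl, a huge pud can be faulted in, split or cleared under us,
 * so treat none/huge/devmap (and bad) puds as "unstable". */
static inline int pud_trans_unstable(pud_t *pud)
{
#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \
	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
	pud_t pudval = READ_ONCE(*pud);

	if (pud_none(pudval) || pud_trans_huge(pudval) || pud_devmap(pudval))
		return 1;
	if (unlikely(pud_bad(pudval))) {
		pud_clear_bad(pud);
		return 1;
	}
#endif
	return 0;
}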
On 10/4/19 2:37 PM, Kirill A. Shutemov wrote:
> On Thu, Oct 03, 2019 at 01:32:45PM +0200, Thomas Hellström (VMware) wrote:
>>>> + * If @mapping allows faulting of huge pmds and puds, it is desirable
>>>> + * that its huge_fault() handler blocks while this function is running on
>>>> + * @mapping. Otherwise a race may occur where the huge entry is split when
>>>> + * it was intended to be handled in a huge entry callback. This requires an
>>>> + * external lock, for example that @mapping->i_mmap_rwsem is held in
>>>> + * write mode in the huge_fault() handlers.
>>> Em. No. We have ptl for this. It's the only lock required (plus mmap_sem
>>> on read) to split PMD entry into PTE table. And it can happen not only
>>> from fault path.
>>>
>>> If you care about splitting compound page under you, take a pin or lock a
>>> page. It will block split_huge_page().
>>>
>>> Suggestion to block fault path is not viable (and it will not happen
>>> magically just because of this comment).
>>>
>> I was specifically thinking of this:
>>
>> https://elixir.bootlin.com/linux/latest/source/mm/pagewalk.c#L103
>>
>> If a huge pud is concurrently faulted in here, it will immediately get split
>> without getting processed in pud_entry(). An external lock would protect
>> against that, but that's perhaps a bug in the pagewalk code? For pmds the
>> situation is not the same since when pte_entry is used, all pmds will
>> unconditionally get split.
> I *think* it should be fixed with something like this (there's no
> pud_trans_unstable() yet):
>
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index d48c2a986ea3..221a3b945f42 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -102,10 +102,11 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
>  					break;
>  				continue;
>  			}
> +		} else {
> +			split_huge_pud(walk->vma, pud, addr);
>  		}
>
> -		split_huge_pud(walk->vma, pud, addr);
> -		if (pud_none(*pud))
> +		if (pud_none(*pud) || pud_trans_unstable(*pud))
>  			goto again;
>
>  		if (ops->pmd_entry || ops->pte_entry)

Yes, this seems better. I was looking at implementing a pud_trans_unstable()
as a basis for fixing problems like this, but when I looked at
pmd_trans_unstable I got a bit confused:

Why are devmap huge pmds considered stable? I mean, couldn't anybody just
run madvise() to clear those just like transhuge pmds?

> Or better yet converted to what we do on pmd level.
>
> Honestly, all the code around PUD THP is missing a lot of groundwork.
> Rushing it upstream for DAX was not the right move.
>
>> There's a similar, more scary race in
>>
>> https://elixir.bootlin.com/linux/latest/source/mm/memory.c#L3931
>>
>> It looks like if a concurrent thread faults in a huge pud just after the
>> test for pud_none in that pmd_alloc, things might go pretty bad.
> Hm? It will fail the next pmd_none() check under ptl. Do you have a
> particular racing scenario?
>
Yes, I misinterpreted the code somewhat, but here's the scenario that
looks racy:

Thread 1                           Thread 2
huge_fault(pud) - Fell back, for
example because of write fault on
dirty-tracking.
                                   huge_fault(pud) - Taken, read fault.
pmd_alloc() - Will fail pmd_none
check and return a pmd_offset()
into thread 2's THP.

Thanks,
Thomas
On Fri, Oct 04, 2019 at 02:58:59PM +0200, Thomas Hellström (VMware) wrote:
> On 10/4/19 2:37 PM, Kirill A. Shutemov wrote:
> > On Thu, Oct 03, 2019 at 01:32:45PM +0200, Thomas Hellström (VMware) wrote:
> > > > > + * If @mapping allows faulting of huge pmds and puds, it is desirable
> > > > > + * that its huge_fault() handler blocks while this function is running on
> > > > > + * @mapping. Otherwise a race may occur where the huge entry is split when
> > > > > + * it was intended to be handled in a huge entry callback. This requires an
> > > > > + * external lock, for example that @mapping->i_mmap_rwsem is held in
> > > > > + * write mode in the huge_fault() handlers.
> > > > Em. No. We have ptl for this. It's the only lock required (plus mmap_sem
> > > > on read) to split PMD entry into PTE table. And it can happen not only
> > > > from fault path.
> > > >
> > > > If you care about splitting compound page under you, take a pin or lock a
> > > > page. It will block split_huge_page().
> > > >
> > > > Suggestion to block fault path is not viable (and it will not happen
> > > > magically just because of this comment).
> > > >
> > > I was specifically thinking of this:
> > >
> > > https://elixir.bootlin.com/linux/latest/source/mm/pagewalk.c#L103
> > >
> > > If a huge pud is concurrently faulted in here, it will immediately get split
> > > without getting processed in pud_entry(). An external lock would protect
> > > against that, but that's perhaps a bug in the pagewalk code? For pmds the
> > > situation is not the same since when pte_entry is used, all pmds will
> > > unconditionally get split.
> > I *think* it should be fixed with something like this (there's no
> > pud_trans_unstable() yet):
> >
> > diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> > index d48c2a986ea3..221a3b945f42 100644
> > --- a/mm/pagewalk.c
> > +++ b/mm/pagewalk.c
> > @@ -102,10 +102,11 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
> >  					break;
> >  				continue;
> >  			}
> > +		} else {
> > +			split_huge_pud(walk->vma, pud, addr);
> >  		}
> >
> > -		split_huge_pud(walk->vma, pud, addr);
> > -		if (pud_none(*pud))
> > +		if (pud_none(*pud) || pud_trans_unstable(*pud))
> >  			goto again;
> >
> >  		if (ops->pmd_entry || ops->pte_entry)
>
> Yes, this seems better. I was looking at implementing a pud_trans_unstable()
> as a basis for fixing problems like this, but when I looked at
> pmd_trans_unstable I got a bit confused:
>
> Why are devmap huge pmds considered stable? I mean, couldn't anybody just
> run madvise() to clear those just like transhuge pmds?

Matthew, Dan, could you comment on this?

> > Or better yet converted to what we do on pmd level.
> >
> > Honestly, all the code around PUD THP is missing a lot of groundwork.
> > Rushing it upstream for DAX was not the right move.
> >
> > > There's a similar, more scary race in
> > >
> > > https://elixir.bootlin.com/linux/latest/source/mm/memory.c#L3931
> > >
> > > It looks like if a concurrent thread faults in a huge pud just after the
> > > test for pud_none in that pmd_alloc, things might go pretty bad.
> > Hm? It will fail the next pmd_none() check under ptl. Do you have a
> > particular racing scenario?
> >
> Yes, I misinterpreted the code somewhat, but here's the scenario that
> looks racy:
>
> Thread 1                           Thread 2
> huge_fault(pud) - Fell back, for
> example because of write fault on
> dirty-tracking.
>                                    huge_fault(pud) - Taken, read fault.
> pmd_alloc() - Will fail pmd_none
> check and return a pmd_offset()

I see. It also misses a pud_trans_unstable() check or its variant.
diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
index bddd9759bab9..6ec82e92c87f 100644
--- a/include/linux/pagewalk.h
+++ b/include/linux/pagewalk.h
@@ -24,6 +24,9 @@ struct mm_walk;
  *			"do page table walk over the current vma", returning
  *			a negative value means "abort current page table walk
  *			right now" and returning 1 means "skip the current vma"
+ * @pre_vma:		if set, called before starting walk on a non-null vma.
+ * @post_vma:		if set, called after a walk on a non-null vma, provided
+ *			that @pre_vma and the vma walk succeeded.
  */
 struct mm_walk_ops {
 	int (*pud_entry)(pud_t *pud, unsigned long addr,
@@ -39,6 +42,9 @@ struct mm_walk_ops {
 			 struct mm_walk *walk);
 	int (*test_walk)(unsigned long addr, unsigned long next,
 			 struct mm_walk *walk);
+	int (*pre_vma)(unsigned long start, unsigned long end,
+		       struct mm_walk *walk);
+	void (*post_vma)(struct mm_walk *walk);
 };
 
 /**
@@ -62,5 +68,8 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
 		    void *private);
 int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
 		  void *private);
+int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
+		      pgoff_t nr, const struct mm_walk_ops *ops,
+		      void *private);
 
 #endif /* _LINUX_PAGEWALK_H */
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index d48c2a986ea3..658d1e5ec428 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -253,13 +253,23 @@ static int __walk_page_range(unsigned long start, unsigned long end,
 {
 	int err = 0;
 	struct vm_area_struct *vma = walk->vma;
+	const struct mm_walk_ops *ops = walk->ops;
+
+	if (vma && ops->pre_vma) {
+		err = ops->pre_vma(start, end, walk);
+		if (err)
+			return err;
+	}
 
 	if (vma && is_vm_hugetlb_page(vma)) {
-		if (walk->ops->hugetlb_entry)
+		if (ops->hugetlb_entry)
 			err = walk_hugetlb_range(start, end, walk);
 	} else
 		err = walk_pgd_range(start, end, walk);
 
+	if (vma && ops->post_vma)
+		ops->post_vma(walk);
+
 	return err;
 }
 
@@ -285,11 +295,17 @@ static int __walk_page_range(unsigned long start, unsigned long end,
  *  - <0 : failed to handle the current entry, and return to the caller
  *         with error code.
  *
+ *
  * Before starting to walk page table, some callers want to check whether
  * they really want to walk over the current vma, typically by checking
  * its vm_flags. walk_page_test() and @ops->test_walk() are used for this
  * purpose.
  *
+ * If operations need to be staged before and committed after a vma is walked,
+ * there are two callbacks, pre_vma() and post_vma(). Note that post_vma(),
+ * since it is intended to handle commit-type operations, can't return any
+ * errors.
+ *
  * struct mm_walk keeps current values of some common data like vma and pmd,
  * which are useful for the access from callbacks. If you want to pass some
  * caller-specific data to callbacks, @private should be helpful.
@@ -376,3 +392,84 @@ int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
 		return err;
 	return __walk_page_range(vma->vm_start, vma->vm_end, &walk);
 }
+
+/**
+ * walk_page_mapping - walk all memory areas mapped into a struct address_space.
+ * @mapping: Pointer to the struct address_space
+ * @first_index: First page offset in the address_space
+ * @nr: Number of incremental page offsets to cover
+ * @ops: operation to call during the walk
+ * @private: private data for callbacks' usage
+ *
+ * This function walks all memory areas mapped into a struct address_space.
+ * The walk is limited to only the given page-size index range, but if
+ * the index boundaries cross a huge page-table entry, that entry will be
+ * included.
+ *
+ * Also see walk_page_range() for additional information.
+ *
+ * Locking:
+ * This function can't require that the struct mm_struct::mmap_sem is held,
+ * since @mapping may be mapped by multiple processes. Instead
+ * @mapping->i_mmap_rwsem must be held. This might have implications in the
+ * callbacks, and it's up to the caller to ensure that the
+ * struct mm_struct::mmap_sem is not needed.
+ *
+ * Also this means that a caller can't rely on the struct
+ * vm_area_struct::vm_flags to be constant across a call,
+ * except for immutable flags. Callers requiring this shouldn't use
+ * this function.
+ *
+ * If @mapping allows faulting of huge pmds and puds, it is desirable
+ * that its huge_fault() handler blocks while this function is running on
+ * @mapping. Otherwise a race may occur where the huge entry is split when
+ * it was intended to be handled in a huge entry callback. This requires an
+ * external lock, for example that @mapping->i_mmap_rwsem is held in
+ * write mode in the huge_fault() handlers.
+ */
+int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
+		      pgoff_t nr, const struct mm_walk_ops *ops,
+		      void *private)
+{
+	struct mm_walk walk = {
+		.ops = ops,
+		.private = private,
+	};
+	struct vm_area_struct *vma;
+	pgoff_t vba, vea, cba, cea;
+	unsigned long start_addr, end_addr;
+	int err = 0;
+
+	lockdep_assert_held(&mapping->i_mmap_rwsem);
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, first_index,
+				  first_index + nr - 1) {
+		/* Clip to the vma */
+		vba = vma->vm_pgoff;
+		vea = vba + vma_pages(vma);
+		cba = first_index;
+		cba = max(cba, vba);
+		cea = first_index + nr;
+		cea = min(cea, vea);
+
+		start_addr = ((cba - vba) << PAGE_SHIFT) + vma->vm_start;
+		end_addr = ((cea - vba) << PAGE_SHIFT) + vma->vm_start;
+		if (start_addr >= end_addr)
+			continue;
+
+		walk.vma = vma;
+		walk.mm = vma->vm_mm;
+
+		err = walk_page_test(vma->vm_start, vma->vm_end, &walk);
+		if (err > 0) {
+			err = 0;
+			break;
+		} else if (err < 0)
+			break;
+
+		err = __walk_page_range(start_addr, end_addr, &walk);
+		if (err)
+			break;
+	}
+
+	return err;
+}
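For illustration, here is a minimal, hypothetical caller-side sketch of the
new interface. The function and callback names are invented, and the sketch
assumes only the documented walk_page_mapping() semantics above, in
particular that i_mmap_rwsem is held across the call:

/* Hypothetical example (not from the patch): count present ptes mapping
 * pages [first, first + nr) of @mapping, using the new interface. */
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagewalk.h>

static int count_pte(pte_t *pte, unsigned long addr, unsigned long next,
		     struct mm_walk *walk)
{
	unsigned long *nptes = walk->private;

	/* Called under the page table lock for each pte in the range. */
	if (pte_present(*pte))
		(*nptes)++;
	return 0;
}

static int count_pre_vma(unsigned long start, unsigned long end,
			 struct mm_walk *walk)
{
	/* Stage per-vma work here; returning an error aborts the walk. */
	return 0;
}

static void count_post_vma(struct mm_walk *walk)
{
	/* Commit per-vma work here; no error can be returned at this point. */
}

static const struct mm_walk_ops count_ops = {
	.pte_entry	= count_pte,
	.pre_vma	= count_pre_vma,
	.post_vma	= count_post_vma,
};

static unsigned long count_mapped_ptes(struct address_space *mapping,
				       pgoff_t first, pgoff_t nr)
{
	unsigned long nptes = 0;

	/* walk_page_mapping() asserts that i_mmap_rwsem is held. */
	i_mmap_lock_read(mapping);
	walk_page_mapping(mapping, first, nr, &count_ops, &nptes);
	i_mmap_unlock_read(mapping);

	return nptes;
}

In a real user such as the graphics dirty tracking mentioned in the commit
message, pre_vma() and post_vma() would presumably stage and commit per-vma
state, while pte_entry() inspects or modifies the ptes under the page table
lock.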