Message ID | 2e9996fa-d238-e7c-1194-834a2bd1f60@google.com (mailing list archive)
---|---
State | New
Series | mm: free retracted page table by RCU
On Sun, May 28, 2023 at 11:25:15PM -0700, Hugh Dickins wrote:
> Simplify shmem and file THP collapse's retract_page_tables(), and relax
> its locking: to improve its success rate and to lessen impact on others.
>
> Instead of its MADV_COLLAPSE case doing set_huge_pmd() at target_addr of
> target_mm, leave that part of the work to madvise_collapse() calling
> collapse_pte_mapped_thp() afterwards: just adjust collapse_file()'s
> result code to arrange for that. That spares retract_page_tables() four
> arguments; and since it will be successful in retracting all of the page
> tables expected of it, no need to track and return a result code itself.
>
> It needs i_mmap_lock_read(mapping) for traversing the vma interval tree,
> but it does not need i_mmap_lock_write() for that: page_vma_mapped_walk()
> allows for pte_offset_map_lock() etc to fail, and uses pmd_lock() for
> THPs. retract_page_tables() just needs to use those same spinlocks to
> exclude it briefly, while transitioning pmd from page table to none: so
> restore its use of pmd_lock() inside of which pte lock is nested.
>
> Users of pte_offset_map_lock() etc all now allow for them to fail:
> so retract_page_tables() now has no use for mmap_write_trylock() or
> vma_try_start_write(). In common with rmap and page_vma_mapped_walk(),
> it does not even need the mmap_read_lock().
>
> But those users do expect the page table to remain a good page table,
> until they unlock and rcu_read_unlock(): so the page table cannot be
> freed immediately, but rather by the recently added pte_free_defer().
>
> retract_page_tables() can be enhanced to replace_page_tables(), which
> inserts the final huge pmd without mmap lock: going through an invalid
> state instead of pmd_none() followed by fault. But that does raise some
> questions, and requires a more complicated pte_free_defer() for powerpc
> (when its arch_needs_pgtable_deposit() for shmem and file THPs). Leave
> that enhancement to a later release.
>
> Signed-off-by: Hugh Dickins <hughd@google.com>
> ---
>  mm/khugepaged.c | 169 +++++++++++++++++-------------------------------
>  1 file changed, 60 insertions(+), 109 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 1083f0e38a07..4fd408154692 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1617,9 +1617,8 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>                  break;
>          case SCAN_PMD_NONE:
>                  /*
> -                * In MADV_COLLAPSE path, possible race with khugepaged where
> -                * all pte entries have been removed and pmd cleared. If so,
> -                * skip all the pte checks and just update the pmd mapping.
> +                * All pte entries have been removed and pmd cleared.
> +                * Skip all the pte checks and just update the pmd mapping.
>                  */
>                 goto maybe_install_pmd;
>         default:
> @@ -1748,123 +1747,73 @@ static void khugepaged_collapse_pte_mapped_thps(struct khugepaged_mm_slot *mm_sl
>         mmap_write_unlock(mm);
>  }
>
> -static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
> -                              struct mm_struct *target_mm,
> -                              unsigned long target_addr, struct page *hpage,
> -                              struct collapse_control *cc)
> +static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
>  {
>         struct vm_area_struct *vma;
> -       int target_result = SCAN_FAIL;
>
> -       i_mmap_lock_write(mapping);
> +       i_mmap_lock_read(mapping);
>         vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
> -               int result = SCAN_FAIL;
> -               struct mm_struct *mm = NULL;
> -               unsigned long addr = 0;
> -               pmd_t *pmd;
> -               bool is_target = false;
> +               struct mm_struct *mm;
> +               unsigned long addr;
> +               pmd_t *pmd, pgt_pmd;
> +               spinlock_t *pml;
> +               spinlock_t *ptl;
>
>                 /*
>                  * Check vma->anon_vma to exclude MAP_PRIVATE mappings that
> -                * got written to. These VMAs are likely not worth investing
> -                * mmap_write_lock(mm) as PMD-mapping is likely to be split
> -                * later.
> +                * got written to. These VMAs are likely not worth removing
> +                * page tables from, as PMD-mapping is likely to be split later.
>                  *
> -                * Note that vma->anon_vma check is racy: it can be set up after
> -                * the check but before we took mmap_lock by the fault path.
> -                * But page lock would prevent establishing any new ptes of the
> -                * page, so we are safe.
> -                *
> -                * An alternative would be drop the check, but check that page
> -                * table is clear before calling pmdp_collapse_flush() under
> -                * ptl. It has higher chance to recover THP for the VMA, but
> -                * has higher cost too. It would also probably require locking
> -                * the anon_vma.
> +                * Note that vma->anon_vma check is racy: it can be set after
> +                * the check, but page locks (with XA_RETRY_ENTRYs in holes)
> +                * prevented establishing new ptes of the page. So we are safe
> +                * to remove page table below, without even checking it's empty.
>                  */
> -               if (READ_ONCE(vma->anon_vma)) {
> -                       result = SCAN_PAGE_ANON;
> -                       goto next;
> -               }
> +               if (READ_ONCE(vma->anon_vma))
> +                       continue;

Not directly related to the current patch, but I just realized there seems
to be a similar issue to the one ab0c3f1251b4 wanted to fix.

IIUC any shmem vma that used to have a uprobe/bp installed will have
anon_vma set here: does it then mean that any vma that used to be debugged
will never be able to merge into a thp (with either madvise or khugepaged)?

I think it'll only make a difference when the page cache was not yet huge
when the bp was uninstalled, but then somehow becomes a thp candidate.
Even so, I think the anon_vma should still be there.

Did I miss something, or maybe that's not even a problem?

> +
>                 addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
>                 if (addr & ~HPAGE_PMD_MASK ||
> -                   vma->vm_end < addr + HPAGE_PMD_SIZE) {
> -                       result = SCAN_VMA_CHECK;
> -                       goto next;
> -               }
> -               mm = vma->vm_mm;
> -               is_target = mm == target_mm && addr == target_addr;
> -               result = find_pmd_or_thp_or_none(mm, addr, &pmd);
> -               if (result != SCAN_SUCCEED)
> -                       goto next;
> -               /*
> -                * We need exclusive mmap_lock to retract page table.
> -                *
> -                * We use trylock due to lock inversion: we need to acquire
> -                * mmap_lock while holding page lock. Fault path does it in
> -                * reverse order. Trylock is a way to avoid deadlock.
> -                *
> -                * Also, it's not MADV_COLLAPSE's job to collapse other
> -                * mappings - let khugepaged take care of them later.
> -                */
> -               result = SCAN_PTE_MAPPED_HUGEPAGE;
> -               if ((cc->is_khugepaged || is_target) &&
> -                   mmap_write_trylock(mm)) {
> -                       /* trylock for the same lock inversion as above */
> -                       if (!vma_try_start_write(vma))
> -                               goto unlock_next;
> -
> -                       /*
> -                        * Re-check whether we have an ->anon_vma, because
> -                        * collapse_and_free_pmd() requires that either no
> -                        * ->anon_vma exists or the anon_vma is locked.
> -                        * We already checked ->anon_vma above, but that check
> -                        * is racy because ->anon_vma can be populated under the
> -                        * mmap lock in read mode.
> -                        */
> -                       if (vma->anon_vma) {
> -                               result = SCAN_PAGE_ANON;
> -                               goto unlock_next;
> -                       }
> -                       /*
> -                        * When a vma is registered with uffd-wp, we can't
> -                        * recycle the pmd pgtable because there can be pte
> -                        * markers installed. Skip it only, so the rest mm/vma
> -                        * can still have the same file mapped hugely, however
> -                        * it'll always mapped in small page size for uffd-wp
> -                        * registered ranges.
> -                        */
> -                       if (hpage_collapse_test_exit(mm)) {
> -                               result = SCAN_ANY_PROCESS;
> -                               goto unlock_next;
> -                       }
> -                       if (userfaultfd_wp(vma)) {
> -                               result = SCAN_PTE_UFFD_WP;
> -                               goto unlock_next;
> -                       }
> -                       collapse_and_free_pmd(mm, vma, addr, pmd);
> -                       if (!cc->is_khugepaged && is_target)
> -                               result = set_huge_pmd(vma, addr, pmd, hpage);
> -                       else
> -                               result = SCAN_SUCCEED;
> -
> -unlock_next:
> -                       mmap_write_unlock(mm);
> -                       goto next;
> -               }
> -               /*
> -                * Calling context will handle target mm/addr. Otherwise, let
> -                * khugepaged try again later.
> -                */
> -               if (!is_target) {
> -                       khugepaged_add_pte_mapped_thp(mm, addr);
> +                   vma->vm_end < addr + HPAGE_PMD_SIZE)
>                         continue;
> -               }
> -next:
> -               if (is_target)
> -                       target_result = result;
> +
> +               mm = vma->vm_mm;
> +               if (find_pmd_or_thp_or_none(mm, addr, &pmd) != SCAN_SUCCEED)
> +                       continue;
> +
> +               if (hpage_collapse_test_exit(mm))
> +                       continue;
> +               /*
> +                * When a vma is registered with uffd-wp, we cannot recycle
> +                * the page table because there may be pte markers installed.
> +                * Other vmas can still have the same file mapped hugely, but
> +                * skip this one: it will always be mapped in small page size
> +                * for uffd-wp registered ranges.
> +                *
> +                * What if VM_UFFD_WP is set a moment after this check? No
> +                * problem, huge page lock is still held, stopping new mappings
> +                * of page which might then get replaced by pte markers: only
> +                * existing markers need to be protected here. (We could check
> +                * after getting ptl below, but this comment distracting there!)
> +                */
> +               if (userfaultfd_wp(vma))
> +                       continue;

IIUC here with the new code we only hold (1) the hpage lock, and (2) the
i_mmap_lock in read mode. Could it then be possible that right after
checking this and finding !UFFD_WP, someone quickly (1) registers uffd-wp
on this vma, then does UFFDIO_WRITEPROTECT to install some pte markers,
before the pgtable locks below are taken?

The thing is, installation of pte markers may not need either of those
locks, IIUC..

Would taking the mmap read lock help in this case?
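To make the suspected window concrete, the interleaving in question would
be roughly this (a hypothetical timeline, not code from the patch):

	/*
	 *  retract_page_tables()            another thread
	 *  ---------------------            --------------
	 *  userfaultfd_wp(vma) -> false
	 *                                   UFFDIO_REGISTER vma with MODE_WP
	 *                                   UFFDIO_WRITEPROTECT
	 *                                    -> installs pte markers, perhaps
	 *                                       under pte lock only
	 *  pml = pmd_lock(mm, pmd);
	 *  pgt_pmd = pmdp_collapse_flush(vma, addr, pmd);
	 *    -> page table freed, markers lost
	 */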
Thanks,

> +
> +               /* Huge page lock is still held, so page table must be empty */
> +               pml = pmd_lock(mm, pmd);
> +               ptl = pte_lockptr(mm, pmd);
> +               if (ptl != pml)
> +                       spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
> +               pgt_pmd = pmdp_collapse_flush(vma, addr, pmd);
> +               if (ptl != pml)
> +                       spin_unlock(ptl);
> +               spin_unlock(pml);
> +
> +               mm_dec_nr_ptes(mm);
> +               page_table_check_pte_clear_range(mm, addr, pgt_pmd);
> +               pte_free_defer(mm, pmd_pgtable(pgt_pmd));
>         }
> -       i_mmap_unlock_write(mapping);
> -       return target_result;
> +       i_mmap_unlock_read(mapping);
>  }
>
>  /**
> @@ -2261,9 +2210,11 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
>
>         /*
>          * Remove pte page tables, so we can re-fault the page as huge.
> +        * If MADV_COLLAPSE, adjust result to call collapse_pte_mapped_thp().
>          */
> -       result = retract_page_tables(mapping, start, mm, addr, hpage,
> -                                    cc);
> +       retract_page_tables(mapping, start);
> +       if (cc && !cc->is_khugepaged)
> +               result = SCAN_PTE_MAPPED_HUGEPAGE;
>         unlock_page(hpage);
>
>         /*
> --
> 2.35.3
>
Thanks for looking, Peter: I was well aware of you dropping several hints
that you wanted to see what's intended before passing judgment on earlier
series, and I preferred to get on with showing this series, rather than go
into detail in responses to you there - thanks for your patience :)

On Mon, 29 May 2023, Peter Xu wrote:
> On Sun, May 28, 2023 at 11:25:15PM -0700, Hugh Dickins wrote:
...
> > @@ -1748,123 +1747,73 @@ static void khugepaged_collapse_pte_mapped_thps(struct khugepaged_mm_slot *mm_sl
> >         mmap_write_unlock(mm);
> >  }
> >
> > -static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
> > -                              struct mm_struct *target_mm,
> > -                              unsigned long target_addr, struct page *hpage,
> > -                              struct collapse_control *cc)
> > +static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
> >  {
> >         struct vm_area_struct *vma;
> > -       int target_result = SCAN_FAIL;
> >
> > -       i_mmap_lock_write(mapping);
> > +       i_mmap_lock_read(mapping);
> >         vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
> > -               int result = SCAN_FAIL;
> > -               struct mm_struct *mm = NULL;
> > -               unsigned long addr = 0;
> > -               pmd_t *pmd;
> > -               bool is_target = false;
> > +               struct mm_struct *mm;
> > +               unsigned long addr;
> > +               pmd_t *pmd, pgt_pmd;
> > +               spinlock_t *pml;
> > +               spinlock_t *ptl;
> >
> >                 /*
> >                  * Check vma->anon_vma to exclude MAP_PRIVATE mappings that
> > -                * got written to. These VMAs are likely not worth investing
> > -                * mmap_write_lock(mm) as PMD-mapping is likely to be split
> > -                * later.
> > +                * got written to. These VMAs are likely not worth removing
> > +                * page tables from, as PMD-mapping is likely to be split later.
> >                  *
> > -                * Note that vma->anon_vma check is racy: it can be set up after
> > -                * the check but before we took mmap_lock by the fault path.
> > -                * But page lock would prevent establishing any new ptes of the
> > -                * page, so we are safe.
> > -                *
> > -                * An alternative would be drop the check, but check that page
> > -                * table is clear before calling pmdp_collapse_flush() under
> > -                * ptl. It has higher chance to recover THP for the VMA, but
> > -                * has higher cost too. It would also probably require locking
> > -                * the anon_vma.
> > +                * Note that vma->anon_vma check is racy: it can be set after
> > +                * the check, but page locks (with XA_RETRY_ENTRYs in holes)
> > +                * prevented establishing new ptes of the page. So we are safe
> > +                * to remove page table below, without even checking it's empty.
> >                  */
> > -               if (READ_ONCE(vma->anon_vma)) {
> > -                       result = SCAN_PAGE_ANON;
> > -                       goto next;
> > -               }
> > +               if (READ_ONCE(vma->anon_vma))
> > +                       continue;
>
> Not directly related to the current patch, but I just realized there seems
> to be a similar issue to the one ab0c3f1251b4 wanted to fix.
>
> IIUC any shmem vma that used to have a uprobe/bp installed will have
> anon_vma set here: does it then mean that any vma that used to be debugged
> will never be able to merge into a thp (with either madvise or khugepaged)?
>
> I think it'll only make a difference when the page cache was not yet huge
> when the bp was uninstalled, but then somehow becomes a thp candidate.
> Even so, I think the anon_vma should still be there.
>
> Did I miss something, or maybe that's not even a problem?

Finding vma->anon_vma set would discourage retract_page_tables() from
doing its business with that previously uprobed area; but it does not stop
collapse_pte_mapped_thp() (which uprobes unregister calls directly) from
dealing with it, and MADV_COLLAPSE works on anon_vma'ed areas too.
It's just a heuristic in retract_page_tables(), when it chooses to skip
the anon_vma'ed areas as often not worth bothering with.

As to vma merges: I haven't actually checked since the maple tree and
other rewrites of vma merging, but previously one vma with anon_vma set
could be merged with an adjacent vma, before or after, without anon_vma
set - the anon_vma comparison is not just equality of anon_vma, but allows
NULL too - so the anon_vma will still be there, but extends to cover the
wider extent. Right, I find is_mergeable_anon_vma() still following that
rule.

(And once vmas are merged, so that the whole of the huge page falls within
a single vma, khugepaged can consider it, and do collapse_pte_mapped_thp()
on it - before or after 11/12 I think.)

As to whether it would even be a problem: generally no, the vma is
supposed just to be an internal representation, and so long as the code
resists proliferating them unnecessarily, occasional failures to merge
should not matter. The one place that forever sticks in my mind as
mattering (perhaps there are others I'm unaware of, but I'd call them
bugs) is mremap(): which is sufficiently awkward and bug-prone already,
that nobody ever had the courage to make it independent of vma boundaries;
but ideally, it's mremap() that we should fix.

But I may have written three answers, yet still missed your point.

...

> > +
> > +               mm = vma->vm_mm;
> > +               if (find_pmd_or_thp_or_none(mm, addr, &pmd) != SCAN_SUCCEED)
> > +                       continue;
> > +
> > +               if (hpage_collapse_test_exit(mm))
> > +                       continue;
> > +               /*
> > +                * When a vma is registered with uffd-wp, we cannot recycle
> > +                * the page table because there may be pte markers installed.
> > +                * Other vmas can still have the same file mapped hugely, but
> > +                * skip this one: it will always be mapped in small page size
> > +                * for uffd-wp registered ranges.
> > +                *
> > +                * What if VM_UFFD_WP is set a moment after this check? No
> > +                * problem, huge page lock is still held, stopping new mappings
> > +                * of page which might then get replaced by pte markers: only
> > +                * existing markers need to be protected here. (We could check
> > +                * after getting ptl below, but this comment distracting there!)
> > +                */
> > +               if (userfaultfd_wp(vma))
> > +                       continue;
>
> IIUC here with the new code we only hold (1) the hpage lock, and (2) the
> i_mmap_lock in read mode. Could it then be possible that right after
> checking this and finding !UFFD_WP, someone quickly (1) registers uffd-wp
> on this vma, then does UFFDIO_WRITEPROTECT to install some pte markers,
> before the pgtable locks below are taken?
>
> The thing is, installation of pte markers may not need either of those
> locks, IIUC..
>
> Would taking the mmap read lock help in this case?

Isn't my comment above it a good enough answer? If I misunderstand the
uffd-wp pte marker ("If"? certainly I don't understand it well enough, but
I may or may not be too wrong about it here), and actually it can spring
up in places where the page has not even been mapped yet, then I'd *much*
rather just move that check down into the pte_locked area, than involve
mmap read lock (which, though easier to acquire than its write lock, would
I think take us back to square 1 in terms of needing trylock); but I did
prefer not to have a big uffd-wp comment distracting from the code flow
there.

I expect now, that if I follow up UFFDIO_WRITEPROTECT, I shall indeed find
it inserting pte markers where the page has not even been mapped yet.
A "Yes" from you will save me looking, but probably I shall have to move that check down (oh well, the comment will be smaller there). Thanks, Hugh
On Mon, May 29, 2023 at 8:25 AM Hugh Dickins <hughd@google.com> wrote:
> -static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
> -                              struct mm_struct *target_mm,
> -                              unsigned long target_addr, struct page *hpage,
> -                              struct collapse_control *cc)
> +static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
>  {
>         struct vm_area_struct *vma;
> -       int target_result = SCAN_FAIL;
>
> -       i_mmap_lock_write(mapping);
> +       i_mmap_lock_read(mapping);
>         vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
> -               int result = SCAN_FAIL;
> -               struct mm_struct *mm = NULL;
> -               unsigned long addr = 0;
> -               pmd_t *pmd;
> -               bool is_target = false;
> +               struct mm_struct *mm;
> +               unsigned long addr;
> +               pmd_t *pmd, pgt_pmd;
> +               spinlock_t *pml;
> +               spinlock_t *ptl;
>
>                 /*
>                  * Check vma->anon_vma to exclude MAP_PRIVATE mappings that
> -                * got written to. These VMAs are likely not worth investing
> -                * mmap_write_lock(mm) as PMD-mapping is likely to be split
> -                * later.
> +                * got written to. These VMAs are likely not worth removing
> +                * page tables from, as PMD-mapping is likely to be split later.
>                  *
> -                * Note that vma->anon_vma check is racy: it can be set up after
> -                * the check but before we took mmap_lock by the fault path.
> -                * But page lock would prevent establishing any new ptes of the
> -                * page, so we are safe.
> -                *
> -                * An alternative would be drop the check, but check that page
> -                * table is clear before calling pmdp_collapse_flush() under
> -                * ptl. It has higher chance to recover THP for the VMA, but
> -                * has higher cost too. It would also probably require locking
> -                * the anon_vma.
> +                * Note that vma->anon_vma check is racy: it can be set after
> +                * the check, but page locks (with XA_RETRY_ENTRYs in holes)
> +                * prevented establishing new ptes of the page. So we are safe
> +                * to remove page table below, without even checking it's empty.

This "we are safe to remove page table below, without even checking it's
empty" assumes that the only way to create new anonymous PTEs is to use
existing file PTEs, right?

What about private shmem VMAs that are registered with userfaultfd as
VM_UFFD_MISSING? I think for those, the UFFDIO_COPY ioctl lets you
directly insert anonymous PTEs without looking at the mapping and its
pages (except for checking that the insertion point is before
end-of-file), protected only by mmap_lock (shared) and
pte_offset_map_lock().

>                  */
> -               if (READ_ONCE(vma->anon_vma)) {
> -                       result = SCAN_PAGE_ANON;
> -                       goto next;
> -               }
> +               if (READ_ONCE(vma->anon_vma))
> +                       continue;
> +
>                 addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
>                 if (addr & ~HPAGE_PMD_MASK ||
> -                   vma->vm_end < addr + HPAGE_PMD_SIZE) {
> -                       result = SCAN_VMA_CHECK;
> -                       goto next;
> -               }
> -               mm = vma->vm_mm;
> -               is_target = mm == target_mm && addr == target_addr;
> -               result = find_pmd_or_thp_or_none(mm, addr, &pmd);
> -               if (result != SCAN_SUCCEED)
> -                       goto next;
> -               /*
> -                * We need exclusive mmap_lock to retract page table.
> -                *
> -                * We use trylock due to lock inversion: we need to acquire
> -                * mmap_lock while holding page lock. Fault path does it in
> -                * reverse order. Trylock is a way to avoid deadlock.
> -                *
> -                * Also, it's not MADV_COLLAPSE's job to collapse other
> -                * mappings - let khugepaged take care of them later.
> -                */
> -               result = SCAN_PTE_MAPPED_HUGEPAGE;
> -               if ((cc->is_khugepaged || is_target) &&
> -                   mmap_write_trylock(mm)) {
> -                       /* trylock for the same lock inversion as above */
> -                       if (!vma_try_start_write(vma))
> -                               goto unlock_next;
> -
> -                       /*
> -                        * Re-check whether we have an ->anon_vma, because
> -                        * collapse_and_free_pmd() requires that either no
> -                        * ->anon_vma exists or the anon_vma is locked.
> -                        * We already checked ->anon_vma above, but that check
> -                        * is racy because ->anon_vma can be populated under the
> -                        * mmap lock in read mode.
> -                        */
> -                       if (vma->anon_vma) {
> -                               result = SCAN_PAGE_ANON;
> -                               goto unlock_next;
> -                       }
> -                       /*
> -                        * When a vma is registered with uffd-wp, we can't
> -                        * recycle the pmd pgtable because there can be pte
> -                        * markers installed. Skip it only, so the rest mm/vma
> -                        * can still have the same file mapped hugely, however
> -                        * it'll always mapped in small page size for uffd-wp
> -                        * registered ranges.
> -                        */
> -                       if (hpage_collapse_test_exit(mm)) {
> -                               result = SCAN_ANY_PROCESS;
> -                               goto unlock_next;
> -                       }
> -                       if (userfaultfd_wp(vma)) {
> -                               result = SCAN_PTE_UFFD_WP;
> -                               goto unlock_next;
> -                       }
> -                       collapse_and_free_pmd(mm, vma, addr, pmd);

The old code called collapse_and_free_pmd(), which involves MMU notifier
invocation...

> -                       if (!cc->is_khugepaged && is_target)
> -                               result = set_huge_pmd(vma, addr, pmd, hpage);
> -                       else
> -                               result = SCAN_SUCCEED;
> -
> -unlock_next:
> -                       mmap_write_unlock(mm);
> -                       goto next;
> -               }
> -               /*
> -                * Calling context will handle target mm/addr. Otherwise, let
> -                * khugepaged try again later.
> -                */
> -               if (!is_target) {
> -                       khugepaged_add_pte_mapped_thp(mm, addr);
> +                   vma->vm_end < addr + HPAGE_PMD_SIZE)
>                         continue;
> -               }
> -next:
> -               if (is_target)
> -                       target_result = result;
> +
> +               mm = vma->vm_mm;
> +               if (find_pmd_or_thp_or_none(mm, addr, &pmd) != SCAN_SUCCEED)
> +                       continue;
> +
> +               if (hpage_collapse_test_exit(mm))
> +                       continue;
> +               /*
> +                * When a vma is registered with uffd-wp, we cannot recycle
> +                * the page table because there may be pte markers installed.
> +                * Other vmas can still have the same file mapped hugely, but
> +                * skip this one: it will always be mapped in small page size
> +                * for uffd-wp registered ranges.
> +                *
> +                * What if VM_UFFD_WP is set a moment after this check? No
> +                * problem, huge page lock is still held, stopping new mappings
> +                * of page which might then get replaced by pte markers: only
> +                * existing markers need to be protected here. (We could check
> +                * after getting ptl below, but this comment distracting there!)
> +                */
> +               if (userfaultfd_wp(vma))
> +                       continue;
> +
> +               /* Huge page lock is still held, so page table must be empty */
> +               pml = pmd_lock(mm, pmd);
> +               ptl = pte_lockptr(mm, pmd);
> +               if (ptl != pml)
> +                       spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
> +               pgt_pmd = pmdp_collapse_flush(vma, addr, pmd);

... while the new code only does pmdp_collapse_flush(), which clears the
pmd entry and does a TLB flush, but AFAICS doesn't use MMU notifiers. My
understanding is that that's problematic - maybe (?) it is sort of okay
with regards to classic MMU notifier users like KVM, but it's probably
wrong for IOMMUv2 users, where an IOMMU directly consumes the normal page
tables?

(FWIW, last I looked, there also seemed to be some other issues with MMU
notifier usage wrt IOMMUv2, see the thread
<https://lore.kernel.org/linux-mm/Yzbaf9HW1%2FreKqR8@nvidia.com/>.)
> +               if (ptl != pml)
> +                       spin_unlock(ptl);
> +               spin_unlock(pml);
> +
> +               mm_dec_nr_ptes(mm);
> +               page_table_check_pte_clear_range(mm, addr, pgt_pmd);
> +               pte_free_defer(mm, pmd_pgtable(pgt_pmd));
>         }
> -       i_mmap_unlock_write(mapping);
> -       return target_result;
> +       i_mmap_unlock_read(mapping);
>  }
>
>  /**
> @@ -2261,9 +2210,11 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
>
>         /*
>          * Remove pte page tables, so we can re-fault the page as huge.
> +        * If MADV_COLLAPSE, adjust result to call collapse_pte_mapped_thp().
>          */
> -       result = retract_page_tables(mapping, start, mm, addr, hpage,
> -                                    cc);
> +       retract_page_tables(mapping, start);
> +       if (cc && !cc->is_khugepaged)
> +               result = SCAN_PTE_MAPPED_HUGEPAGE;
>         unlock_page(hpage);
>
>         /*
> --
> 2.35.3
>
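If an MMU notifier invalidation does turn out to be needed here, the
conventional shape would be roughly the following (a sketch against the
mmu_notifier API of this era, untested): the sleepable start/end calls
bracket the spinlocked pmd clear.

	struct mmu_notifier_range range;

	/* may sleep, so must be set up before taking pmd_lock */
	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
				addr, addr + HPAGE_PMD_SIZE);
	mmu_notifier_invalidate_range_start(&range);

	pml = pmd_lock(mm, pmd);
	ptl = pte_lockptr(mm, pmd);
	if (ptl != pml)
		spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
	pgt_pmd = pmdp_collapse_flush(vma, addr, pmd);
	if (ptl != pml)
		spin_unlock(ptl);
	spin_unlock(pml);

	mmu_notifier_invalidate_range_end(&range);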
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 1083f0e38a07..4fd408154692 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1617,9 +1617,8 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
                break;
        case SCAN_PMD_NONE:
                /*
-                * In MADV_COLLAPSE path, possible race with khugepaged where
-                * all pte entries have been removed and pmd cleared. If so,
-                * skip all the pte checks and just update the pmd mapping.
+                * All pte entries have been removed and pmd cleared.
+                * Skip all the pte checks and just update the pmd mapping.
                 */
                goto maybe_install_pmd;
        default:
@@ -1748,123 +1747,73 @@ static void khugepaged_collapse_pte_mapped_thps(struct khugepaged_mm_slot *mm_sl
        mmap_write_unlock(mm);
 }

-static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
-                              struct mm_struct *target_mm,
-                              unsigned long target_addr, struct page *hpage,
-                              struct collapse_control *cc)
+static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 {
        struct vm_area_struct *vma;
-       int target_result = SCAN_FAIL;

-       i_mmap_lock_write(mapping);
+       i_mmap_lock_read(mapping);
        vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
-               int result = SCAN_FAIL;
-               struct mm_struct *mm = NULL;
-               unsigned long addr = 0;
-               pmd_t *pmd;
-               bool is_target = false;
+               struct mm_struct *mm;
+               unsigned long addr;
+               pmd_t *pmd, pgt_pmd;
+               spinlock_t *pml;
+               spinlock_t *ptl;

                /*
                 * Check vma->anon_vma to exclude MAP_PRIVATE mappings that
-                * got written to. These VMAs are likely not worth investing
-                * mmap_write_lock(mm) as PMD-mapping is likely to be split
-                * later.
+                * got written to. These VMAs are likely not worth removing
+                * page tables from, as PMD-mapping is likely to be split later.
                 *
-                * Note that vma->anon_vma check is racy: it can be set up after
-                * the check but before we took mmap_lock by the fault path.
-                * But page lock would prevent establishing any new ptes of the
-                * page, so we are safe.
-                *
-                * An alternative would be drop the check, but check that page
-                * table is clear before calling pmdp_collapse_flush() under
-                * ptl. It has higher chance to recover THP for the VMA, but
-                * has higher cost too. It would also probably require locking
-                * the anon_vma.
+                * Note that vma->anon_vma check is racy: it can be set after
+                * the check, but page locks (with XA_RETRY_ENTRYs in holes)
+                * prevented establishing new ptes of the page. So we are safe
+                * to remove page table below, without even checking it's empty.
                 */
-               if (READ_ONCE(vma->anon_vma)) {
-                       result = SCAN_PAGE_ANON;
-                       goto next;
-               }
+               if (READ_ONCE(vma->anon_vma))
+                       continue;
+
                addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
                if (addr & ~HPAGE_PMD_MASK ||
-                   vma->vm_end < addr + HPAGE_PMD_SIZE) {
-                       result = SCAN_VMA_CHECK;
-                       goto next;
-               }
-               mm = vma->vm_mm;
-               is_target = mm == target_mm && addr == target_addr;
-               result = find_pmd_or_thp_or_none(mm, addr, &pmd);
-               if (result != SCAN_SUCCEED)
-                       goto next;
-               /*
-                * We need exclusive mmap_lock to retract page table.
-                *
-                * We use trylock due to lock inversion: we need to acquire
-                * mmap_lock while holding page lock. Fault path does it in
-                * reverse order. Trylock is a way to avoid deadlock.
-                *
-                * Also, it's not MADV_COLLAPSE's job to collapse other
-                * mappings - let khugepaged take care of them later.
-                */
-               result = SCAN_PTE_MAPPED_HUGEPAGE;
-               if ((cc->is_khugepaged || is_target) &&
-                   mmap_write_trylock(mm)) {
-                       /* trylock for the same lock inversion as above */
-                       if (!vma_try_start_write(vma))
-                               goto unlock_next;
-
-                       /*
-                        * Re-check whether we have an ->anon_vma, because
-                        * collapse_and_free_pmd() requires that either no
-                        * ->anon_vma exists or the anon_vma is locked.
-                        * We already checked ->anon_vma above, but that check
-                        * is racy because ->anon_vma can be populated under the
-                        * mmap lock in read mode.
-                        */
-                       if (vma->anon_vma) {
-                               result = SCAN_PAGE_ANON;
-                               goto unlock_next;
-                       }
-                       /*
-                        * When a vma is registered with uffd-wp, we can't
-                        * recycle the pmd pgtable because there can be pte
-                        * markers installed. Skip it only, so the rest mm/vma
-                        * can still have the same file mapped hugely, however
-                        * it'll always mapped in small page size for uffd-wp
-                        * registered ranges.
-                        */
-                       if (hpage_collapse_test_exit(mm)) {
-                               result = SCAN_ANY_PROCESS;
-                               goto unlock_next;
-                       }
-                       if (userfaultfd_wp(vma)) {
-                               result = SCAN_PTE_UFFD_WP;
-                               goto unlock_next;
-                       }
-                       collapse_and_free_pmd(mm, vma, addr, pmd);
-                       if (!cc->is_khugepaged && is_target)
-                               result = set_huge_pmd(vma, addr, pmd, hpage);
-                       else
-                               result = SCAN_SUCCEED;
-
-unlock_next:
-                       mmap_write_unlock(mm);
-                       goto next;
-               }
-               /*
-                * Calling context will handle target mm/addr. Otherwise, let
-                * khugepaged try again later.
-                */
-               if (!is_target) {
-                       khugepaged_add_pte_mapped_thp(mm, addr);
+                   vma->vm_end < addr + HPAGE_PMD_SIZE)
                        continue;
-               }
-next:
-               if (is_target)
-                       target_result = result;
+
+               mm = vma->vm_mm;
+               if (find_pmd_or_thp_or_none(mm, addr, &pmd) != SCAN_SUCCEED)
+                       continue;
+
+               if (hpage_collapse_test_exit(mm))
+                       continue;
+               /*
+                * When a vma is registered with uffd-wp, we cannot recycle
+                * the page table because there may be pte markers installed.
+                * Other vmas can still have the same file mapped hugely, but
+                * skip this one: it will always be mapped in small page size
+                * for uffd-wp registered ranges.
+                *
+                * What if VM_UFFD_WP is set a moment after this check? No
+                * problem, huge page lock is still held, stopping new mappings
+                * of page which might then get replaced by pte markers: only
+                * existing markers need to be protected here. (We could check
+                * after getting ptl below, but this comment distracting there!)
+                */
+               if (userfaultfd_wp(vma))
+                       continue;
+
+               /* Huge page lock is still held, so page table must be empty */
+               pml = pmd_lock(mm, pmd);
+               ptl = pte_lockptr(mm, pmd);
+               if (ptl != pml)
+                       spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
+               pgt_pmd = pmdp_collapse_flush(vma, addr, pmd);
+               if (ptl != pml)
+                       spin_unlock(ptl);
+               spin_unlock(pml);
+
+               mm_dec_nr_ptes(mm);
+               page_table_check_pte_clear_range(mm, addr, pgt_pmd);
+               pte_free_defer(mm, pmd_pgtable(pgt_pmd));
        }
-       i_mmap_unlock_write(mapping);
-       return target_result;
+       i_mmap_unlock_read(mapping);
 }

 /**
@@ -2261,9 +2210,11 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,

        /*
         * Remove pte page tables, so we can re-fault the page as huge.
+        * If MADV_COLLAPSE, adjust result to call collapse_pte_mapped_thp().
         */
-       result = retract_page_tables(mapping, start, mm, addr, hpage,
-                                    cc);
+       retract_page_tables(mapping, start);
+       if (cc && !cc->is_khugepaged)
+               result = SCAN_PTE_MAPPED_HUGEPAGE;
        unlock_page(hpage);

        /*
Simplify shmem and file THP collapse's retract_page_tables(), and relax
its locking: to improve its success rate and to lessen impact on others.

Instead of its MADV_COLLAPSE case doing set_huge_pmd() at target_addr of
target_mm, leave that part of the work to madvise_collapse() calling
collapse_pte_mapped_thp() afterwards: just adjust collapse_file()'s
result code to arrange for that. That spares retract_page_tables() four
arguments; and since it will be successful in retracting all of the page
tables expected of it, no need to track and return a result code itself.

It needs i_mmap_lock_read(mapping) for traversing the vma interval tree,
but it does not need i_mmap_lock_write() for that: page_vma_mapped_walk()
allows for pte_offset_map_lock() etc to fail, and uses pmd_lock() for
THPs. retract_page_tables() just needs to use those same spinlocks to
exclude it briefly, while transitioning pmd from page table to none: so
restore its use of pmd_lock() inside of which pte lock is nested.

Users of pte_offset_map_lock() etc all now allow for them to fail:
so retract_page_tables() now has no use for mmap_write_trylock() or
vma_try_start_write(). In common with rmap and page_vma_mapped_walk(),
it does not even need the mmap_read_lock().

But those users do expect the page table to remain a good page table,
until they unlock and rcu_read_unlock(): so the page table cannot be
freed immediately, but rather by the recently added pte_free_defer().

retract_page_tables() can be enhanced to replace_page_tables(), which
inserts the final huge pmd without mmap lock: going through an invalid
state instead of pmd_none() followed by fault. But that does raise some
questions, and requires a more complicated pte_free_defer() for powerpc
(when its arch_needs_pgtable_deposit() for shmem and file THPs). Leave
that enhancement to a later release.

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 mm/khugepaged.c | 169 +++++++++++++++++-------------------------------
 1 file changed, 60 insertions(+), 109 deletions(-)
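The pte_free_defer() relied on above is the RCU deferral added earlier in
this series; in outline, it queues the emptied page table for freeing only
after a grace period, so that lockless walkers still under rcu_read_lock()
see a stable (empty) table. A simplified sketch of the generic form
(assuming pgtable_t is a struct page pointer whose rcu_head is free for
this use - architectures may differ):

	/* Simplified sketch, not the exact series code */
	static void pte_free_now(struct rcu_head *head)
	{
		struct page *page = container_of(head, struct page, rcu_head);

		pgtable_pte_page_dtor(page);
		__free_page(page);
	}

	void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
	{
		/* Freed only after an RCU grace period has elapsed */
		call_rcu(&pgtable->rcu_head, pte_free_now);
	}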