> On Nov 5, 2021, at 1:38 PM, Andrew Morton <akpm@linux-foundation.org> wrote:
>
> From: Nadav Amit <namit@vmware.com>
> Subject: mm/memory.c: use correct VMA flags when freeing page-tables
>
> Consistent use of the mmu_gather interface requires a call to
> tlb_start_vma() and tlb_end_vma() for each VMA.  free_pgtables() does not
> follow this pattern.
>
> Certain architectures need tlb_start_vma() to be called in order for
> tlb_update_vma_flags() to update the VMA flags (tlb->vma_exec and
> tlb->vma_huge), which are later used for the proper TLB flush to be
> issued.  Since tlb_start_vma() is not called, this can lead to the wrong
> VMA flags being used when the flush is performed.
>
> Specifically, the munmap syscall would call unmap_region(), which unmaps
> the VMAs and then frees the page-tables.  A flush is needed after the
> page-tables are removed to prevent page-walk caches from holding stale
> entries, but this flush would use the VMA flags of the last VMA that was
> flushed.  This does not appear to be right.
>
> Use tlb_start_vma() and tlb_end_vma() to prevent this from happening.
> This might lead to unnecessary calls to flush_cache_range() on certain
> arch's.  If needed, a new flag can be added to mmu_gather to indicate that
> the flush is not needed.

Hugh correctly indicated that I made a silly bug, and this patch is not
helping.

Nothing would explode, but I assumed the patch would be dropped for me
to submit v2.

I'll send a fix to this fix instead unless it is dropped.
On Fri, Nov 5, 2021 at 1:57 PM Nadav Amit <namit@vmware.com> wrote:
>
> Hugh correctly indicated that I made a silly bug, and this patch is not
> helping.
>
> Nothing would explode, but I assumed the patch would be dropped for me
> to submit v2.
>
> I'll send a fix to this fix instead unless it is dropped.

I've dropped it.

            Linus
--- a/mm/memory.c~mm-use-correct-vma-flags-when-freeing-page-tables
+++ a/mm/memory.c
@@ -412,6 +412,8 @@ void free_pgtables(struct mmu_gather *tl
 		unlink_anon_vmas(vma);
 		unlink_file_vma(vma);
 
+		tlb_start_vma(tlb, vma);
+
 		if (is_vm_hugetlb_page(vma)) {
 			hugetlb_free_pgd_range(tlb, addr, vma->vm_end,
 				floor, next ? next->vm_start : ceiling);
@@ -429,6 +431,8 @@ void free_pgtables(struct mmu_gather *tl
 			free_pgd_range(tlb, addr, vma->vm_end,
 				floor, next ? next->vm_start : ceiling);
 		}
+
+		tlb_end_vma(tlb, vma);
 		vma = next;
 	}
 }