| Message ID | 20230731074829.79309-2-wangkefeng.wang@huawei.com (mailing list archive) |
| --- | --- |
| State | New |
| Series | mm: mremap: fix move page tables |
On 07/31/23 15:48, Kefeng Wang wrote:
> Archs may need to do special things when flushing hugepage tlb,
> so use the more applicable flush_hugetlb_tlb_range() instead of
> flush_tlb_range().
>
> Fixes: 550a7d60bd5e ("mm, hugepages: add mremap() support for hugepage backed vma")
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>

Thanks!

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>

Although, I missed this in 550a7d60bd5e :(

Looks like only powerpc provides an arch-specific flush_hugetlb_tlb_range()
today.
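For context, on every other architecture flush_hugetlb_tlb_range() simply aliases to flush_tlb_range(). A minimal sketch of that fallback pattern, as it appears in mm/hugetlb.c (exact placement and comments in the tree may differ):

```c
/*
 * Sketch of the generic default: unless the architecture defines its
 * own flush_hugetlb_tlb_range() -- today only powerpc does -- it
 * falls back to a plain flush_tlb_range().
 */
#ifndef flush_hugetlb_tlb_range
#define flush_hugetlb_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
#endif
```

This is why the patch is a no-op on most architectures: the substitution only changes behavior where an arch override exists.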
> On Jul 31, 2023, at 15:48, Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>
> Archs may need to do special things when flushing hugepage tlb,
> so use the more applicable flush_hugetlb_tlb_range() instead of
> flush_tlb_range().
>
> Fixes: 550a7d60bd5e ("mm, hugepages: add mremap() support for hugepage backed vma")
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>

Acked-by: Muchun Song <songmuchun@bytedance.com>
On Mon, Jul 31, 2023 at 4:40 PM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 07/31/23 15:48, Kefeng Wang wrote:
> > Archs may need to do special things when flushing hugepage tlb,
> > so use the more applicable flush_hugetlb_tlb_range() instead of
> > flush_tlb_range().
> >
> > Fixes: 550a7d60bd5e ("mm, hugepages: add mremap() support for hugepage backed vma")
> > Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>
> Thanks!
>
> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>

Sorry for jumping in late, but given the concerns raised around HGM and the
deviation between hugetlb and the rest of MM, does it make sense to make an
incremental effort toward avoiding hugetlb specialization?

In the context of this patch, I would prefer that the arch upgrade
flush_tlb_range() to handle hugetlb correctly, instead of adding more
hugetlb-specific deviations like flush_hugetlb_tlb_range(). While at it,
maybe replace the existing uses of flush_hugetlb_tlb_range() with
flush_tlb_range(). That said, I don't have the expertise to judge whether
upgrading flush_tlb_range() to handle hugetlb is easy, or feasible at all.

> Although, I missed this in 550a7d60bd5e :(
>
> Looks like only powerpc provides an arch specific flush_hugetlb_tlb_range
> today.
> --
> Mike Kravetz
>
> > ---
> >  mm/hugetlb.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 64a3239b6407..ac876bfba340 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -5281,9 +5281,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
> >  	}
> >
> >  	if (shared_pmd)
> > -		flush_tlb_range(vma, range.start, range.end);
> > +		flush_hugetlb_tlb_range(vma, range.start, range.end);
> >  	else
> > -		flush_tlb_range(vma, old_end - len, old_end);
> > +		flush_hugetlb_tlb_range(vma, old_end - len, old_end);
> >  	mmu_notifier_invalidate_range_end(&range);
> >  	i_mmap_unlock_write(mapping);
> >  	hugetlb_vma_unlock_write(vma);
> > --
> > 2.41.0
> >
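To make the suggested alternative concrete, here is a purely hypothetical sketch of an arch-level flush_tlb_range() that detects hugetlb VMAs itself, so generic mm code would never need a hugetlb-specific variant. This is not actual kernel code: is_vm_hugetlb_page() is the real predicate and radix__flush_hugetlb_tlb_range() is powerpc's existing helper, but the wiring shown is invented for illustration.

```c
/*
 * Hypothetical sketch only -- not actual kernel code. It illustrates
 * the suggestion above: let the architecture's flush_tlb_range()
 * recognize hugetlb VMAs itself instead of exposing a separate
 * flush_hugetlb_tlb_range() to generic mm code.
 */
void flush_tlb_range(struct vm_area_struct *vma,
		     unsigned long start, unsigned long end)
{
	if (is_vm_hugetlb_page(vma)) {
		/* hugetlb-aware path, e.g. powerpc radix */
		radix__flush_hugetlb_tlb_range(vma, start, end);
		return;
	}
	/* ... existing non-hugetlb flush path ... */
}
```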
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 64a3239b6407..ac876bfba340 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5281,9 +5281,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 	}
 
 	if (shared_pmd)
-		flush_tlb_range(vma, range.start, range.end);
+		flush_hugetlb_tlb_range(vma, range.start, range.end);
 	else
-		flush_tlb_range(vma, old_end - len, old_end);
+		flush_hugetlb_tlb_range(vma, old_end - len, old_end);
 	mmu_notifier_invalidate_range_end(&range);
 	i_mmap_unlock_write(mapping);
 	hugetlb_vma_unlock_write(vma);
Archs may need to do special things when flushing hugepage tlb, so use
the more applicable flush_hugetlb_tlb_range() instead of
flush_tlb_range().

Fixes: 550a7d60bd5e ("mm, hugepages: add mremap() support for hugepage backed vma")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/hugetlb.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)