Message ID: 20221021163703.3218176-5-jthoughton@google.com (mailing list archive)
State: New
Series: hugetlb: introduce HugeTLB high-granularity mapping
On Fri, Oct 21, 2022 at 04:36:20PM +0000, James Houghton wrote:
> Currently this check is overly aggressive. For some userfaultfd VMAs,
> VMA sharing is disabled, yet we still widen the address range, which is
> used for flushing TLBs and sending MMU notifiers.
>
> This is done now, as HGM VMAs also have sharing disabled, yet would
> still have flush ranges adjusted. Overaggressively flushing TLBs and
> triggering MMU notifiers is particularly harmful with lots of
> high-granularity operations.
>
> Signed-off-by: James Houghton <jthoughton@google.com>

Acked-by: Peter Xu <peterx@redhat.com>
On 10/21/22 16:36, James Houghton wrote:
> Currently this check is overly aggressive. For some userfaultfd VMAs,
> VMA sharing is disabled, yet we still widen the address range, which is
> used for flushing TLBs and sending MMU notifiers.

Yes, the userfaultfd check is missing in the code today.

> This is done now, as HGM VMAs also have sharing disabled, yet would
> still have flush ranges adjusted. Overaggressively flushing TLBs and
> triggering MMU notifiers is particularly harmful with lots of
> high-granularity operations.
>
> Signed-off-by: James Houghton <jthoughton@google.com>
> ---
>  mm/hugetlb.c | 21 +++++++++++++++------
>  1 file changed, 15 insertions(+), 6 deletions(-)

Thanks,

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 20a111b532aa..52cec5b0789e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6835,22 +6835,31 @@ static unsigned long page_table_shareable(struct vm_area_struct *svma,
 	return saddr;
 }
 
-bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr)
+static bool pmd_sharing_possible(struct vm_area_struct *vma)
 {
-	unsigned long start = addr & PUD_MASK;
-	unsigned long end = start + PUD_SIZE;
-
 #ifdef CONFIG_USERFAULTFD
 	if (uffd_disable_huge_pmd_share(vma))
 		return false;
 #endif
 	/*
-	 * check on proper vm_flags and page table alignment
+	 * Only shared VMAs can share PMDs.
 	 */
 	if (!(vma->vm_flags & VM_MAYSHARE))
 		return false;
 	if (!vma->vm_private_data) /* vma lock required for sharing */
 		return false;
+	return true;
+}
+
+bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr)
+{
+	unsigned long start = addr & PUD_MASK;
+	unsigned long end = start + PUD_SIZE;
+	/*
+	 * check on proper vm_flags and page table alignment
+	 */
+	if (!pmd_sharing_possible(vma))
+		return false;
 	if (!range_in_vma(vma, start, end))
 		return false;
 	return true;
@@ -6871,7 +6880,7 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
 	 * vma needs to span at least one aligned PUD size, and the range
 	 * must be at least partially within in.
 	 */
-	if (!(vma->vm_flags & VM_MAYSHARE) || !(v_end > v_start) ||
+	if (!pmd_sharing_possible(vma) || !(v_end > v_start) ||
 	    (*end <= v_start) || (*start >= v_end))
 		return;
Currently this check is overly aggressive. For some userfaultfd VMAs,
VMA sharing is disabled, yet we still widen the address range, which is
used for flushing TLBs and sending MMU notifiers.

This is done now, as HGM VMAs also have sharing disabled, yet would
still have flush ranges adjusted. Overaggressively flushing TLBs and
triggering MMU notifiers is particularly harmful with lots of
high-granularity operations.

Signed-off-by: James Houghton <jthoughton@google.com>
---
 mm/hugetlb.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)