| Message ID | 20250217140419.1702389-4-ryan.roberts@arm.com (mailing list archive) |
|---|---|
| State | New |
| Series | Fixes for hugetlb and vmalloc on arm64 |
On 2/17/25 19:34, Ryan Roberts wrote:
> commit c910f2b65518 ("arm64/mm: Update tlb invalidation routines for
> FEAT_LPA2") changed the "invalidation level unknown" hint from 0 to
> TLBI_TTL_UNKNOWN (INT_MAX). But the fallback "unknown level" path in
> flush_hugetlb_tlb_range() was not updated. So as it stands, when trying
> to invalidate CONT_PMD_SIZE or CONT_PTE_SIZE hugetlb mappings, we will
> spuriously try to invalidate at level 0 on LPA2-enabled systems.
>
> Fix this so that the fallback passes TLBI_TTL_UNKNOWN, and while we are
> at it, explicitly use the correct stride and level for CONT_PMD_SIZE and
> CONT_PTE_SIZE, which should provide a minor optimization.
>
> Cc: stable@vger.kernel.org
> Fixes: c910f2b65518 ("arm64/mm: Update tlb invalidation routines for FEAT_LPA2")
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>

LGTM

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>

> ---
>  arch/arm64/include/asm/hugetlb.h | 22 ++++++++++++++++------
>  1 file changed, 16 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
> index 03db9cb21ace..07fbf5bf85a7 100644
> --- a/arch/arm64/include/asm/hugetlb.h
> +++ b/arch/arm64/include/asm/hugetlb.h
> @@ -76,12 +76,22 @@ static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
>  {
>  	unsigned long stride = huge_page_size(hstate_vma(vma));
>
> -	if (stride == PMD_SIZE)
> -		__flush_tlb_range(vma, start, end, stride, false, 2);
> -	else if (stride == PUD_SIZE)
> -		__flush_tlb_range(vma, start, end, stride, false, 1);
> -	else
> -		__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0);
> +	switch (stride) {
> +#ifndef __PAGETABLE_PMD_FOLDED
> +	case PUD_SIZE:
> +		__flush_tlb_range(vma, start, end, PUD_SIZE, false, 1);
> +		break;
> +#endif
> +	case CONT_PMD_SIZE:
> +	case PMD_SIZE:
> +		__flush_tlb_range(vma, start, end, PMD_SIZE, false, 2);
> +		break;
> +	case CONT_PTE_SIZE:
> +		__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 3);
> +		break;
> +	default:
> +		__flush_tlb_range(vma, start, end, PAGE_SIZE, false, TLBI_TTL_UNKNOWN);
> +	}
>  }
>
>  #endif /* __ASM_HUGETLB_H */
On Mon, Feb 17, 2025 at 02:04:16PM +0000, Ryan Roberts wrote:
> commit c910f2b65518 ("arm64/mm: Update tlb invalidation routines for
> FEAT_LPA2") changed the "invalidation level unknown" hint from 0 to
> TLBI_TTL_UNKNOWN (INT_MAX). But the fallback "unknown level" path in
> flush_hugetlb_tlb_range() was not updated. So as it stands, when trying
> to invalidate CONT_PMD_SIZE or CONT_PTE_SIZE hugetlb mappings, we will
> spuriously try to invalidate at level 0 on LPA2-enabled systems.
>
> Fix this so that the fallback passes TLBI_TTL_UNKNOWN, and while we are
> at it, explicitly use the correct stride and level for CONT_PMD_SIZE and
> CONT_PTE_SIZE, which should provide a minor optimization.
>
> Cc: stable@vger.kernel.org
> Fixes: c910f2b65518 ("arm64/mm: Update tlb invalidation routines for FEAT_LPA2")
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
```diff
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 03db9cb21ace..07fbf5bf85a7 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -76,12 +76,22 @@ static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
 {
 	unsigned long stride = huge_page_size(hstate_vma(vma));
 
-	if (stride == PMD_SIZE)
-		__flush_tlb_range(vma, start, end, stride, false, 2);
-	else if (stride == PUD_SIZE)
-		__flush_tlb_range(vma, start, end, stride, false, 1);
-	else
-		__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0);
+	switch (stride) {
+#ifndef __PAGETABLE_PMD_FOLDED
+	case PUD_SIZE:
+		__flush_tlb_range(vma, start, end, PUD_SIZE, false, 1);
+		break;
+#endif
+	case CONT_PMD_SIZE:
+	case PMD_SIZE:
+		__flush_tlb_range(vma, start, end, PMD_SIZE, false, 2);
+		break;
+	case CONT_PTE_SIZE:
+		__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 3);
+		break;
+	default:
+		__flush_tlb_range(vma, start, end, PAGE_SIZE, false, TLBI_TTL_UNKNOWN);
+	}
 }
 
 #endif /* __ASM_HUGETLB_H */
```
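As a quick reference for the switch above, here is a minimal standalone sketch of the stride-to-level mapping it encodes. This is not kernel code: the size constants assume a 4K translation granule, and hugetlb_tlbi_level() is a hypothetical helper used only for illustration.

```c
/*
 * Illustrative sketch (not kernel code) of the stride-to-level mapping in
 * flush_hugetlb_tlb_range(). Sizes assume a 4K translation granule.
 */
#include <limits.h>
#include <stdio.h>

#define SZ_4K		(4UL << 10)	/* PAGE_SIZE                          */
#define SZ_64K		(64UL << 10)	/* CONT_PTE_SIZE: 16 contiguous PTEs  */
#define SZ_2M		(2UL << 20)	/* PMD_SIZE                           */
#define SZ_32M		(32UL << 20)	/* CONT_PMD_SIZE: 16 contiguous PMDs  */
#define SZ_1G		(1UL << 30)	/* PUD_SIZE                           */
#define TTL_UNKNOWN	INT_MAX		/* stands in for TLBI_TTL_UNKNOWN     */

/* Translation table level that maps a hugetlb leaf entry of a given stride. */
static int hugetlb_tlbi_level(unsigned long stride)
{
	switch (stride) {
	case SZ_1G:		/* PUD_SIZE: level 1 block                        */
		return 1;
	case SZ_32M:		/* CONT_PMD_SIZE: built from level 2 PMD entries  */
	case SZ_2M:		/* PMD_SIZE: level 2 block                        */
		return 2;
	case SZ_64K:		/* CONT_PTE_SIZE: built from level 3 PTE entries  */
		return 3;
	default:		/* anything else: tell the hardware "level unknown" */
		return TTL_UNKNOWN;
	}
}

int main(void)
{
	unsigned long strides[] = { SZ_64K, SZ_2M, SZ_32M, SZ_1G, SZ_4K };

	for (unsigned long i = 0; i < sizeof(strides) / sizeof(strides[0]); i++)
		printf("stride 0x%lx -> level %d\n", strides[i],
		       hugetlb_tlbi_level(strides[i]));
	return 0;
}
```

Note that the patch flushes the contiguous cases with the base entry stride (PMD_SIZE for CONT_PMD_SIZE, PAGE_SIZE for CONT_PTE_SIZE) while passing the level of those entries; this is the "correct stride and level" the commit message refers to.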
commit c910f2b65518 ("arm64/mm: Update tlb invalidation routines for
FEAT_LPA2") changed the "invalidation level unknown" hint from 0 to
TLBI_TTL_UNKNOWN (INT_MAX). But the fallback "unknown level" path in
flush_hugetlb_tlb_range() was not updated. So as it stands, when trying
to invalidate CONT_PMD_SIZE or CONT_PTE_SIZE hugetlb mappings, we will
spuriously try to invalidate at level 0 on LPA2-enabled systems.

Fix this so that the fallback passes TLBI_TTL_UNKNOWN, and while we are
at it, explicitly use the correct stride and level for CONT_PMD_SIZE and
CONT_PTE_SIZE, which should provide a minor optimization.

Cc: stable@vger.kernel.org
Fixes: c910f2b65518 ("arm64/mm: Update tlb invalidation routines for FEAT_LPA2")
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/include/asm/hugetlb.h | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)
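To make the failure mode of the old fallback concrete, here is a minimal sketch based only on what the commit message states: level 0 is a meaningful hint on LPA2-enabled systems, and INT_MAX (TLBI_TTL_UNKNOWN) is now the "level unknown" sentinel. ttl_hint_is_wrong() is a hypothetical predicate for illustration, not a kernel function.

```c
/*
 * Sketch of why the stale fallback misleads LPA2 hardware: a CONT_PTE_SIZE
 * mapping actually has level 3 leaf entries, but the old code hinted level 0.
 */
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

#define TLBI_TTL_UNKNOWN	INT_MAX

/* A hint is wrong if it names a level and that level does not match reality. */
static bool ttl_hint_is_wrong(int hinted_level, int actual_level)
{
	if (hinted_level == TLBI_TTL_UNKNOWN)
		return false;	/* "unknown" is always a safe hint */
	return hinted_level != actual_level;
}

int main(void)
{
	int actual_level = 3;	/* CONT_PTE_SIZE leaf entries live at level 3 */

	printf("old fallback (level 0): %s\n",
	       ttl_hint_is_wrong(0, actual_level) ? "wrong hint" : "ok");
	printf("new fallback (TLBI_TTL_UNKNOWN): %s\n",
	       ttl_hint_is_wrong(TLBI_TTL_UNKNOWN, actual_level) ? "wrong hint" : "ok");
	return 0;
}
```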