Message ID | 20240518074914.52170-3-libang.li@antgroup.com |
---|---|
State | New |
Series | Add update_mmu_tlb_range() to simplify code |
On 18.05.24 09:49, Bang Li wrote:
> Remove update_mmu_tlb() from those architectures and define
> generically via update_mmu_tlb_range(), removing the ability
> for arches to override it.

I'd suggest something like

"mm: implement update_mmu_tlb() using update_mmu_tlb_range()

Let's make update_mmu_tlb() simply a generic wrapper around
update_mmu_tlb_range(). Only the latter can now be overridden by the
architecture. We can now remove __HAVE_ARCH_UPDATE_MMU_TLB as well.
"

[...]

> +#ifndef update_mmu_tlb_range
> +static inline void update_mmu_tlb_range(struct vm_area_struct *vma,
> +		unsigned long address, pte_t *ptep, unsigned int nr)
> +{
> +}
> +#endif

With that in patch #1

Acked-by: David Hildenbrand <david@redhat.com>
Hi David,

Thanks for your review!

On 2024/5/21 17:36, David Hildenbrand wrote:
> On 18.05.24 09:49, Bang Li wrote:
>> Remove update_mmu_tlb() from those architectures and define
>> generically via update_mmu_tlb_range(), removing the ability
>> for arches to override it.
>
> I'd suggest something like
>
> "mm: implement update_mmu_tlb() using update_mmu_tlb_range()
>
> Let's make update_mmu_tlb() simply a generic wrapper around
> update_mmu_tlb_range(). Only the latter can now be overridden by the
> architecture. We can now remove __HAVE_ARCH_UPDATE_MMU_TLB as well.
> "

Agree! Thank you for your suggestion, I will modify it in the next version.

>
> [...]
>
>> +#ifndef update_mmu_tlb_range
>> +static inline void update_mmu_tlb_range(struct vm_area_struct *vma,
>> +		unsigned long address, pte_t *ptep, unsigned int nr)
>> +{
>> +}
>> +#endif
>
> With that in patch #1

Thanks again.

Bang

>
> Acked-by: David Hildenbrand <david@redhat.com>
>
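[Editor's note] The empty fallback David quotes above relies on the kernel's "#define name name" convention: an architecture that supplies its own update_mmu_tlb_range() also defines a macro of the same name, so the #ifndef in the generic header skips the no-op stub. Below is a minimal, self-contained sketch of that idiom in plain userspace C; the identifiers (HAVE_ARCH_OVERRIDE, arch_flush_range) are invented for the demo and are not kernel symbols.

#include <stdio.h>

/*
 * Toggle to simulate an architecture that supplies its own implementation (1)
 * versus one that relies on the generic no-op fallback (0).
 */
#define HAVE_ARCH_OVERRIDE 1

/* --- "arch" header: define the function and advertise it --- */
#if HAVE_ARCH_OVERRIDE
static inline void arch_flush_range(unsigned long address, unsigned int nr)
{
	printf("arch override: flush %u entries at %#lx\n", nr, address);
}
#define arch_flush_range arch_flush_range	/* marker checked by the #ifndef below */
#endif

/* --- "generic" header: empty fallback only when no override was advertised --- */
#ifndef arch_flush_range
static inline void arch_flush_range(unsigned long address, unsigned int nr)
{
	/* nothing to do for architectures without an override */
	(void)address;
	(void)nr;
}
#endif

int main(void)
{
	/* Resolves to the override when HAVE_ARCH_OVERRIDE is 1. */
	arch_flush_range(0x1000, 4);
	return 0;
}

Flipping HAVE_ARCH_OVERRIDE to 0 makes the call resolve to the silent fallback, which is exactly the behaviour the generic update_mmu_tlb_range() stub provides for architectures that do not opt in.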
diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index 5ccc2a3a6f7a..161dd6e10479 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -467,8 +467,6 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
 #define update_mmu_cache(vma, addr, ptep) \
 	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
 
-#define __HAVE_ARCH_UPDATE_MMU_TLB
-#define update_mmu_tlb update_mmu_cache
 #define update_mmu_tlb_range(vma, addr, ptep, nr) \
 	update_mmu_cache_range(NULL, vma, addr, ptep, nr)
 
diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index 0891ad7d43b6..c29a551eb0ca 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -594,8 +594,6 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
 #define update_mmu_cache(vma, address, ptep) \
 	update_mmu_cache_range(NULL, vma, address, ptep, 1)
 
-#define __HAVE_ARCH_UPDATE_MMU_TLB
-#define update_mmu_tlb update_mmu_cache
 #define update_mmu_tlb_range(vma, address, ptep, nr) \
 	update_mmu_cache_range(NULL, vma, address, ptep, nr)
 
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index fc07b829ac4a..158170442d2f 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -486,8 +486,6 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
 #define update_mmu_cache(vma, addr, ptep) \
 	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
 
-#define __HAVE_ARCH_UPDATE_MMU_TLB
-#define update_mmu_tlb update_mmu_cache
 #define update_mmu_tlb_range(vma, addr, ptep, nr) \
 	update_mmu_cache_range(NULL, vma, addr, ptep, nr)
 
diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
index 436158bd9030..1647a7cc3fbf 100644
--- a/arch/xtensa/include/asm/pgtable.h
+++ b/arch/xtensa/include/asm/pgtable.h
@@ -410,9 +410,6 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
 
 typedef pte_t *pte_addr_t;
 
-void update_mmu_tlb(struct vm_area_struct *vma,
-		unsigned long address, pte_t *ptep);
-#define __HAVE_ARCH_UPDATE_MMU_TLB
 void update_mmu_tlb_range(struct vm_area_struct *vma,
 		unsigned long address, pte_t *ptep, unsigned int nr);
 #define update_mmu_tlb_range update_mmu_tlb_range
diff --git a/arch/xtensa/mm/tlb.c b/arch/xtensa/mm/tlb.c
index 05efba86b870..0a1a815dc796 100644
--- a/arch/xtensa/mm/tlb.c
+++ b/arch/xtensa/mm/tlb.c
@@ -163,12 +163,6 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
 	}
 }
 
-void update_mmu_tlb(struct vm_area_struct *vma,
-		unsigned long address, pte_t *ptep)
-{
-	local_flush_tlb_page(vma, address);
-}
-
 void update_mmu_tlb_range(struct vm_area_struct *vma,
 		unsigned long address, pte_t *ptep, unsigned int nr)
 {
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 18019f037bae..117b807e3f89 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -729,13 +729,18 @@ static inline void clear_full_ptes(struct mm_struct *mm, unsigned long addr,
  * fault. This function updates TLB only, do nothing with cache or others.
  * It is the difference with function update_mmu_cache.
  */
-#ifndef __HAVE_ARCH_UPDATE_MMU_TLB
+#ifndef update_mmu_tlb_range
+static inline void update_mmu_tlb_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
+{
+}
+#endif
+
 static inline void update_mmu_tlb(struct vm_area_struct *vma,
 				unsigned long address, pte_t *ptep)
 {
+	update_mmu_tlb_range(vma, address, ptep, 1);
 }
-#define __HAVE_ARCH_UPDATE_MMU_TLB
-#endif
 
 /*
  * Some architectures may be able to avoid expensive synchronization
Remove update_mmu_tlb() from those architectures and define generically
via update_mmu_tlb_range(), removing the ability for arches to override
it.

Signed-off-by: Bang Li <libang.li@antgroup.com>
---
 arch/loongarch/include/asm/pgtable.h |  2 --
 arch/mips/include/asm/pgtable.h      |  2 --
 arch/riscv/include/asm/pgtable.h     |  2 --
 arch/xtensa/include/asm/pgtable.h    |  3 ---
 arch/xtensa/mm/tlb.c                 |  6 ------
 include/linux/pgtable.h              | 11 ++++++++---
 6 files changed, 8 insertions(+), 18 deletions(-)
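[Editor's note] To see the resulting call flow end to end, here is a small userspace mock, not kernel code: struct vm_area_struct and pte_t are reduced to stand-ins, the "architecture" supplies only update_mmu_tlb_range() (as xtensa does after this patch), and the generic single-PTE helper forwards to it with nr == 1, mirroring the new wrapper in include/linux/pgtable.h.

#include <stdio.h>

struct vm_area_struct { int dummy; };          /* stand-in, not the real structure */
typedef struct { unsigned long val; } pte_t;   /* stand-in, not the real type */

/* "Arch" side: only the range primitive is implemented (cf. the xtensa hunk). */
static void update_mmu_tlb_range(struct vm_area_struct *vma,
		unsigned long address, pte_t *ptep, unsigned int nr)
{
	(void)vma;
	(void)ptep;
	printf("flush %u TLB entr%s starting at %#lx\n",
	       nr, nr == 1 ? "y" : "ies", address);
}
#define update_mmu_tlb_range update_mmu_tlb_range

/* Generic side: the single-PTE helper is now just a thin wrapper. */
static inline void update_mmu_tlb(struct vm_area_struct *vma,
		unsigned long address, pte_t *ptep)
{
	update_mmu_tlb_range(vma, address, ptep, 1);
}

int main(void)
{
	struct vm_area_struct vma = { 0 };
	pte_t ptes[4] = { { 0 } };

	update_mmu_tlb(&vma, 0x1000, &ptes[0]);      /* single entry -> nr == 1 */
	update_mmu_tlb_range(&vma, 0x2000, ptes, 4); /* batched range */
	return 0;
}

Batched callers can invoke update_mmu_tlb_range() directly with nr > 1, while existing update_mmu_tlb() call sites keep working unchanged.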