Message ID | 20221212095523.52683-5-julien@xen.org (mailing list archive) |
---|---|
State | Superseded |
Series | xen/arm: Don't switch TTBR while the MMU is on |
Hi Julien,

On 12/12/2022 10:55, Julien Grall wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> At the moment, flush_xen_tlb_range_va{,_local}() are using system
> wide memory barrier. This is quite expensive and unnecessary.
>
> For the local version, a non-shareable barrier is sufficient.
> For the SMP version, a inner-shareable barrier is sufficient.
s/a/an/

> Furthermore, the initial barrier only need to a store barrier.
s/need/needs/

> For the full explanation of the sequence see asm/arm{32,64}/flushtlb.h.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal
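For context, the barrier-scope distinction discussed above follows from the scope of the TLBI instruction itself. The sketch below shows roughly what the per-VA flush helpers expand to on arm64 (an approximation of Xen's TLB helper macros; the authoritative definitions live in asm/arm64/flushtlb.h):

```c
/*
 * Sketch, not part of the patch: approximate arm64 expansions of the
 * per-VA flush helpers. "tlbi vae2" invalidates this CPU's TLB entry
 * for an EL2 VA; "tlbi vae2is" broadcasts the invalidation to the
 * inner-shareable domain. This is why dsb(nsh*) suffices for the
 * local variant while the SMP variant needs dsb(ish*).
 */
static inline void __flush_xen_tlb_one_local(vaddr_t va)
{
    asm volatile ( "tlbi vae2, %0;" : : "r" (va >> PAGE_SHIFT) : "memory" );
}

static inline void __flush_xen_tlb_one(vaddr_t va)
{
    asm volatile ( "tlbi vae2is, %0;" : : "r" (va >> PAGE_SHIFT) : "memory" );
}
```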
```diff
diff --git a/xen/arch/arm/include/asm/flushtlb.h b/xen/arch/arm/include/asm/flushtlb.h
index 125a141975e0..e45fb6d97b02 100644
--- a/xen/arch/arm/include/asm/flushtlb.h
+++ b/xen/arch/arm/include/asm/flushtlb.h
@@ -37,13 +37,14 @@ static inline void flush_xen_tlb_range_va_local(vaddr_t va,
                                                 unsigned long size)
 {
     vaddr_t end = va + size;
 
-    dsb(sy); /* Ensure preceding are visible */
+    /* See asm/arm{32,64}/flushtlb.h for the explanation of the sequence. */
+    dsb(nshst); /* Ensure prior page-tables updates have completed */
     while ( va < end )
     {
         __flush_xen_tlb_one_local(va);
         va += PAGE_SIZE;
     }
-    dsb(sy); /* Ensure completion of the TLB flush */
+    dsb(nsh); /* Ensure the TLB invalidation has completed */
     isb();
 }
@@ -56,13 +57,14 @@ static inline void flush_xen_tlb_range_va(vaddr_t va,
                                           unsigned long size)
 {
     vaddr_t end = va + size;
 
-    dsb(sy); /* Ensure preceding are visible */
+    /* See asm/arm{32,64}/flushtlb.h for the explanation of the sequence. */
+    dsb(ishst); /* Ensure prior page-tables updates have completed */
     while ( va < end )
     {
         __flush_xen_tlb_one(va);
         va += PAGE_SIZE;
     }
-    dsb(sy); /* Ensure completion of the TLB flush */
+    dsb(ish); /* Ensure the TLB invalidation has completed */
     isb();
 }
```
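To see why a store-only barrier suffices up front, consider a hypothetical caller (update_xen_mapping below is illustrative, not a real Xen function; write_pte is Xen's existing helper for storing a page-table entry). The dsb(ishst) only has to order the page-table store before the TLBI; the trailing dsb(ish) then waits for the invalidation to complete across the inner-shareable domain before isb() resynchronises the local instruction stream.

```c
/* Hypothetical caller, for illustration only. */
static void update_xen_mapping(lpae_t *entry, lpae_t pte, vaddr_t va)
{
    /* 1. Plain store updating the page-table entry. */
    write_pte(entry, pte);

    /*
     * 2. With this patch, flush_xen_tlb_range_va() then performs:
     *      dsb(ishst);  -- order the PTE store before the TLBI
     *      tlbi ...is;  -- broadcast-invalidate the stale entry
     *      dsb(ish);    -- wait for the invalidation to complete
     *      isb();       -- resynchronise the local pipeline
     */
    flush_xen_tlb_range_va(va, PAGE_SIZE);
}
```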