| Message ID | 20230717131004.12662-2-yangyicong@huawei.com (mailing list archive) |
|---|---|
| State | New |
| Series | arm64: support batched/deferred tlb shootdown during page reclamation/migration |
On Mon, Jul 17, 2023 at 09:10:01PM +0800, Yicong Yang wrote:
> From: Anshuman Khandual <khandual@linux.vnet.ibm.com>
>
> The entire scheme of deferred TLB flush in the reclaim path rests on the
> fact that the cost to refill TLB entries is less than flushing out
> individual entries by sending IPIs to remote CPUs. But architectures
> can have different ways to evaluate that. Hence, apart from checking
> TTU_BATCH_FLUSH in the TTU flags, the rest of the decision should be
> architecture specific.
>
> Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
> [https://lore.kernel.org/linuxppc-dev/20171101101735.2318-2-khandual@linux.vnet.ibm.com/]
> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
> [Rebase and fix incorrect return value type]
> Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
> Reviewed-by: Barry Song <baohua@kernel.org>
> Reviewed-by: Xin Hao <xhao@linux.alibaba.com>
> Tested-by: Punit Agrawal <punit.agrawal@bytedance.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
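[Editor's note: on arm64, the target of this series, TLB invalidations with the inner-shareable qualifier (TLBI ...IS) are broadcast in hardware, so no IPIs are needed to flush remote CPUs and deferring the flush is essentially always worthwhile. A minimal sketch of what an override on such an architecture could look like is below; this is illustrative only and not the arm64 implementation added later in this series, which may additionally have to account for CPU errata.]

static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
{
	/*
	 * Hardware broadcasts the invalidation to all other CPUs, so
	 * no IPIs are required and deferring costs nothing extra.
	 */
	return true;
}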
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 80450e1d5385..cf2a1de5d388 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -253,6 +253,18 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
 	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
 }
 
+static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
+{
+	bool should_defer = false;
+
+	/* If remote CPUs need to be flushed then defer batch the flush */
+	if (cpumask_any_but(mm_cpumask(mm), get_cpu()) < nr_cpu_ids)
+		should_defer = true;
+	put_cpu();
+
+	return should_defer;
+}
+
 static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
 {
 	/*
diff --git a/mm/rmap.c b/mm/rmap.c
index 0c0d8857dfce..6480e526c154 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -688,17 +688,10 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval)
  */
 static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
 {
-	bool should_defer = false;
-
 	if (!(flags & TTU_BATCH_FLUSH))
 		return false;
 
-	/* If remote CPUs need to be flushed then defer batch the flush */
-	if (cpumask_any_but(mm_cpumask(mm), get_cpu()) < nr_cpu_ids)
-		should_defer = true;
-	put_cpu();
-
-	return should_defer;
+	return arch_tlbbatch_should_defer(mm);
 }
 
 /*
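[Editor's note: for context, this is roughly how the reclaim path consumes the helper, simplified from try_to_unmap_one() in mm/rmap.c; it is not part of this patch.]

	if (should_defer_flush(mm, flags)) {
		/*
		 * Clear the PTE without flushing the TLB: the stale
		 * entry is recorded via set_tlb_ubc_flush_pending()
		 * and flushed in a batch later by try_to_unmap_flush().
		 */
		pteval = ptep_get_and_clear(mm, address, pvmw.pte);
		set_tlb_ubc_flush_pending(mm, pteval);
	} else {
		pteval = ptep_clear_flush(vma, address, pvmw.pte);
	}

[Editor's note on the x86 helper above: get_cpu()/put_cpu() disable preemption around the check, so the current CPU ID stays stable while cpumask_any_but() scans mm_cpumask(mm); a return value below nr_cpu_ids means some CPU other than the current one has used this mm and would otherwise need an IPI, which is exactly when deferring pays off.]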