x86/mm/tlb: Skip tracing when flush is not done

Message ID 20220710233355.4066-1-namit@vmware.com (mailing list archive)
State New

Commit Message

Nadav Amit July 10, 2022, 11:33 p.m. UTC
From: Nadav Amit <namit@vmware.com>

Currently, if flush_tlb_func() does not flush for some reason, the
flush is traced only in certain cases, depending on the reason for
the flush. Be consistent and do not trace whenever the flush was
eventually not done.

Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/mm/tlb.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

Comments

Andy Lutomirski July 11, 2022, 10:50 p.m. UTC | #1
On 7/10/22 16:33, Nadav Amit wrote:
> From: Nadav Amit <namit@vmware.com>
> 
> Currently, if flush_tlb_func() does not flush for some reason, the
> flush is traced only in certain cases, depending on the reason for
> the flush. Be consistent and do not trace whenever the flush was
> eventually not done.
> 
> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
> Cc: Andy Lutomirski <luto@kernel.org>
> Signed-off-by: Nadav Amit <namit@vmware.com>


Can you remove this comment, too?

>   	/* Tracing is done in a unified manner to reduce the code size */

> -done:
>   	trace_tlb_flush(!local ? TLB_REMOTE_SHOOTDOWN :
>   				(f->mm == NULL) ? TLB_LOCAL_SHOOTDOWN :
>   						  TLB_LOCAL_MM_SHOOTDOWN,
Nadav Amit July 11, 2022, 11:42 p.m. UTC | #2
On Jul 11, 2022, at 3:50 PM, Andy Lutomirski <luto@kernel.org> wrote:

> On 7/10/22 16:33, Nadav Amit wrote:
>> From: Nadav Amit <namit@vmware.com>
>> Currently, if flush_tlb_func() does not flush for some reason, the
>> flush is traced only in certain cases, depending on the reason for
>> the flush. Be consistent and do not trace whenever the flush was
>> eventually not done.
>> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
>> Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
>> Cc: Andy Lutomirski <luto@kernel.org>
>> Signed-off-by: Nadav Amit <namit@vmware.com>
> 
> 
> Can you remove this comment, too?
> 
>>  	/* Tracing is done in a unified manner to reduce the code size */

Yes. I will send v2.

… And if you have time, can you please review the other TLB patch that I
sent? It is a really urgent one.

Thanks,
Nadav

Patch

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 0f346c51dd99..5c17b86b928d 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -736,7 +736,7 @@  static void flush_tlb_func(void *info)
 	u32 loaded_mm_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
 	u64 local_tlb_gen = this_cpu_read(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen);
 	bool local = smp_processor_id() == f->initiating_cpu;
-	unsigned long nr_invalidate = 0;
+	unsigned long nr_invalidate;
 	u64 mm_tlb_gen;
 
 	/* This code cannot presently handle being reentered. */
@@ -795,7 +795,7 @@  static void flush_tlb_func(void *info)
 		 * be handled can catch us all the way up, leaving no work for
 		 * the second flush.
 		 */
-		goto done;
+		return;
 	}
 
 	WARN_ON_ONCE(local_tlb_gen > mm_tlb_gen);
@@ -871,7 +871,6 @@  static void flush_tlb_func(void *info)
 	this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);
 
 	/* Tracing is done in a unified manner to reduce the code size */
-done:
 	trace_tlb_flush(!local ? TLB_REMOTE_SHOOTDOWN :
 				(f->mm == NULL) ? TLB_LOCAL_SHOOTDOWN :
 						  TLB_LOCAL_MM_SHOOTDOWN,
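
The behavioral change in the hunks above is small but easy to misread: with the old `goto done`, the early-exit path (local TLB generation already caught up) still fell through to trace_tlb_flush(), so a flush was traced even though none was performed; with `return`, the tracepoint fires only when a flush actually happens. A minimal, self-contained sketch of the two control-flow patterns (the function names and the trace_count counter are illustrative stand-ins, not kernel APIs):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the tracepoint: counts how often tracing fires. */
static int trace_count;

/* Old pattern: the no-work path jumps to the shared exit label,
 * so the tracepoint still fires for a flush that never happened. */
static bool flush_old(bool already_up_to_date)
{
	bool flushed = false;

	if (already_up_to_date)
		goto done;	/* skip the flush, but still trace below */

	flushed = true;		/* stand-in for the actual TLB flush */
done:
	trace_count++;		/* stand-in for trace_tlb_flush() */
	return flushed;
}

/* New pattern: return early, so tracing happens only on a real flush. */
static bool flush_new(bool already_up_to_date)
{
	if (already_up_to_date)
		return false;	/* skip both the flush and the tracing */

	trace_count++;		/* trace_tlb_flush() for a real flush */
	return true;
}
```

In the real code the early exit is taken when local_tlb_gen has already caught up with mm_tlb_gen, as the comment in the second hunk explains; the unified tracepoint call at the end of flush_tlb_func() then only runs on paths where a flush was done.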