[2/2] riscv: Eagerly flush in flush_icache_deferred()

Message ID: 20240813-fix_fencei_optimization-v1-2-2aadc2cdde95@rivosinc.com (mailing list archive)
State: Changes Requested
Series: riscv: Fix race conditions in PR_RISCV_SET_ICACHE_FLUSH_CTX

Checks

Context Check Description
conchuod/vmtest-for-next-PR success PR summary
conchuod/patch-2-test-1 success .github/scripts/patches/tests/build_rv32_defconfig.sh
conchuod/patch-2-test-2 success .github/scripts/patches/tests/build_rv64_clang_allmodconfig.sh
conchuod/patch-2-test-3 success .github/scripts/patches/tests/build_rv64_gcc_allmodconfig.sh
conchuod/patch-2-test-4 success .github/scripts/patches/tests/build_rv64_nommu_k210_defconfig.sh
conchuod/patch-2-test-5 success .github/scripts/patches/tests/build_rv64_nommu_virt_defconfig.sh
conchuod/patch-2-test-6 success .github/scripts/patches/tests/checkpatch.sh
conchuod/patch-2-test-7 success .github/scripts/patches/tests/dtb_warn_rv64.sh
conchuod/patch-2-test-8 success .github/scripts/patches/tests/header_inline.sh
conchuod/patch-2-test-9 success .github/scripts/patches/tests/kdoc.sh
conchuod/patch-2-test-10 success .github/scripts/patches/tests/module_param.sh
conchuod/patch-2-test-11 success .github/scripts/patches/tests/verify_fixes.sh
conchuod/patch-2-test-12 success .github/scripts/patches/tests/verify_signedoff.sh

Commit Message

Charlie Jenkins Aug. 13, 2024, 11:02 p.m. UTC
It is possible for mm->context.icache_stale_mask to be set between
switch_mm() and switch_to() when there are two threads on different CPUs
executing in the same mm:

<---- Thread 1 starts running here on CPU1

<---- Thread 2 starts running here with same mm

T2: prctl(PR_RISCV_SET_ICACHE_FLUSH_CTX, PR_RISCV_CTX_SW_FENCEI_ON, PR_RISCV_SCOPE_PER_PROCESS);
T2: <-- kernel sets current->mm->context.force_icache_flush to true
T2: <modification of instructions>
T2: fence.i

T1: fence.i (to synchronize with other thread, has some logic to
             determine when to do this)
T1: <-- thread 1 is preempted
T1: <-- thread 1 is placed onto CPU3 and starts context switch sequence
T1 (kernel): flush_icache_deferred() -> skips flush because switch_to_should_flush_icache() returns true
				     -> thread has migrated and task->mm->context.force_icache_flush is true

T2: prctl(PR_RISCV_SET_ICACHE_FLUSH_CTX, PR_RISCV_CTX_SW_FENCEI_OFF, PR_RISCV_SCOPE_PER_PROCESS);
T2 (kernel): kernel sets current->mm->context.force_icache_flush = false

T1 (kernel): switch_to() calls switch_to_should_flush_icache() which now
	     returns false because task->mm->context.force_icache_flush
	     is false due to the other thread emitting
	     PR_RISCV_CTX_SW_FENCEI_OFF.
T1 (back in userspace): Instruction cache was never flushed on context
			switch to CPU3, and thus may execute incorrect
			instructions.

Fix this by also flushing in switch_to() when
task->mm->context.icache_stale_mask is set, and by always flushing in
flush_icache_deferred().

It is possible for switch_mm() to be called without being followed by
switch_to() such as with the riscv efi driver in efi_virtmap_load(), so
we cannot do all of the flushing in switch_to().

To avoid flushing multiple times in a single context switch, clear
mm->context.icache_stale_mask in switch_mm() and only flush in
switch_to() if it was set again in the meantime. Set icache_stale_mask
when handling the prctl, and in switch_to() if
task->mm->context.force_icache_flush is set, to prepare for the next
context switch.

Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
Fixes: 6b9391b581fd ("riscv: Include riscv_set_icache_flush_ctx prctl")
---
 arch/riscv/include/asm/switch_to.h | 19 ++++++++++++++++---
 arch/riscv/mm/cacheflush.c         |  1 +
 arch/riscv/mm/context.c            |  6 +-----
 3 files changed, 18 insertions(+), 8 deletions(-)

Comments

Andrea Parri Aug. 15, 2024, 12:34 a.m. UTC | #1
> <---- Thread 1 starts running here on CPU1
> 
> <---- Thread 2 starts running here with same mm
> 
> T2: prctl(PR_RISCV_SET_ICACHE_FLUSH_CTX, PR_RISCV_CTX_SW_FENCEI_ON, PR_RISCV_SCOPE_PER_PROCESS);
> T2: <-- kernel sets current->mm->context.force_icache_flush to true

Mmh, TBH, I'm not sure how this patch is supposed to fix the race in
question:

For one, AFAIU, the operation

T2: cpumask_setall(&current->mm->context.icache_stale_mask)

(on CPU2?) you've added with this patch...


> T2: <modification of instructions>
> T2: fence.i
> 
> T1: fence.i (to synchronize with other thread, has some logic to
>              determine when to do this)
> T1: <-- thread 1 is preempted
> T1: <-- thread 1 is placed onto CPU3 and starts context switch sequence
> T1 (kernel): flush_icache_deferred() -> skips flush because switch_to_should_flush_icache() returns true

... does _not_ ensure that T1: flush_icache_deferred() on CPU3 will
observe/read from that operation: IOW,

T1: cpumask_test_and_clear_cpu(cpu, &mm->context.icache_stale_mask)

may still evaluate to FALSE, thus preventing the FENCE.I execution.

Moreover, AFAIU, ...


> 				     -> thread has migrated and task->mm->context.force_icache_flush is true
> 
> T2: prctl(PR_RISCV_SET_ICACHE_FLUSH_CTX, PR_RISCV_CTX_SW_FENCEI_OFF, PR_RISCV_SCOPE_PER_PROCESS);

... moving the operation(s)

T2: set_icache_stale_mask()
	T2: cpumask_setall(&current->mm->context.icache_stale_mask)

before the following operation...  (per patch #1)


> T2 (kernel): kernel sets current->mm->context.force_icache_flush = false
> 
> T1 (kernel): switch_to() calls switch_to_should_flush_icache() which now
> 	     returns false because task->mm->context.force_icache_flush
> 	     is false due to the other thread emitting
> 	     PR_RISCV_CTX_SW_FENCEI_OFF.

... does not ensure that T1: switch_to_should_flush_icache() on CPU3
will observe

T1: cpumask_test_cpu(<cpu3>, &task->mm->context.icache_stale_mask) == true

in fact, T1 may evaluate the latter expression to FALSE while still
being able to observe the "later" operation, i.e.

T1: task->mm->context.force_icache_flush == false


Perhaps a simplified but useful way to look at such scenarios is as
follows:

  - CPUs are just like nodes of a distributed system, and

  - stores are like messages to be exchanged (by the memory subsystem)
    between CPUs: without some (explicit) synchronization/constraints,
    messages originating from a given CPU can propagate/be visible to
    other CPUs at any time and in any order.
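
A minimal illustration of this (hypothetical kernel-style code, not
taken from this series) is the classic message-passing litmus test:

	int data, flag;

	void t2(void)	/* running on CPU2 */
	{
		WRITE_ONCE(data, 1);
		WRITE_ONCE(flag, 1);
	}

	void t1(void)	/* running on CPU3 */
	{
		int r1 = READ_ONCE(flag);
		int r2 = READ_ONCE(data);
		/* r1 == 1 && r2 == 0 is a permitted outcome. */
	}

Without further synchronization, the two stores may become visible to
CPU3 in either order; pairing smp_store_release(&flag, 1) with
smp_load_acquire(&flag) would forbid the bad outcome.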


IAC, can you elaborate on the solution proposed here (maybe by adding
some inline comments), keeping the above considerations in mind? what
am I missing?


> T1 (back in userspace): Instruction cache was never flushed on context
> 			switch to CPU3, and thus may execute incorrect
> 			instructions.

Mmh, flushing the I$ (or, as meant here, executing a FENCE.I) seems
to be only half of the solution: IIUC, we'd like to ensure that the
store operation

T2: <modification of instructions>

originated from CPU2 is _visible_ to CPU3 by the time that FENCE.I
instruction is executed, cf.

[from Zifencei - emphasis mine]

A FENCE.I instruction ensures that a subsequent instruction fetch on
a RISC-V hart will see any previous data stores _already visible_ to
the same RISC-V hart.


IOW (but assuming code is in coherent main memory), imagine that the
(putative) FENCE.I on CPU3 is replaced by some

T1: LOAD reg,0(insts_addr)

question is: would such a load be guaranteed to observe the store

T2: <modification of instructions>  #  STORE new_insts,0(insts_addr)

originated from CPU2? can you elaborate?

  Andrea
Charlie Jenkins Aug. 15, 2024, 11:17 p.m. UTC | #2
On Thu, Aug 15, 2024 at 02:34:19AM +0200, Andrea Parri wrote:
> > <---- Thread 1 starts running here on CPU1
> > 
> > <---- Thread 2 starts running here with same mm
> > 
> > T2: prctl(PR_RISCV_SET_ICACHE_FLUSH_CTX, PR_RISCV_CTX_SW_FENCEI_ON, PR_RISCV_SCOPE_PER_PROCESS);
> > T2: <-- kernel sets current->mm->context.force_icache_flush to true
> 
> Mmh, TBH, I'm not sure how this patch is supposed to fix the race in
> question:
> 
> For one, AFAIU, the operation
> 
> T2: cpumask_setall(&current->mm->context.icache_stale_mask)
> 
> (on CPU2?) you've added with this patch...
> 
> 
> > T2: <modification of instructions>
> > T2: fence.i
> > 
> > T1: fence.i (to synchronize with other thread, has some logic to
> >              determine when to do this)
> > T1: <-- thread 1 is preempted
> > T1: <-- thread 1 is placed onto CPU3 and starts context switch sequence
> > T1 (kernel): flush_icache_deferred() -> skips flush because switch_to_should_flush_icache() returns true
> 
> ... does _not_ ensure that T1: flush_icache_deferred() on CPU3 will
> observe/read from that operation: IOW,
> 
> T1: cpumask_test_and_clear_cpu(cpu, &mm->context.icache_stale_mask)
> 
> may still evaluate to FALSE, thus preventing the FENCE.I execution.
> 
> Moreover, AFAIU, ...
> 
> 
> > 				     -> thread has migrated and task->mm->context.force_icache_flush is true
> > 
> > T2: prctl(PR_RISCV_SET_ICACHE_FLUSH_CTX, PR_RISCV_CTX_SW_FENCEI_OFF, PR_RISCV_SCOPE_PER_PROCESS);
> 
> ... moving the operation(s)
> 
> T2: set_icache_stale_mask()
> 	T2: cpumask_setall(&current->mm->context.icache_stale_mask)
> 
> before the following operation...  (per patch #1)
> 
> 
> > T2 (kernel): kernel sets current->mm->context.force_icache_flush = false
> > 
> > T1 (kernel): switch_to() calls switch_to_should_flush_icache() which now
> > 	     returns false because task->mm->context.force_icache_flush
> > 	     is false due to the other thread emitting
> > 	     PR_RISCV_CTX_SW_FENCEI_OFF.
> 
> ... does not ensure that T1: switch_to_should_flush_icache() on CPU3
> will observe
> 
> T1: cpumask_test_cpu(<cpu3>, &task->mm->context.icache_stale_mask) == true
> 
> in fact, T1 may evaluate the latter expression to FALSE while still
> being able to observe the "later" operation, i.e.
> 
> T1: task->mm->context.force_icache_flush == false
> 
> 
> Perhaps a simplified but useful way to look at such scenarios is as
> follows:
> 
>   - CPUs are just like nodes of a distributed system, and
> 
>   - stores are like messages to be exchanged (by the memory subsystem)
>     between CPUs: without some (explicit) synchronization/constraints,
>     messages originating from a given CPU can propagate/be visible to
>     other CPUs at any time and in any order.
> 
> 
> IAC, can you elaborate on the solution proposed here (maybe by adding
> some inline comments), keeping the above considerations in mind? what
> am I missing?

I should have added some memory barriers. I want the stores to
task->mm->context.force_icache_flush and
task->mm->context.icache_stale_mask in riscv_set_icache_flush_ctx() on
one hart to be visible to another hart observing those values in
flush_icache_deferred() and switch_to_should_flush_icache(), and
likewise for the changes to those variables in flush_icache_deferred()
and switch_to_should_flush_icache() to be visible in future invocations
of those functions.
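
Concretely, something along these lines is what I have in mind (a
rough, untested sketch; whether plain smp_mb() is the right tool here
is exactly the kind of thing I'd like input on):

	/* riscv_set_icache_flush_ctx(), PR_RISCV_SCOPE_PER_PROCESS on: */
	current->mm->context.force_icache_flush = true;
	cpumask_setall(&current->mm->context.icache_stale_mask);
	smp_mb();	/* order the stores above before later accesses */

	/* switch_to_should_flush_icache(): */
	smp_mb();	/* pairs with the smp_mb() in the prctl path */
	stale_mm = cpumask_test_cpu(smp_processor_id(),
				    &task->mm->context.icache_stale_mask);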

> 
> 
> > T1 (back in userspace): Instruction cache was never flushed on context
> > 			switch to CPU3, and thus may execute incorrect
> > 			instructions.
> 
> Mmh, flushing the I$ (or, as meant here, executing a FENCE.I) seems
> to be only half of the solution: IIUC, we'd like to ensure that the
> store operation
> 
> T2: <modification of instructions>
> 
> originated from CPU2 is _visible_ to CPU3 by the time that FENCE.I
> instruction is executed, cf.
> 
> [from Zifencei - emphasis mine]
> 
> A FENCE.I instruction ensures that a subsequent instruction fetch on
> a RISC-V hart will see any previous data stores _already visible_ to
> the same RISC-V hart.
>

Oh okay, so we will need to do a memory barrier before the fence.i in
the userspace program. I don't believe a memory barrier will be
necessary in the kernel while this prctl is active, though. Will the
kernel ensure memory coherence upon migration?
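
In the producer that would look something like this, I think
(hypothetical userspace snippet):

	/* T2, after storing the generated instructions: */
	asm volatile("fence rw, rw" ::: "memory");	/* order the stores */
	asm volatile("fence.i" ::: "memory");	/* sync T2's own fetches */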

> 
> IOW (but assuming code is in coherent main memory), imagine that the
> (putative) FENCE.I on CPU3 is replaced by some
> 
> T1: LOAD reg,0(insts_addr)
> 
> question is: would such a load be guaranteed to observe the store
> 
> T2: <modification of instructions>  #  STORE new_insts,0(insts_addr)
> 
> originated from CPU2? can you elaborate?

To give a broader overview, the use case for the per-mm mode of the prctl
is to support JIT languages that generate code sequences on one hart and
execute the generated code on another hart. In this example, thread 2 is
generating the code sequences and thread 1 is executing them.

The goal here is for Linux to guarantee that CPU migration does not
cause thread 1 to miss the instructions generated by thread 2, if it
was able to see the generated instructions on the CPU it was migrating
from. To uphold this guarantee, when thread 1 (or any thread in the mm
group) is migrated, its instruction cache is synchronized. Ideally the
cache would contain exactly the same contents as it did on the previous
CPU, but fence.i is the only option available, so the kernel must rely
on it instead.

The stipulation of "if it was able to see the generated instructions on
the CPU it was migrating from" is there to say that the thread is
expected to emit fence.i as necessary to cover the case in which
migration does not occur.

Another note is that with this prctl, fence.i is emitted by the kernel
whenever any thread in the mm group is migrated; however, the described
use case has one producer of the instructions and one consumer. An
extension to this prctl could specify which threads are consumers and
which are producers, and only flush the icache when a consumer has
migrated, but that optimization has not yet been written.

- Charlie

> 
>   Andrea
Andrea Parri Aug. 16, 2024, 9:27 a.m. UTC | #3
> I should have added some memory barriers. I want the stores to
> task->mm->context.force_icache_flush and
> task->mm->context.icache_stale_mask in riscv_set_icache_flush_ctx() on
> one hart to be visible to another hart observing those values in
> flush_icache_deferred() and switch_to_should_flush_icache(), and
> likewise for the changes to those variables in flush_icache_deferred()
> and switch_to_should_flush_icache() to be visible in future invocations
> of those functions.

[...]


> Oh okay, so we will need to do a memory barrier before the fence.i in
> the userspace program. I don't believe a memory barrier will be
> necessary in the kernel while this prctl is active, though. Will the
> kernel ensure memory coherence upon migration?

Yes, the kernel enforces coherence upon migration:(*) in the example at
hand, this means that T1 will have a coherent view of memory when
migrating from CPU1 to CPU3.

To leverage this property, we (or the user application) would need to
provide some synchronization between T2 (that modifies the code) and
T1.  This typically involves some form of barrier pairing.
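
For instance (a hypothetical userspace sketch; code_buf and code_ready
are made-up names, using C11 atomics):

	#include <stdatomic.h>

	extern char code_buf[];		/* JIT output buffer */
	atomic_int code_ready;

	/* T2 (producer), after writing the instructions to code_buf: */
	atomic_store_explicit(&code_ready, 1, memory_order_release);

	/* T1 (consumer): */
	while (!atomic_load_explicit(&code_ready, memory_order_acquire))
		;
	asm volatile("fence.i" ::: "memory");	/* fetch now sees the stores */
	((void (*)(void))code_buf)();

The release/acquire pairing makes T2's stores visible to T1's hart; the
FENCE.I then brings T1's instruction fetches in sync with them.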

Feel free to reach out, here on the list or in private chats, for any
related memory-barrier doubts I might be able to clarify.

  Andrea

(*) A malicious/buggy hypervisor could migrate (virtual)CPU X, and all
its threads, at any time and in any way allowed by Murphy's law.  :-) I
take it that goes beyond the scope of this series...

Patch

diff --git a/arch/riscv/include/asm/switch_to.h b/arch/riscv/include/asm/switch_to.h
index 7594df37cc9f..5701cfcf2b68 100644
--- a/arch/riscv/include/asm/switch_to.h
+++ b/arch/riscv/include/asm/switch_to.h
@@ -76,11 +76,24 @@  extern struct task_struct *__switch_to(struct task_struct *,
 static inline bool switch_to_should_flush_icache(struct task_struct *task)
 {
 #ifdef CONFIG_SMP
-	bool stale_mm = task->mm && task->mm->context.force_icache_flush;
-	bool stale_thread = task->thread.force_icache_flush;
+	bool stale_mm = false;
 	bool thread_migrated = smp_processor_id() != task->thread.prev_cpu;
+	bool stale_thread = thread_migrated && task->thread.force_icache_flush;
+
+	if (task->mm) {
+		/*
+		 * The mm is only stale if the respective CPU bit in
+		 * icache_stale_mask is set. force_icache_flush indicates that
+		 * icache_stale_mask should be set again before returning to userspace.
+		 */
+		stale_mm = cpumask_test_cpu(smp_processor_id(),
+					    &task->mm->context.icache_stale_mask);
+		cpumask_assign_cpu(smp_processor_id(),
+				   &task->mm->context.icache_stale_mask,
+				   task->mm->context.force_icache_flush);
+	}
 
-	return thread_migrated && (stale_mm || stale_thread);
+	return stale_mm || stale_thread;
 #else
 	return false;
 #endif
diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c
index b81672729887..c4a9ac2ad7ab 100644
--- a/arch/riscv/mm/cacheflush.c
+++ b/arch/riscv/mm/cacheflush.c
@@ -230,6 +230,7 @@  int riscv_set_icache_flush_ctx(unsigned long ctx, unsigned long scope)
 		switch (scope) {
 		case PR_RISCV_SCOPE_PER_PROCESS:
 			current->mm->context.force_icache_flush = true;
+			cpumask_setall(&current->mm->context.icache_stale_mask);
 			break;
 		case PR_RISCV_SCOPE_PER_THREAD:
 			current->thread.force_icache_flush = true;
diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
index 4abe3de23225..9e82e2a99441 100644
--- a/arch/riscv/mm/context.c
+++ b/arch/riscv/mm/context.c
@@ -306,11 +306,7 @@  static inline void flush_icache_deferred(struct mm_struct *mm, unsigned int cpu,
 		 */
 		smp_mb();
 
-		/*
-		 * If cache will be flushed in switch_to, no need to flush here.
-		 */
-		if (!(task && switch_to_should_flush_icache(task)))
-			local_flush_icache_all();
+		local_flush_icache_all();
 	}
 #endif
 }