Message ID: 20240110145533.60234-5-parri.andrea@gmail.com
State:      Superseded
Series:     membarrier: riscv: Core serializing command
On Wed, Jan 10, 2024, at 9:55 AM, Andrea Parri wrote:
> RISC-V uses xRET instructions on return from interrupt and to go back
> to user-space; the xRET instruction is not core serializing.
>
> Use FENCE.I for providing core serialization as follows:
>
>  - by calling sync_core_before_usermode() on return from interrupt (cf.
>    ipi_sync_core()),
>
>  - via switch_mm() and sync_core_before_usermode() (respectively, for
>    uthread->uthread and kthread->uthread transitions) to go back to
>    user-space.
>
> On RISC-V, the serialization in switch_mm() is activated by resetting
> the icache_stale_mask of the mm at prepare_sync_core_cmd().
>
> Suggested-by: Palmer Dabbelt <palmer@dabbelt.com>
> Signed-off-by: Andrea Parri <parri.andrea@gmail.com>

[...]

> +# * riscv
> +#
> +# riscv uses xRET as return from interrupt and to return to user-space.
> +#
> +# Given that xRET is not core serializing, we rely on FENCE.I for providing
> +# core serialization:

"core serialization" is a meaningless sequence of words for RISC-V users,
and an extremely strange way to describe running fence.i on all remote
cores.  fence.i is a _fence_; it is not required to affect a core pipeline
beyond what is needed to ensure that all instruction fetches after the
barrier completes see writes performed before the barrier.

The feature seems useful, but it should document what it does using
terminology actually used in the RISC-V specifications.

[...]

> +static inline void sync_core_before_usermode(void)
> +{
> +	asm volatile ("fence.i" ::: "memory");
> +}

Not standard terminology.

[...]

-s
Hi Stefan,

> "core serialization" is a meaningless sequence of words for RISC-V users,

The expression is inherited from MEMBARRIER(2).  Quoting from the RFC
discussion (cf. [3] in the cover letter):

  "RISC-V does not have "core serializing instructions", meaning that
   there is no occurrence of such a term in the RISC-V ISA.  The
   discussion and git history about the SYNC_CORE command suggested
   the implementation below: a FENCE.I instruction [...]"

> The feature seems useful, but it should document what it does using
> terminology actually used in the RISC-V specifications.

In current RISC-V parlance, it's pretty clear: we are doing FENCE.I.
As Palmer and others mentioned in the RFC, there are proposals for ISA
extensions aiming to "replace" FENCE.I, but those are still WIP. (*)

  Andrea

(*) https://github.com/riscv/riscv-j-extension
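As background for the terminology discussion, below is a minimal user-space
sketch of how a RISC-V JIT might combine a local FENCE.I with the SYNC_CORE
membarrier command after rewriting instructions.  It is not part of this
series; the helper and the jit_register()/jit_publish_code() names are made
up for illustration, and error handling is omitted.

#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Thin wrapper: glibc does not provide a membarrier() wrapper. */
static int membarrier(int cmd, unsigned int flags, int cpu_id)
{
	return syscall(__NR_membarrier, cmd, flags, cpu_id);
}

void jit_register(void)
{
	/* Once per process, before the first expedited SYNC_CORE use. */
	membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE, 0, 0);
}

void jit_publish_code(void)
{
	/* Make the freshly written instructions fetchable on this hart... */
	__asm__ __volatile__ ("fence.i" ::: "memory");

	/*
	 * ...and ask the kernel to guarantee that every other thread of this
	 * process executes the equivalent of FENCE.I before it next runs
	 * user code (via IPI, or via switch_mm() as wired up by this patch).
	 */
	membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE, 0, 0);
}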
On 2024-01-10 09:55, Andrea Parri wrote:

[...]

> diff --git a/Documentation/features/sched/membarrier-sync-core/arch-support.txt b/Documentation/features/sched/membarrier-sync-core/arch-support.txt
> index d96b778b87ed8..a163170fc0f48 100644
> --- a/Documentation/features/sched/membarrier-sync-core/arch-support.txt
> +++ b/Documentation/features/sched/membarrier-sync-core/arch-support.txt
> @@ -10,6 +10,22 @@
>  # Rely on implicit context synchronization as a result of exception return
>  # when returning from IPI handler, and when returning to user-space.
>  #
> +# * riscv
> +#
> +# riscv uses xRET as return from interrupt and to return to user-space.
> +#
> +# Given that xRET is not core serializing, we rely on FENCE.I for providing
> +# core serialization:
> +#
> +#  - by calling sync_core_before_usermode() on return from interrupt (cf.
> +#    ipi_sync_core()),
> +#
> +#  - via switch_mm() and sync_core_before_usermode() (respectively, for
> +#    uthread->uthread and kthread->uthread transitions) to go back to
> +#    user-space.

I don't quite get the meaning of the sentence above.  There seems to be
a missing marker before "to go back".

Thanks,

Mathieu
> > +# riscv uses xRET as return from interrupt and to return to user-space.
> > +#
> > +# Given that xRET is not core serializing, we rely on FENCE.I for providing
> > +# core serialization:
> > +#
> > +#  - by calling sync_core_before_usermode() on return from interrupt (cf.
> > +#    ipi_sync_core()),
> > +#
> > +#  - via switch_mm() and sync_core_before_usermode() (respectively, for
> > +#    uthread->uthread and kthread->uthread transitions) to go back to
> > +#    user-space.
>
> I don't quite get the meaning of the sentence above.  There seems to be a
> missing marker before "to go back".

Let's see.  Without the round brackets, the last part becomes:

  - via switch_mm() and sync_core_before_usermode() to go back to
    user-space.

This is indeed what I meant to say.  What am I missing?

  Andrea
On 2024-01-24 13:44, Andrea Parri wrote:
>>> +# riscv uses xRET as return from interrupt and to return to user-space.
>>> +#
>>> +# Given that xRET is not core serializing, we rely on FENCE.I for providing
>>> +# core serialization:
>>> +#
>>> +#  - by calling sync_core_before_usermode() on return from interrupt (cf.
>>> +#    ipi_sync_core()),
>>> +#
>>> +#  - via switch_mm() and sync_core_before_usermode() (respectively, for
>>> +#    uthread->uthread and kthread->uthread transitions) to go back to
>>> +#    user-space.
>>
>> I don't quite get the meaning of the sentence above.  There seems to be a
>> missing marker before "to go back".
>
> Let's see.  Without the round brackets, the last part becomes:
>
>   - via switch_mm() and sync_core_before_usermode() to go back to
>     user-space.
>
> This is indeed what I meant to say.  What am I missing?

Would it still fit your intent if we say "before returning to user-space"
rather than "to go back to user-space"?  Because switch_mm(), for instance,
does not happen exactly on return to user-space, but rather when the
scheduler switches tasks.  Therefore, I think that stating that core
serialization needs to happen before returning to user-space is clearer
than stating that it happens "when" we go back to user-space.

Also, on another topic, did you find a way forward with respect to the
different choice of words between the membarrier man page and
documentation vs the RISC-V official semantics of "core serializing"
vs FENCE.I?

Thanks,

Mathieu

>
>   Andrea
On Wed, Jan 24, 2024 at 01:56:39PM -0500, Mathieu Desnoyers wrote:
> On 2024-01-24 13:44, Andrea Parri wrote:

[...]

> Would it still fit your intent if we say "before returning to
> user-space" rather than "to go back to user-space"?

Yes, works for me.  Will change in v4.

> Because switch_mm(), for instance, does not happen exactly on return
> to user-space, but rather when the scheduler switches tasks.
> Therefore, I think that stating that core serialization needs to
> happen before returning to user-space is clearer than stating that it
> happens "when" we go back to user-space.
>
> Also, on another topic, did you find a way forward with respect to the
> different choice of words between the membarrier man page and
> documentation vs the RISC-V official semantics of "core serializing"
> vs FENCE.I?

The way forward I envision involves the continuous (iterative)
discussion/review of the respective documentation and use cases/litmus
tests/models/etc.  AFAICS, that is not that different from discussions
about smp_mb() (as in memory-barriers.txt) vs. FENCE RW,RW (RISC-V ISA
manual) - only time will tell.

  Andrea
diff --git a/Documentation/features/sched/membarrier-sync-core/arch-support.txt b/Documentation/features/sched/membarrier-sync-core/arch-support.txt
index d96b778b87ed8..a163170fc0f48 100644
--- a/Documentation/features/sched/membarrier-sync-core/arch-support.txt
+++ b/Documentation/features/sched/membarrier-sync-core/arch-support.txt
@@ -10,6 +10,22 @@
 # Rely on implicit context synchronization as a result of exception return
 # when returning from IPI handler, and when returning to user-space.
 #
+# * riscv
+#
+# riscv uses xRET as return from interrupt and to return to user-space.
+#
+# Given that xRET is not core serializing, we rely on FENCE.I for providing
+# core serialization:
+#
+#  - by calling sync_core_before_usermode() on return from interrupt (cf.
+#    ipi_sync_core()),
+#
+#  - via switch_mm() and sync_core_before_usermode() (respectively, for
+#    uthread->uthread and kthread->uthread transitions) to go back to
+#    user-space.
+#
+# The serialization in switch_mm() is activated by prepare_sync_core_cmd().
+#
 # * x86
 #
 # x86-32 uses IRET as return from interrupt, which takes care of the IPI.
@@ -43,7 +59,7 @@
 | openrisc: | TODO |
 |   parisc: | TODO |
 |  powerpc: |  ok  |
-|    riscv: | TODO |
+|    riscv: |  ok  |
 |     s390: |  ok  |
 |       sh: | TODO |
 |    sparc: | TODO |
diff --git a/MAINTAINERS b/MAINTAINERS
index 6bce0aeecb4f2..e4ca6288ea3d1 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13817,6 +13817,7 @@ L:	linux-kernel@vger.kernel.org
 S:	Supported
 F:	Documentation/scheduler/membarrier.rst
 F:	arch/*/include/asm/membarrier.h
+F:	arch/*/include/asm/sync_core.h
 F:	include/uapi/linux/membarrier.h
 F:	kernel/sched/membarrier.c

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 33d9ea5fa392f..2ad63a216d69a 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -28,14 +28,17 @@ config RISCV
 	select ARCH_HAS_GIGANTIC_PAGE
 	select ARCH_HAS_KCOV
 	select ARCH_HAS_MEMBARRIER_CALLBACKS
+	select ARCH_HAS_MEMBARRIER_SYNC_CORE
 	select ARCH_HAS_MMIOWB
 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
 	select ARCH_HAS_PMEM_API
+	select ARCH_HAS_PREPARE_SYNC_CORE_CMD
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_SET_DIRECT_MAP if MMU
 	select ARCH_HAS_SET_MEMORY if MMU
 	select ARCH_HAS_STRICT_KERNEL_RWX if MMU && !XIP_KERNEL
 	select ARCH_HAS_STRICT_MODULE_RWX if MMU && !XIP_KERNEL
+	select ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
 	select ARCH_HAS_SYSCALL_WRAPPER
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
diff --git a/arch/riscv/include/asm/membarrier.h b/arch/riscv/include/asm/membarrier.h
index 6c016ebb5020a..47b240d0d596a 100644
--- a/arch/riscv/include/asm/membarrier.h
+++ b/arch/riscv/include/asm/membarrier.h
@@ -22,6 +22,25 @@ static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
 	/*
 	 * The membarrier system call requires a full memory barrier
 	 * after storing to rq->curr, before going back to user-space.
+	 *
+	 * This barrier is also needed for the SYNC_CORE command when
+	 * switching between processes; in particular, on a transition
+	 * from a thread belonging to another mm to a thread belonging
+	 * to the mm for which a membarrier SYNC_CORE is done on CPU0:
+	 *
+	 *   - [CPU0] sets all bits in the mm icache_stale_mask (in
+	 *     prepare_sync_core_cmd());
+	 *
+	 *   - [CPU1] stores to rq->curr (by the scheduler);
+	 *
+	 *   - [CPU0] loads rq->curr within membarrier and observes
+	 *     cpu_rq(1)->curr->mm != mm, so the IPI is skipped on
+	 *     CPU1; this means membarrier relies on switch_mm() to
+	 *     issue the sync-core;
+	 *
+	 *   - [CPU1] switch_mm() loads icache_stale_mask; if the bit
+	 *     is zero, switch_mm() may incorrectly skip the sync-core.
+	 *
 	 * Matches a full barrier in the proximity of the membarrier
 	 * system call entry.
 	 */
diff --git a/arch/riscv/include/asm/sync_core.h b/arch/riscv/include/asm/sync_core.h
new file mode 100644
index 0000000000000..9153016da8f14
--- /dev/null
+++ b/arch/riscv/include/asm/sync_core.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_RISCV_SYNC_CORE_H
+#define _ASM_RISCV_SYNC_CORE_H
+
+/*
+ * RISC-V implements return to user-space through an xRET instruction,
+ * which is not core serializing.
+ */
+static inline void sync_core_before_usermode(void)
+{
+	asm volatile ("fence.i" ::: "memory");
+}
+
+#ifdef CONFIG_SMP
+/*
+ * Ensure the next switch_mm() on every CPU issues a core serializing
+ * instruction for the given @mm.
+ */
+static inline void prepare_sync_core_cmd(struct mm_struct *mm)
+{
+	cpumask_setall(&mm->context.icache_stale_mask);
+}
+#else
+static inline void prepare_sync_core_cmd(struct mm_struct *mm)
+{
+}
+#endif /* CONFIG_SMP */
+
+#endif /* _ASM_RISCV_SYNC_CORE_H */
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b51bc86f8340c..82de2b7d253cd 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6682,6 +6682,10 @@ static void __sched notrace __schedule(unsigned int sched_mode)
 		 *
 		 * The barrier matches a full barrier in the proximity of
 		 * the membarrier system call entry.
+		 *
+		 * On RISC-V, this barrier pairing is also needed for the
+		 * SYNC_CORE command when switching between processes, cf.
+		 * the inline comments in membarrier_arch_switch_mm().
 		 */
 		++*switch_count;

diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
index 6d1f31b3a967b..703e8d80a576d 100644
--- a/kernel/sched/membarrier.c
+++ b/kernel/sched/membarrier.c
@@ -342,6 +342,10 @@ static int membarrier_private_expedited(int flags, int cpu_id)
 	/*
 	 * Matches memory barriers after rq->curr modification in
 	 * scheduler.
+	 *
+	 * On RISC-V, this barrier pairing is also needed for the
+	 * SYNC_CORE command when switching between processes, cf.
+	 * the inline comments in membarrier_arch_switch_mm().
 	 */
 	smp_mb();	/* system call entry is not a mb. */

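The cpumask_setall() in prepare_sync_core_cmd() above is consumed on the
next context switch by RISC-V's pre-existing deferred icache-flush logic.
Below is a simplified sketch of that consumer side, loosely based on
flush_icache_deferred() in arch/riscv/mm/context.c; it is not part of this
diff and the exact code varies across kernel versions.

#include <linux/cpumask.h>
#include <linux/mm_types.h>
#include <asm/barrier.h>	/* smp_mb() */
#include <asm/cacheflush.h>	/* local_flush_icache_all(): FENCE.I */

/*
 * Called from switch_mm(): if this CPU is marked stale for the incoming
 * mm, clear the bit and run FENCE.I locally before user code of that mm
 * can execute on this hart.
 */
static inline void flush_icache_deferred(struct mm_struct *mm, unsigned int cpu)
{
#ifdef CONFIG_SMP
	cpumask_t *mask = &mm->context.icache_stale_mask;

	if (cpumask_test_cpu(cpu, mask)) {
		cpumask_clear_cpu(cpu, mask);
		/* Ensure remote harts' writes are visible before fetching. */
		smp_mb();
		local_flush_icache_all();
	}
#endif
}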
RISC-V uses xRET instructions on return from interrupt and to go back
to user-space; the xRET instruction is not core serializing.

Use FENCE.I for providing core serialization as follows:

 - by calling sync_core_before_usermode() on return from interrupt (cf.
   ipi_sync_core()),

 - via switch_mm() and sync_core_before_usermode() (respectively, for
   uthread->uthread and kthread->uthread transitions) to go back to
   user-space.

On RISC-V, the serialization in switch_mm() is activated by resetting
the icache_stale_mask of the mm at prepare_sync_core_cmd().

Suggested-by: Palmer Dabbelt <palmer@dabbelt.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
---
 .../membarrier-sync-core/arch-support.txt | 18 +++++++++++-
 MAINTAINERS                               |  1 +
 arch/riscv/Kconfig                        |  3 ++
 arch/riscv/include/asm/membarrier.h       | 19 ++++++++++++
 arch/riscv/include/asm/sync_core.h        | 29 +++++++++++++++++++
 kernel/sched/core.c                       |  4 +++
 kernel/sched/membarrier.c                 |  4 +++
 7 files changed, 77 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/include/asm/sync_core.h
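The IPI path mentioned in the changelog ("cf. ipi_sync_core()") lives in
kernel/sched/membarrier.c and, with this patch, ends up executing FENCE.I
through the new sync_core_before_usermode().  A simplified sketch of that
handler, not part of this diff:

#include <linux/sync_core.h>	/* pulls in asm/sync_core.h when selected */
#include <asm/barrier.h>	/* smp_mb() */

/* IPI handler used by the expedited SYNC_CORE membarrier command. */
static void ipi_sync_core(void *info)
{
	/* Order prior memory accesses against the caller's smp_mb(). */
	smp_mb();
	/* On RISC-V this is FENCE.I, per the new asm/sync_core.h. */
	sync_core_before_usermode();
}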