Message ID | 20220623152948.1607295-1-christoph.muellner@vrull.eu (mailing list archive) |
---|---|
State | New, archived |
Series | [RFC,v2] riscv: Add Zawrs support for spinlocks |
Hi Christoph,

On Thursday, 23 June 2022 at 17:29:48 CEST, Christoph Muellner wrote:
> From: Christoph Müllner <christoph.muellner@vrull.eu>
>
> The current RISC-V code uses the generic ticket lock implementation,
> which calls the macros smp_cond_load_relaxed() and smp_cond_load_acquire().
> Currently, RISC-V uses the generic implementation of these macros.
> This patch introduces a RISC-V specific implementation of these
> macros that peels off the first loop iteration and modifies the waiting
> loop such that it is possible to use the WRS.STO instruction of the Zawrs
> ISA extension to stall the CPU.
>
> The resulting implementation of smp_cond_load_*() will only work for
> 32-bit or 64-bit types for RV64 and 32-bit types for RV32.
> This is caused by the restrictions of the LR instruction (RISC-V only
> has LR.W and LR.D). Compiler assertions guard this new restriction.
>
> This patch uses the existing RISC-V ISA extension framework
> to detect the presence of Zawrs at run-time.
> If available, a NOP instruction will be replaced by WRS.NTO or WRS.STO.
>
> The whole mechanism is gated by a Kconfig setting, which defaults to Y.
>
> The Zawrs specification can be found here:
> https://github.com/riscv/riscv-zawrs/blob/main/zawrs.adoc
>
> Note that the Zawrs extension is not frozen or ratified yet.
> Therefore this patch is an RFC and not intended to get merged.
>
> Changes since v1:
> * Adding "depends on !XIP_KERNEL" to RISCV_ISA_ZAWRS
> * Fixing type checking code in __smp_load_reserved*
> * Adjustments according to the specification change
>
> Signed-off-by: Christoph Müllner <christoph.muellner@vrull.eu>

With the matching QEMU patch on
- rv64 + Debian rootfs
- rv32 + 32-bit Buildroot rootfs

Tested-by: Heiko Stuebner <heiko@sntech.de>

Apart from the one nit below,
Reviewed-by: Heiko Stuebner <heiko@sntech.de>

> ---
>  arch/riscv/Kconfig | 11 ++++
>  arch/riscv/include/asm/barrier.h | 92 ++++++++++++++++++++++++++++
>  arch/riscv/include/asm/errata_list.h | 19 +++++-
>  arch/riscv/include/asm/hwcap.h | 3 +-
>  arch/riscv/kernel/cpu.c | 1 +
>  arch/riscv/kernel/cpufeature.c | 13 ++++
>  6 files changed, 136 insertions(+), 3 deletions(-)
>
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 32ffef9f6e5b..9d40569237c9 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -358,6 +358,17 @@ config RISCV_ISA_C
>
>            If you don't know what to do here, say Y.
>
> +config RISCV_ISA_ZAWRS
> +        bool "Zawrs extension support"
> +        depends on !XIP_KERNEL
> +        select RISCV_ALTERNATIVE
> +        default y
> +        help
> +          Adds support to dynamically detect the presence of the Zawrs extension
> +          (wait for reservation set) and enable its usage.
> +
> +          If you don't know what to do here, say Y.
> +
>  config RISCV_ISA_SVPBMT
>         bool "SVPBMT extension support"
>         depends on 64BIT && MMU
> diff --git a/arch/riscv/include/asm/barrier.h b/arch/riscv/include/asm/barrier.h
> index d0e24aaa2aa0..1f9628aaa7cb 100644
> --- a/arch/riscv/include/asm/barrier.h
> +++ b/arch/riscv/include/asm/barrier.h
> @@ -12,6 +12,8 @@
>
>  #ifndef __ASSEMBLY__
>
> +#include <asm/errata_list.h>
> +
>  #define nop() __asm__ __volatile__ ("nop")
>
>  #define RISCV_FENCE(p, s) \
> @@ -42,6 +44,64 @@ do { \
>          ___p1; \
>  })
>
> +#if __riscv_xlen == 64
> +

nit: I guess we could do without the extra blanks?
asm.h does so, and also the #else block below also doesn't
use them ;-) . But I guess that is more a style debate

> +#define __riscv_lrsc_word(t) \
> +        (sizeof(t) == sizeof(int) || \
> +         sizeof(t) == sizeof(long))
> +
> +#elif __riscv_xlen == 32
> +
> +#define __riscv_lrsc_word(t) \
> +        (sizeof(t) == sizeof(int))
> +
> +#else
> +#error "Unexpected __riscv_xlen"
> +#endif /* __riscv_xlen */

[...]

Thanks
Heiko
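As background for the size discussion above: the restriction exists because LR/SC is only defined for 32-bit accesses (LR.W/SC.W) and, on RV64, for 64-bit accesses (LR.D/SC.D). The following standalone sketch is not part of the patch; it mirrors the patch's __riscv_lrsc_word() check under a hypothetical name and assumes a RISC-V compiler target (so that __riscv_xlen is defined):

    /* Illustration only: which types can be watched with a reservation.
     * lrsc_capable() is a hypothetical stand-in for __riscv_lrsc_word(). */
    #include <stdint.h>

    #if __riscv_xlen == 64
    #define lrsc_capable(t)    (sizeof(t) == sizeof(int) || sizeof(t) == sizeof(long))
    #elif __riscv_xlen == 32
    #define lrsc_capable(t)    (sizeof(t) == sizeof(int))
    #endif

    _Static_assert(lrsc_capable(uint32_t), "32-bit types always have an LR/SC form");
    #if __riscv_xlen == 64
    _Static_assert(lrsc_capable(uint64_t), "64-bit types work on RV64 via LR.D");
    #endif
    _Static_assert(!lrsc_capable(uint16_t), "16-bit types have no LR/SC form");

In the patch, compiletime_assert_atomic_lrsc_type() turns any other size into a build error when smp_cond_load_*() is used on such a type.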
On Thu, Jun 23, 2022 at 6:32 PM Heiko Stübner <heiko@sntech.de> wrote:
>
> Hi Christoph,
>
> On Thursday, 23 June 2022 at 17:29:48 CEST, Christoph Muellner wrote:
> > From: Christoph Müllner <christoph.muellner@vrull.eu>
> >
> > The current RISC-V code uses the generic ticket lock implementation,
> > which calls the macros smp_cond_load_relaxed() and smp_cond_load_acquire().
> > Currently, RISC-V uses the generic implementation of these macros.
> > This patch introduces a RISC-V specific implementation of these
> > macros that peels off the first loop iteration and modifies the waiting
> > loop such that it is possible to use the WRS.STO instruction of the Zawrs
> > ISA extension to stall the CPU.
> >
> > The resulting implementation of smp_cond_load_*() will only work for
> > 32-bit or 64-bit types for RV64 and 32-bit types for RV32.
> > This is caused by the restrictions of the LR instruction (RISC-V only
> > has LR.W and LR.D). Compiler assertions guard this new restriction.
> >
> > This patch uses the existing RISC-V ISA extension framework
> > to detect the presence of Zawrs at run-time.
> > If available, a NOP instruction will be replaced by WRS.NTO or WRS.STO.
> >
> > The whole mechanism is gated by a Kconfig setting, which defaults to Y.
> >
> > The Zawrs specification can be found here:
> > https://github.com/riscv/riscv-zawrs/blob/main/zawrs.adoc
> >
> > Note that the Zawrs extension is not frozen or ratified yet.
> > Therefore this patch is an RFC and not intended to get merged.
> >
> > Changes since v1:
> > * Adding "depends on !XIP_KERNEL" to RISCV_ISA_ZAWRS
> > * Fixing type checking code in __smp_load_reserved*
> > * Adjustments according to the specification change
> >
> > Signed-off-by: Christoph Müllner <christoph.muellner@vrull.eu>
>
> With the matching QEMU patch on
> - rv64 + Debian rootfs
> - rv32 + 32-bit Buildroot rootfs
>
> Tested-by: Heiko Stuebner <heiko@sntech.de>
>
> Apart from the one nit below,
> Reviewed-by: Heiko Stuebner <heiko@sntech.de>
>
> > ---
> >  arch/riscv/Kconfig | 11 ++++
> >  arch/riscv/include/asm/barrier.h | 92 ++++++++++++++++++++++++++++
> >  arch/riscv/include/asm/errata_list.h | 19 +++++-
> >  arch/riscv/include/asm/hwcap.h | 3 +-
> >  arch/riscv/kernel/cpu.c | 1 +
> >  arch/riscv/kernel/cpufeature.c | 13 ++++
> >  6 files changed, 136 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> > index 32ffef9f6e5b..9d40569237c9 100644
> > --- a/arch/riscv/Kconfig
> > +++ b/arch/riscv/Kconfig
> > @@ -358,6 +358,17 @@ config RISCV_ISA_C
> >
> >            If you don't know what to do here, say Y.
> >
> > +config RISCV_ISA_ZAWRS
> > +        bool "Zawrs extension support"
> > +        depends on !XIP_KERNEL
> > +        select RISCV_ALTERNATIVE
> > +        default y
> > +        help
> > +          Adds support to dynamically detect the presence of the Zawrs extension
> > +          (wait for reservation set) and enable its usage.
> > +
> > +          If you don't know what to do here, say Y.
> > +
> >  config RISCV_ISA_SVPBMT
> >         bool "SVPBMT extension support"
> >         depends on 64BIT && MMU
> > diff --git a/arch/riscv/include/asm/barrier.h b/arch/riscv/include/asm/barrier.h
> > index d0e24aaa2aa0..1f9628aaa7cb 100644
> > --- a/arch/riscv/include/asm/barrier.h
> > +++ b/arch/riscv/include/asm/barrier.h
> > @@ -12,6 +12,8 @@
> >
> >  #ifndef __ASSEMBLY__
> >
> > +#include <asm/errata_list.h>
> > +
> >  #define nop() __asm__ __volatile__ ("nop")
> >
> >  #define RISCV_FENCE(p, s) \
> > @@ -42,6 +44,64 @@ do { \
> >          ___p1; \
> >  })
> >
> > +#if __riscv_xlen == 64
> > +
>
> nit: I guess we could do without the extra blanks?
> asm.h does so, and also the #else block below also doesn't
> use them ;-) . But I guess that is more a style debate

Ok, will remove the empty lines in a new revision.
Thanks!

> > +#define __riscv_lrsc_word(t) \
> > +        (sizeof(t) == sizeof(int) || \
> > +         sizeof(t) == sizeof(long))
> > +
> > +#elif __riscv_xlen == 32
> > +
> > +#define __riscv_lrsc_word(t) \
> > +        (sizeof(t) == sizeof(int))
> > +
> > +#else
> > +#error "Unexpected __riscv_xlen"
> > +#endif /* __riscv_xlen */
>
> [...]
>
> Thanks
> Heiko
>
>
From: Christoph Muellner
> Sent: 23 June 2022 16:30
>
> From: Christoph Müllner <christoph.muellner@vrull.eu>
>
> The current RISC-V code uses the generic ticket lock implementation,
> which calls the macros smp_cond_load_relaxed() and smp_cond_load_acquire().
> Currently, RISC-V uses the generic implementation of these macros.
> This patch introduces a RISC-V specific implementation of these
> macros that peels off the first loop iteration and modifies the waiting
> loop such that it is possible to use the WRS.STO instruction of the Zawrs
> ISA extension to stall the CPU.
>
> The resulting implementation of smp_cond_load_*() will only work for
> 32-bit or 64-bit types for RV64 and 32-bit types for RV32.
> This is caused by the restrictions of the LR instruction (RISC-V only
> has LR.W and LR.D). Compiler assertions guard this new restriction.
>
> This patch uses the existing RISC-V ISA extension framework
> to detect the presence of Zawrs at run-time.
> If available, a NOP instruction will be replaced by WRS.NTO or WRS.STO.
>
> The whole mechanism is gated by a Kconfig setting, which defaults to Y.
>
> The Zawrs specification can be found here:
> https://github.com/riscv/riscv-zawrs/blob/main/zawrs.adoc
>
> Note that the Zawrs extension is not frozen or ratified yet.
> Therefore this patch is an RFC and not intended to get merged.
>
> Changes since v1:
> * Adding "depends on !XIP_KERNEL" to RISCV_ISA_ZAWRS
> * Fixing type checking code in __smp_load_reserved*
> * Adjustments according to the specification change
>
> Signed-off-by: Christoph Müllner <christoph.muellner@vrull.eu>
> ---
>  arch/riscv/Kconfig | 11 ++++
>  arch/riscv/include/asm/barrier.h | 92 ++++++++++++++++++++++++++++
>  arch/riscv/include/asm/errata_list.h | 19 +++++-
>  arch/riscv/include/asm/hwcap.h | 3 +-
>  arch/riscv/kernel/cpu.c | 1 +
>  arch/riscv/kernel/cpufeature.c | 13 ++++
>  6 files changed, 136 insertions(+), 3 deletions(-)
>
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 32ffef9f6e5b..9d40569237c9 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -358,6 +358,17 @@ config RISCV_ISA_C
>
>            If you don't know what to do here, say Y.
>
> +config RISCV_ISA_ZAWRS
> +        bool "Zawrs extension support"
> +        depends on !XIP_KERNEL
> +        select RISCV_ALTERNATIVE
> +        default y
> +        help
> +          Adds support to dynamically detect the presence of the Zawrs extension
> +          (wait for reservation set) and enable its usage.
> +
> +          If you don't know what to do here, say Y.
> +
>  config RISCV_ISA_SVPBMT
>         bool "SVPBMT extension support"
>         depends on 64BIT && MMU
> diff --git a/arch/riscv/include/asm/barrier.h b/arch/riscv/include/asm/barrier.h
> index d0e24aaa2aa0..1f9628aaa7cb 100644
> --- a/arch/riscv/include/asm/barrier.h
> +++ b/arch/riscv/include/asm/barrier.h
> @@ -12,6 +12,8 @@
>
>  #ifndef __ASSEMBLY__
>
> +#include <asm/errata_list.h>
> +
>  #define nop() __asm__ __volatile__ ("nop")
>
>  #define RISCV_FENCE(p, s) \
> @@ -42,6 +44,64 @@ do { \
>          ___p1; \
>  })
>
> +#if __riscv_xlen == 64
> +
> +#define __riscv_lrsc_word(t) \
> +        (sizeof(t) == sizeof(int) || \
> +         sizeof(t) == sizeof(long))

That line doesn't need splitting.

> +#elif __riscv_xlen == 32
> +
> +#define __riscv_lrsc_word(t) \
> +        (sizeof(t) == sizeof(int))

Can't you use the same test as 64bit?
Both int and long have size 4 - so the test is fine.

> +
> +#else
> +#error "Unexpected __riscv_xlen"
> +#endif /* __riscv_xlen */
> +
> +#define compiletime_assert_atomic_lrsc_type(t) \
> +        compiletime_assert(__riscv_lrsc_word(t), \
> +                           "Need type compatible with LR/SC instructions.")

I think I'd try to get the type name into the error message.
Either ##t or STR(t) should be right.

> +
> +#define ___smp_load_reservedN(pfx, ptr) \
> +({ \
> +        typeof(*ptr) ___p1; \
> +        __asm__ __volatile__ ("lr." pfx " %[p], %[c]\n" \
> +                              : [p]"=&r" (___p1), [c]"+A"(*ptr)); \
> +        ___p1; \
> +})

Isn't that missing the memory reference?
It either needs an extra memory parameter for 'ptr' or
a full/partial memory clobber.

> +
> +#define ___smp_load_reserved32(ptr) \
> +        ___smp_load_reservedN("w", ptr)
> +
> +#define ___smp_load_reserved64(ptr) \
> +        ___smp_load_reservedN("d", ptr)
> +
> +#define __smp_load_reserved_relaxed(ptr) \
> +({ \
> +        typeof(*ptr) ___p1; \
> +        compiletime_assert_atomic_lrsc_type(*ptr); \
> +        if (sizeof(*ptr) == 4) { \
> +                ___p1 = ___smp_load_reserved32(ptr); \
> +        } else { \
> +                ___p1 = ___smp_load_reserved64(ptr); \
> +        } \

I think replacing all that with:
        typeof(*ptr) ___p1; \
        if (sizeof(*ptr) == sizeof(int)) \
                ___p1 = ___smp_load_reservedN("w", ptr); \
        else if (sizeof(*ptr) == sizeof(long)) \
                ___p1 = ___smp_load_reservedN("d", ptr); \
        else compiletime_assert(1, \
                "Need type compatible with LR/SC instructions.");

Might make it easier to read.

> +        ___p1; \
> +})
> +
> +#define __smp_load_reserved_acquire(ptr) \
> +({ \
> +        typeof(*ptr) ___p1; \
> +        compiletime_assert_atomic_lrsc_type(*ptr); \
> +        if (sizeof(*ptr) == 4) { \
> +                ___p1 = ___smp_load_reserved32(ptr); \
> +        } else { \
> +                ___p1 = ___smp_load_reserved64(ptr); \
> +        } \

Isn't that identical to __smp_load_reserved_relaxed()?
No point replicating it.

...

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
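On David's suggestion of getting the type name into the error message: one possible shape is to stringify the macro argument, similar to how BUILD_BUG_ON_MSG() embeds the failing condition. This is an untested sketch, not part of the patch; it assumes the kernel's __stringify() helper from <linux/stringify.h> and relies on adjacent string-literal concatenation, and since the macro is invoked on an expression like *ptr, the message would name the offending expression rather than its type:

    /* Sketch: make the build error identify what failed the LR/SC size check. */
    #include <linux/stringify.h>

    #define compiletime_assert_atomic_lrsc_type(t) \
            compiletime_assert(__riscv_lrsc_word(t), \
                    "'" __stringify(t) "' is not compatible with LR/SC instructions.")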
Hi David,

On Friday, 24 June 2022 at 10:52:40 CEST, David Laight wrote:
> From: Christoph Muellner
> > Sent: 23 June 2022 16:30
> >
> > From: Christoph Müllner <christoph.muellner@vrull.eu>
> >
> > The current RISC-V code uses the generic ticket lock implementation,
> > which calls the macros smp_cond_load_relaxed() and smp_cond_load_acquire().
> > Currently, RISC-V uses the generic implementation of these macros.
> > This patch introduces a RISC-V specific implementation of these
> > macros that peels off the first loop iteration and modifies the waiting
> > loop such that it is possible to use the WRS.STO instruction of the Zawrs
> > ISA extension to stall the CPU.
> >
> > The resulting implementation of smp_cond_load_*() will only work for
> > 32-bit or 64-bit types for RV64 and 32-bit types for RV32.
> > This is caused by the restrictions of the LR instruction (RISC-V only
> > has LR.W and LR.D). Compiler assertions guard this new restriction.
> >
> > This patch uses the existing RISC-V ISA extension framework
> > to detect the presence of Zawrs at run-time.
> > If available, a NOP instruction will be replaced by WRS.NTO or WRS.STO.
> >
> > The whole mechanism is gated by a Kconfig setting, which defaults to Y.
> >
> > The Zawrs specification can be found here:
> > https://github.com/riscv/riscv-zawrs/blob/main/zawrs.adoc
> >
> > Note that the Zawrs extension is not frozen or ratified yet.
> > Therefore this patch is an RFC and not intended to get merged.
> >
> > Changes since v1:
> > * Adding "depends on !XIP_KERNEL" to RISCV_ISA_ZAWRS
> > * Fixing type checking code in __smp_load_reserved*
> > * Adjustments according to the specification change
> >
> > Signed-off-by: Christoph Müllner <christoph.muellner@vrull.eu>

I'm only addressing the point here where I don't agree with you :-)
Everything else will be in the next version.

[...]

> > +
> > +#define ___smp_load_reservedN(pfx, ptr) \
> > +({ \
> > +        typeof(*ptr) ___p1; \
> > +        __asm__ __volatile__ ("lr." pfx " %[p], %[c]\n" \
> > +                              : [p]"=&r" (___p1), [c]"+A"(*ptr)); \
> > +        ___p1; \
> > +})
>
> Isn't that missing the memory reference?
> It either needs an extra memory parameter for 'ptr' or
> a full/partial memory clobber.

Shouldn't the "+A" for *ptr take care of that?

From the gcc docs [0]:
    ‘+’ Means that this operand is both read and written by the instruction.

Heiko

[0] https://gcc.gnu.org/onlinedocs/gcc/Modifiers.html#index-_002b-in-constraint
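To illustrate the constraint question being discussed, here is a hedged sketch (not code from the patch, and only meaningful on a RISC-V target). The first variant mirrors the patch: the "+A" in/out operand tells the compiler that this specific object is both read and written, so no separate clobber is given. The second variant shows the blunter alternative David mentions, a plain address operand plus a full "memory" clobber:

    /* Illustrative only: two ways to describe the memory access of LR.W. */
    static inline int lr_w_operand(int *ptr)
    {
            int val;

            /* variant 1: "+A" in/out memory operand, as in the patch */
            __asm__ __volatile__ ("lr.w %[p], %[c]\n"
                                  : [p] "=&r" (val), [c] "+A" (*ptr));
            return val;
    }

    static inline int lr_w_clobber(int *ptr)
    {
            int val;

            /* variant 2: address passed in a register, all memory clobbered */
            __asm__ __volatile__ ("lr.w %0, (%1)\n"
                                  : "=&r" (val)
                                  : "r" (ptr)
                                  : "memory");
            return val;
    }

The trade-off is scope: the "+A" operand only constrains accesses to *ptr, while the "memory" clobber forces the compiler to assume any memory may have changed.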
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 32ffef9f6e5b..9d40569237c9 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -358,6 +358,17 @@ config RISCV_ISA_C
 
 	  If you don't know what to do here, say Y.
 
+config RISCV_ISA_ZAWRS
+	bool "Zawrs extension support"
+	depends on !XIP_KERNEL
+	select RISCV_ALTERNATIVE
+	default y
+	help
+	  Adds support to dynamically detect the presence of the Zawrs extension
+	  (wait for reservation set) and enable its usage.
+
+	  If you don't know what to do here, say Y.
+
 config RISCV_ISA_SVPBMT
 	bool "SVPBMT extension support"
 	depends on 64BIT && MMU
diff --git a/arch/riscv/include/asm/barrier.h b/arch/riscv/include/asm/barrier.h
index d0e24aaa2aa0..1f9628aaa7cb 100644
--- a/arch/riscv/include/asm/barrier.h
+++ b/arch/riscv/include/asm/barrier.h
@@ -12,6 +12,8 @@
 
 #ifndef __ASSEMBLY__
 
+#include <asm/errata_list.h>
+
 #define nop()	__asm__ __volatile__ ("nop")
 
 #define RISCV_FENCE(p, s) \
@@ -42,6 +44,64 @@ do { \
 	___p1; \
 })
 
+#if __riscv_xlen == 64
+
+#define __riscv_lrsc_word(t) \
+	(sizeof(t) == sizeof(int) || \
+	 sizeof(t) == sizeof(long))
+
+#elif __riscv_xlen == 32
+
+#define __riscv_lrsc_word(t) \
+	(sizeof(t) == sizeof(int))
+
+#else
+#error "Unexpected __riscv_xlen"
+#endif /* __riscv_xlen */
+
+#define compiletime_assert_atomic_lrsc_type(t) \
+	compiletime_assert(__riscv_lrsc_word(t), \
+			   "Need type compatible with LR/SC instructions.")
+
+#define ___smp_load_reservedN(pfx, ptr) \
+({ \
+	typeof(*ptr) ___p1; \
+	__asm__ __volatile__ ("lr." pfx " %[p], %[c]\n" \
+			      : [p]"=&r" (___p1), [c]"+A"(*ptr)); \
+	___p1; \
+})
+
+#define ___smp_load_reserved32(ptr) \
+	___smp_load_reservedN("w", ptr)
+
+#define ___smp_load_reserved64(ptr) \
+	___smp_load_reservedN("d", ptr)
+
+#define __smp_load_reserved_relaxed(ptr) \
+({ \
+	typeof(*ptr) ___p1; \
+	compiletime_assert_atomic_lrsc_type(*ptr); \
+	if (sizeof(*ptr) == 4) { \
+		___p1 = ___smp_load_reserved32(ptr); \
+	} else { \
+		___p1 = ___smp_load_reserved64(ptr); \
+	} \
+	___p1; \
+})
+
+#define __smp_load_reserved_acquire(ptr) \
+({ \
+	typeof(*ptr) ___p1; \
+	compiletime_assert_atomic_lrsc_type(*ptr); \
+	if (sizeof(*ptr) == 4) { \
+		___p1 = ___smp_load_reserved32(ptr); \
+	} else { \
+		___p1 = ___smp_load_reserved64(ptr); \
+	} \
+	RISCV_FENCE(r,rw); \
+	___p1; \
+})
+
 /*
  * This is a very specific barrier: it's currently only used in two places in
  * the kernel, both in the scheduler. See include/linux/spinlock.h for the two
@@ -69,6 +129,38 @@ do { \
  */
 #define smp_mb__after_spinlock()	RISCV_FENCE(iorw,iorw)
 
+#define smp_cond_load_relaxed(ptr, cond_expr) \
+({ \
+	typeof(ptr) __PTR = (ptr); \
+	__unqual_scalar_typeof(*ptr) VAL; \
+	VAL = READ_ONCE(*__PTR); \
+	if (!cond_expr) { \
+		for (;;) { \
+			VAL = __smp_load_reserved_relaxed(__PTR); \
+			if (cond_expr) \
+				break; \
+			ALT_WRS_STO(); \
+		} \
+	} \
+	(typeof(*ptr))VAL; \
+})
+
+#define smp_cond_load_acquire(ptr, cond_expr) \
+({ \
+	typeof(ptr) __PTR = (ptr); \
+	__unqual_scalar_typeof(*ptr) VAL; \
+	VAL = smp_load_acquire(__PTR); \
+	if (!cond_expr) { \
+		for (;;) { \
+			VAL = __smp_load_reserved_acquire(__PTR); \
+			if (cond_expr) \
+				break; \
+			ALT_WRS_STO(); \
+		} \
+	} \
+	(typeof(*ptr))VAL; \
+})
+
 #include <asm-generic/barrier.h>
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/riscv/include/asm/errata_list.h b/arch/riscv/include/asm/errata_list.h
index 9e2888dbb5b1..e15af9986b1a 100644
--- a/arch/riscv/include/asm/errata_list.h
+++ b/arch/riscv/include/asm/errata_list.h
@@ -19,8 +19,9 @@
 #define	ERRATA_THEAD_NUMBER 1
 #endif
 
-#define	CPUFEATURE_SVPBMT 0
-#define	CPUFEATURE_NUMBER 1
+#define	CPUFEATURE_ZAWRS 0
+#define	CPUFEATURE_SVPBMT 1
+#define	CPUFEATURE_NUMBER 2
 
 #ifdef __ASSEMBLY__
 
@@ -42,6 +43,20 @@ asm(ALTERNATIVE("sfence.vma %0", "sfence.vma", SIFIVE_VENDOR_ID, \
 		ERRATA_SIFIVE_CIP_1200, CONFIG_ERRATA_SIFIVE_CIP_1200) \
 		: : "r" (addr) : "memory")
 
+#define ZAWRS_WRS_NTO	".long 0x00d00073"
+#define ALT_WRS_NTO() \
+asm volatile(ALTERNATIVE( \
+	"nop\n\t", \
+	ZAWRS_WRS_NTO "\n\t", \
+	0, CPUFEATURE_ZAWRS, CONFIG_RISCV_ISA_ZAWRS))
+
+#define ZAWRS_WRS_STO	".long 0x01d00073"
+#define ALT_WRS_STO() \
+asm volatile(ALTERNATIVE( \
+	"nop\n\t", \
+	ZAWRS_WRS_STO "\n\t", \
+	0, CPUFEATURE_ZAWRS, CONFIG_RISCV_ISA_ZAWRS))
+
 /*
  * _val is marked as "will be overwritten", so need to set it to 0
  * in the default case.
diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
index 4e2486881840..c7dd8cc38bec 100644
--- a/arch/riscv/include/asm/hwcap.h
+++ b/arch/riscv/include/asm/hwcap.h
@@ -51,7 +51,8 @@ extern unsigned long elf_hwcap;
  * available logical extension id.
  */
 enum riscv_isa_ext_id {
-	RISCV_ISA_EXT_SSCOFPMF = RISCV_ISA_EXT_BASE,
+	RISCV_ISA_EXT_ZAWRS = RISCV_ISA_EXT_BASE,
+	RISCV_ISA_EXT_SSCOFPMF,
 	RISCV_ISA_EXT_SVPBMT,
 	RISCV_ISA_EXT_ID_MAX = RISCV_ISA_EXT_MAX,
 };
diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c
index fba9e9f46a8c..6c3a10ff5358 100644
--- a/arch/riscv/kernel/cpu.c
+++ b/arch/riscv/kernel/cpu.c
@@ -87,6 +87,7 @@ int riscv_of_parent_hartid(struct device_node *node)
  * extensions by an underscore.
  */
 static struct riscv_isa_ext_data isa_ext_arr[] = {
+	__RISCV_ISA_EXT_DATA(zawrs, RISCV_ISA_EXT_ZAWRS),
 	__RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF),
 	__RISCV_ISA_EXT_DATA(svpbmt, RISCV_ISA_EXT_SVPBMT),
 	__RISCV_ISA_EXT_DATA("", RISCV_ISA_EXT_MAX),
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index 12b05ce164bb..ce610d8a0e8d 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -199,6 +199,7 @@ void __init riscv_fill_hwcap(void)
 		} else {
 			SET_ISA_EXT_MAP("sscofpmf", RISCV_ISA_EXT_SSCOFPMF);
 			SET_ISA_EXT_MAP("svpbmt", RISCV_ISA_EXT_SVPBMT);
+			SET_ISA_EXT_MAP("zawrs", RISCV_ISA_EXT_ZAWRS);
 		}
 #undef SET_ISA_EXT_MAP
 	}
@@ -250,6 +251,14 @@ struct cpufeature_info {
 	bool (*check_func)(unsigned int stage);
 };
 
+static bool __init_or_module cpufeature_zawrs_check_func(unsigned int stage)
+{
+	if (stage == RISCV_ALTERNATIVES_EARLY_BOOT)
+		return false;
+
+	return riscv_isa_extension_available(NULL, ZAWRS);
+}
+
 static bool __init_or_module cpufeature_svpbmt_check_func(unsigned int stage)
 {
 #ifdef CONFIG_RISCV_ISA_SVPBMT
@@ -266,6 +275,10 @@ static bool __init_or_module cpufeature_svpbmt_check_func(unsigned int stage)
 
 static const struct cpufeature_info __initdata_or_module
 cpufeature_list[CPUFEATURE_NUMBER] = {
+	{
+		.name = "zawrs",
+		.check_func = cpufeature_zawrs_check_func
+	},
 	{
 		.name = "svpbmt",
 		.check_func = cpufeature_svpbmt_check_func
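For context on where these macros take effect: the commit message notes that RISC-V uses the generic ticket lock, whose slowpath spins on the lock word via atomic_cond_read_acquire(), which expands to smp_cond_load_acquire(). The sketch below is an abridged, approximate rendition of that slowpath (loosely based on include/asm-generic/spinlock.h at the time; not a verbatim copy, and it assumes kernel context for u32/u16, arch_spinlock_t and the atomic helpers). It shows the loop that, with this patch and Zawrs detected at boot, stalls in WRS.STO between checks instead of busy-spinning:

    /* Approximate sketch of the generic ticket-lock acquire path. */
    static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
    {
            u32 val = atomic_fetch_add(1 << 16, lock);      /* take a ticket */
            u16 ticket = val >> 16;

            if (ticket == (u16)val)                         /* uncontended case */
                    return;

            /* Wait until the current owner hands over to our ticket number.
             * This is the smp_cond_load_acquire() loop that the patch teaches
             * to wait with WRS.STO instead of spinning. */
            atomic_cond_read_acquire(lock, ticket == (u16)VAL);
            smp_mb();
    }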