Message ID | 20191114113932.26186-1-richard.henderson@linaro.org (mailing list archive) |
---|---|
State | New, archived |
Series | [v7] arm64: Implement archrandom.h for ARMv8.5-RNG |
On Thu, Nov 14, 2019 at 12:39:32PM +0100, richard.henderson@linaro.org wrote:
> +bool arch_get_random_seed_long(unsigned long *v)
> +{
> +	bool ok;
> +
> +	if (static_branch_likely(&arm64_const_caps_ready)) {
> +		if (__cpus_have_const_cap(ARM64_HAS_RNG))
> +			return arm64_rndr(v);
> +		return false;
> +	}
> +
> +	/*
> +	 * Before const_caps_ready, check the current cpu.
> +	 * This will generally be the boot cpu for rand_initialize().
> +	 */
> +	preempt_disable_notrace();
> +	ok = this_cpu_has_cap(ARM64_HAS_RNG) && arm64_rndr(v);
> +	preempt_enable_notrace();
> +
> +	return ok;
> +}

As I asked previously, please separate the common case and the boot-cpu
init-time case into separate functions.

The runtime function should just check the RNG cap before using the
instruction, without any preemption check or explicit check of
arm64_const_caps_ready. i.e.

static bool arm64_rndr(unsigned long *v)
{
	bool ok;

	if (!cpus_have_const_cap(ARM64_HAS_RNG))
		return false;

	/*
	 * Reads of RNDR set PSTATE.NZCV to 0b0000 on success,
	 * and set PSTATE.NZCV to 0b0100 otherwise.
	 */
	asm volatile(
		__mrs_s("%0", SYS_RNDR_EL0) "\n"
	"	cset %w1, ne\n"
	: "=r" (*v), "=r" (ok)
	:
	: "cc");

	return ok;
}

Any boot-time seeding should be in a separate function that external
callers cannot invoke at runtime. Either have an arch function that the
common random code calls at init time on the boot CPU, or have some
arch_add_foo_entropy() function that the arm64 code can call somewhere
around setup_arch().

Thanks,
Mark.
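[A minimal sketch of what the runtime hook could then look like, assuming the cap check stays inside arm64_rndr() as in Mark's snippet above; this wrapper is an illustration, not code from the thread or the posted patch.]

/*
 * Illustrative only: with the cap check inside arm64_rndr(), the
 * runtime hook needs no preemption handling and no explicit
 * arm64_const_caps_ready test.
 */
bool arch_get_random_seed_long(unsigned long *v)
{
	return arm64_rndr(v);
}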
On 11/14/19 3:25 PM, Mark Rutland wrote:
> On Thu, Nov 14, 2019 at 12:39:32PM +0100, richard.henderson@linaro.org wrote:
>> +bool arch_get_random_seed_long(unsigned long *v)
>> +{
>> +	bool ok;
>> +
>> +	if (static_branch_likely(&arm64_const_caps_ready)) {
>> +		if (__cpus_have_const_cap(ARM64_HAS_RNG))
>> +			return arm64_rndr(v);
>> +		return false;
>> +	}
>> +
>> +	/*
>> +	 * Before const_caps_ready, check the current cpu.
>> +	 * This will generally be the boot cpu for rand_initialize().
>> +	 */
>> +	preempt_disable_notrace();
>> +	ok = this_cpu_has_cap(ARM64_HAS_RNG) && arm64_rndr(v);
>> +	preempt_enable_notrace();
>> +
>> +	return ok;
>> +}
>
> As I asked previously, please separate the common case and the boot-cpu
> init-time case into separate functions.

Ok, beyond just making arch_get_random_seed_long be a function pointer, how?
I honestly don't understand how what you want is different from what's here.

> The runtime function should just check the RNG cap before using the
> instruction, without any preemption check or explicit check of
> arm64_const_caps_ready. i.e.
>
> static bool arm64_rndr(unsigned long *v)
> {
> 	bool ok;
>
> 	if (!cpus_have_const_cap(ARM64_HAS_RNG))
> 		return false;
>
> 	/*
> 	 * Reads of RNDR set PSTATE.NZCV to 0b0000 on success,
> 	 * and set PSTATE.NZCV to 0b0100 otherwise.
> 	 */
> 	asm volatile(
> 		__mrs_s("%0", SYS_RNDR_EL0) "\n"
> 	"	cset %w1, ne\n"
> 	: "=r" (*v), "=r" (ok)
> 	:
> 	: "cc");
>
> 	return ok;

This is exactly what I have above, in arch_get_random_seed_long(), in the
arm64_const_caps_ready case.

BTW, you can't stick the cpus_have_const_cap check in arm64_rndr(), or it
isn't usable before setup_cpu_features(). The embedded cpus_have_cap()
check will not pass for the boot cpu alone, unless we use
ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE, which does not have the semantics
that you have requested in previous review rounds.

Which is *why* I wrote the test exactly as I did, so that when
!arm64_const_caps_ready, I can use a different test than cpus_have_cap().

> Any boot-time seeding should be in a separate function that external
> callers cannot invoke at runtime. Either have an arch function that the
> common random code calls at init time on the boot CPU, or have some
> arch_add_foo_entropy() function that the arm64 code can call somewhere
> around setup_arch().

What "external callers" are you talking about?

My boot-time code above can only be reached before arm64_const_caps_ready
is set, at which point the branch is patched out. So after boot time,
"external callers" only get

	if (__cpus_have_const_cap(ARM64_HAS_RNG))
		return arm64_rndr(v);
	return false;

Which to my mind satisfies your "cannot invoke" clause.

Anyway, the preempt_disable is present on my boot path because preempt *is*
enabled during rand_initialize(). If I do not disable it, I do trigger the
warning within this_cpu_has_cap().

As for arch_add_boot_entropy() or whatnot... now you're asking for
non-trivial changes to the common drivers/char/random.c code. I'm not keen
on designing such a thing when I really don't know what the requirements
are. In particular, how it would differ from what I have.

Color me confused.


r~
On Thu, Nov 14, 2019 at 07:11:29PM +0100, Richard Henderson wrote:
> On 11/14/19 3:25 PM, Mark Rutland wrote:

> > As I asked previously, please separate the common case and the boot-cpu
> > init-time case into separate functions.

> Ok, beyond just making arch_get_random_seed_long be a function pointer, how?
> I honestly don't understand how what you want is different from what's here.

I believe that what Mark is saying when he asks you to change the arch
hooks is that you should change the interface between the core code and
the architecture code so that the runtime and early-init interfaces are
separate, with the goal of making the separation between the two clear.

> > Any boot-time seeding should be in a separate function that external
> > callers cannot invoke at runtime. Either have an arch function that the
> > common random code calls at init time on the boot CPU, or have some
> > arch_add_foo_entropy() function that the arm64 code can call somewhere
> > around setup_arch().

> What "external callers" are you talking about?

Again, Mark can correct me if I'm wrong here, but: anything that isn't
the arch code or the core random code.

> As for arch_add_boot_entropy() or whatnot... now you're asking for
> non-trivial changes to the common drivers/char/random.c code. I'm not keen
> on designing such a thing when I really don't know what the requirements
> are. In particular, how it would differ from what I have.

The biggest issue here is one of reviewability and maintainability around
early use of the capabilities code. Without being really up to speed on
those interfaces it can be hard to tell which conditions hold at which
point in boot and which interfaces are safe to use there, which makes
things harder to work with. This is especially true when we start to mix
in things like preemption disables, which also need a lot of attention.
Avoiding mixing code that needs to run in both init and normal runtime
contexts in the same function makes things easier to understand.

Looking briefly at the random code, you may find that the existing
add_device_randomness() is already close to the arch_add_foo_entropy()
suggested by Mark, if not actually good enough; data injected with it is
not credited as entropy, but it does feed into the pool.
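[A hedged sketch of the add_device_randomness() route mentioned above. The function name arm64_boot_rndr_seed() and the idea of calling it once from the arm64 setup code are assumptions for illustration, not part of the posted patch; add_device_randomness() and this_cpu_has_cap() are the existing interfaces referred to in the thread, and arm64_rndr() is the helper from the patch below, assumed callable from here.]

#include <linux/random.h>	/* add_device_randomness() */
#include <asm/cpufeature.h>	/* this_cpu_has_cap() */

/*
 * Illustrative sketch: seed the pool once on the boot CPU, e.g. from
 * somewhere around setup_arch(). Assumes this runs before preemption
 * is enabled, so this_cpu_has_cap() can be called directly. The data
 * is mixed into the pool but not credited as entropy.
 */
static void __init arm64_boot_rndr_seed(void)
{
	unsigned long seed;

	if (this_cpu_has_cap(ARM64_HAS_RNG) && arm64_rndr(&seed))
		add_device_randomness(&seed, sizeof(seed));
}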
diff --git a/Documentation/arm64/cpu-feature-registers.rst b/Documentation/arm64/cpu-feature-registers.rst
index 2955287e9acc..78d6f5c6e824 100644
--- a/Documentation/arm64/cpu-feature-registers.rst
+++ b/Documentation/arm64/cpu-feature-registers.rst
@@ -117,6 +117,8 @@ infrastructure:
      +------------------------------+---------+---------+
      | Name                         |  bits   | visible |
      +------------------------------+---------+---------+
+     | RNDR                         | [63-60] |    y    |
+     +------------------------------+---------+---------+
      | TS                           | [55-52] |    y    |
      +------------------------------+---------+---------+
      | FHM                          | [51-48] |    y    |
diff --git a/arch/arm64/include/asm/archrandom.h b/arch/arm64/include/asm/archrandom.h
new file mode 100644
index 000000000000..9b940d976e70
--- /dev/null
+++ b/arch/arm64/include/asm/archrandom.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_ARCHRANDOM_H
+#define _ASM_ARCHRANDOM_H
+
+#ifdef CONFIG_ARCH_RANDOM
+
+static inline bool __must_check arch_get_random_long(unsigned long *v)
+{
+	return false;
+}
+
+static inline bool __must_check arch_get_random_int(unsigned int *v)
+{
+	return false;
+}
+
+bool __must_check arch_get_random_seed_long(unsigned long *v);
+
+static inline bool __must_check arch_get_random_seed_int(unsigned int *v)
+{
+	unsigned long val;
+	bool ok = arch_get_random_seed_long(&val);
+
+	*v = val;
+	return ok;
+}
+
+#endif /* CONFIG_ARCH_RANDOM */
+#endif /* _ASM_ARCHRANDOM_H */
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index ac1dbca3d0cd..1dd7644bc59a 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -54,7 +54,8 @@
 #define ARM64_WORKAROUND_1463225		44
 #define ARM64_WORKAROUND_CAVIUM_TX2_219_TVM	45
 #define ARM64_WORKAROUND_CAVIUM_TX2_219_PRFM	46
+#define ARM64_HAS_RNG				47
 
-#define ARM64_NCAPS				47
+#define ARM64_NCAPS				48
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 6e919fafb43d..5e718f279469 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -365,6 +365,9 @@
 #define SYS_CTR_EL0			sys_reg(3, 3, 0, 0, 1)
 #define SYS_DCZID_EL0			sys_reg(3, 3, 0, 0, 7)
 
+#define SYS_RNDR_EL0			sys_reg(3, 3, 2, 4, 0)
+#define SYS_RNDRRS_EL0			sys_reg(3, 3, 2, 4, 1)
+
 #define SYS_PMCR_EL0			sys_reg(3, 3, 9, 12, 0)
 #define SYS_PMCNTENSET_EL0		sys_reg(3, 3, 9, 12, 1)
 #define SYS_PMCNTENCLR_EL0		sys_reg(3, 3, 9, 12, 2)
@@ -539,6 +542,7 @@
 			 ENDIAN_SET_EL1 | SCTLR_EL1_UCI | SCTLR_EL1_RES1)
 
 /* id_aa64isar0 */
+#define ID_AA64ISAR0_RNDR_SHIFT		60
 #define ID_AA64ISAR0_TS_SHIFT		52
 #define ID_AA64ISAR0_FHM_SHIFT		48
 #define ID_AA64ISAR0_DP_SHIFT		44
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 80f459ad0190..8c3be148c3a2 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -119,6 +119,7 @@ static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
  * sync with the documentation of the CPU feature register ABI.
  */
 static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RNDR_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_TS_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_FHM_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
@@ -1565,6 +1566,18 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sign = FTR_UNSIGNED,
 		.min_field_value = 1,
 	},
+#endif
+#ifdef CONFIG_ARCH_RANDOM
+	{
+		.desc = "Random Number Generator",
+		.capability = ARM64_HAS_RNG,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_cpuid_feature,
+		.sys_reg = SYS_ID_AA64ISAR0_EL1,
+		.field_pos = ID_AA64ISAR0_RNDR_SHIFT,
+		.sign = FTR_UNSIGNED,
+		.min_field_value = 1,
+	},
 #endif
 	{},
 };
diff --git a/arch/arm64/kernel/random.c b/arch/arm64/kernel/random.c
new file mode 100644
index 000000000000..4af105be5c9a
--- /dev/null
+++ b/arch/arm64/kernel/random.c
@@ -0,0 +1,49 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Random number generation using ARMv8.5-RNG.
+ */
+
+#include <linux/random.h>
+#include <linux/ratelimit.h>
+#include <linux/printk.h>
+#include <linux/preempt.h>
+#include <asm/cpufeature.h>
+
+static bool arm64_rndr(unsigned long *v)
+{
+	bool ok;
+
+	/*
+	 * Reads of RNDR set PSTATE.NZCV to 0b0000 on success,
+	 * and set PSTATE.NZCV to 0b0100 otherwise.
+	 */
+	asm volatile(
+		__mrs_s("%0", SYS_RNDR_EL0) "\n"
+	"	cset %w1, ne\n"
+	: "=r"(*v), "=r"(ok)
+	:
+	: "cc");
+
+	return ok;
+}
+
+bool arch_get_random_seed_long(unsigned long *v)
+{
+	bool ok;
+
+	if (static_branch_likely(&arm64_const_caps_ready)) {
+		if (__cpus_have_const_cap(ARM64_HAS_RNG))
+			return arm64_rndr(v);
+		return false;
+	}
+
+	/*
+	 * Before const_caps_ready, check the current cpu.
+	 * This will generally be the boot cpu for rand_initialize().
+	 */
+	preempt_disable_notrace();
+	ok = this_cpu_has_cap(ARM64_HAS_RNG) && arm64_rndr(v);
+	preempt_enable_notrace();
+
+	return ok;
+}
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 3f047afb982c..5bc88601f07b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1438,6 +1438,18 @@ config ARM64_PTR_AUTH
 
 endmenu
 
+menu "ARMv8.5 architectural features"
+
+config ARCH_RANDOM
+	bool "Enable support for random number generation"
+	default y
+	help
+	  Random number generation (part of the ARMv8.5 Extensions)
+	  provides a high bandwidth, cryptographically secure
+	  hardware random number generator.
+
+endmenu
+
 config ARM64_SVE
 	bool "ARM Scalable Vector Extension support"
 	default y
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 478491f07b4f..a47c2b984da7 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -63,6 +63,7 @@ obj-$(CONFIG_CRASH_CORE)		+= crash_core.o
 obj-$(CONFIG_ARM_SDE_INTERFACE)		+= sdei.o
 obj-$(CONFIG_ARM64_SSBD)		+= ssbd.o
 obj-$(CONFIG_ARM64_PTR_AUTH)		+= pointer_auth.o
+obj-$(CONFIG_ARCH_RANDOM)		+= random.o
 
 obj-y					+= vdso/ probes/
 obj-$(CONFIG_COMPAT_VDSO)		+= vdso32/
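[For context on why the boot-CPU path in arch_get_random_seed_long() matters: the core random code consumes these hooks at rand_initialize() time, before the cpufeature static keys are finalized. The loop below is a paraphrased sketch of the era's drivers/char/random.c crng initialization, not an exact quote of that code.]

/*
 * Paraphrased sketch: each word of the primary crng state is mixed
 * with an arch-provided seed if one is available, falling back to a
 * raw timestamp otherwise.
 */
for (i = 4; i < 16; i++) {
	unsigned long rv;

	if (!arch_get_random_seed_long(&rv) &&
	    !arch_get_random_long(&rv))
		rv = random_get_entropy();
	crng->state[i] ^= rv;
}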