Message ID | 20240801-arm64-gcs-v10-25-699e2bd2190b@kernel.org
---|---
State | New, archived
Series | arm64/gcs: Provide support for GCS in userspace
On Thu, Aug 01, 2024 at 01:06:52PM +0100, Mark Brown wrote:
> @@ -1440,6 +1441,51 @@ static int tagged_addr_ctrl_set(struct task_struct *target, const struct
>  }
>  #endif
>
> +#ifdef CONFIG_ARM64_GCS
> +static int gcs_get(struct task_struct *target,
> +		   const struct user_regset *regset,
> +		   struct membuf to)
> +{
> +	struct user_gcs user_gcs;
> +
> +	if (target == current)
> +		gcs_preserve_current_state();
> +
> +	user_gcs.features_enabled = target->thread.gcs_el0_mode;
> +	user_gcs.features_locked = target->thread.gcs_el0_locked;
> +	user_gcs.gcspr_el0 = target->thread.gcspr_el0;

If it's not the current thread, I guess the task was interrupted,
scheduled out (potentially on another CPU) and its GCSPR_EL0 saved.

> +
> +	return membuf_write(&to, &user_gcs, sizeof(user_gcs));
> +}
> +
> +static int gcs_set(struct task_struct *target, const struct
> +		   user_regset *regset, unsigned int pos,
> +		   unsigned int count, const void *kbuf, const
> +		   void __user *ubuf)
> +{
> +	int ret;
> +	struct user_gcs user_gcs;
> +
> +	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &user_gcs, 0, -1);
> +	if (ret)
> +		return ret;
> +
> +	if (user_gcs.features_enabled & ~PR_SHADOW_STACK_SUPPORTED_STATUS_MASK)
> +		return -EINVAL;
> +
> +	/* Do not allow enable via ptrace */
> +	if ((user_gcs.features_enabled & PR_SHADOW_STACK_ENABLE) &&
> +	    !(target->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE))
> +		return -EBUSY;
> +
> +	target->thread.gcs_el0_mode = user_gcs.features_enabled;
> +	target->thread.gcs_el0_locked = user_gcs.features_locked;
> +	target->thread.gcspr_el0 = user_gcs.gcspr_el0;

As in the previous thread, I thought we need to restore GCSPR_EL0
unconditionally.

I don't particularly like that this register becomes some scrap one
that threads can use regardless of GCS. Not sure we have a simple
solution. We could track three states: GCS never enabled, GCS enabled
and GCS disabled after being enabled. It's probably not worth it.

On ptrace() access to the shadow stack, we rely on the barrier in the
context switch code if stopping a thread. If other threads are running
on other CPUs, it's racy anyway even for normal accesses, so I don't
think we need to do anything more for ptrace.
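To make the three states mentioned above concrete, here is a purely
hypothetical sketch; none of these names exist in the series, it only
illustrates the bookkeeping being discussed:

/* Hypothetical sketch of the three-state idea above; not in the patch. */
enum gcs_el0_state {
	GCS_EL0_NEVER_ENABLED,		/* GCSPR_EL0 has never held a meaningful value */
	GCS_EL0_ENABLED,		/* GCS is active, GCSPR_EL0 is the live shadow stack pointer */
	GCS_EL0_DISABLED_AFTER_ENABLE,	/* GCS was turned off, GCSPR_EL0 keeps its last value */
};

Tracking something like this would let the kernel know when GCSPR_EL0 is
meaningful, at the cost of an extra per-thread field that, as noted
below, would mostly only ever sit in two of the three states.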
On Wed, Aug 21, 2024 at 06:57:16PM +0100, Catalin Marinas wrote:
> On Thu, Aug 01, 2024 at 01:06:52PM +0100, Mark Brown wrote:

> > +	if (user_gcs.features_enabled & ~PR_SHADOW_STACK_SUPPORTED_STATUS_MASK)
> > +		return -EINVAL;

> > +	/* Do not allow enable via ptrace */
> > +	if ((user_gcs.features_enabled & PR_SHADOW_STACK_ENABLE) &&
> > +	    !(target->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE))
> > +		return -EBUSY;

> > +	target->thread.gcs_el0_mode = user_gcs.features_enabled;
> > +	target->thread.gcs_el0_locked = user_gcs.features_locked;
> > +	target->thread.gcspr_el0 = user_gcs.gcspr_el0;

> As in the previous thread, I thought we need to restore GCSPR_EL0
> unconditionally.

My thinking here is that if we return an error code then ideally we
shouldn't make any changes, especially in cases like this one where
we're rejecting inputs that are simply invalid. That seems like the
least surprising outcome and what we should strive for, even if it's
too complicated for some cases.

It is possible to write GCSPR_EL0 regardless of whether GCS is enabled,
it's just not possible to write it as part of an otherwise invalid
write. The validation is checking for unknown features and enables.
With clone3() we could relax the enable check, but I've just pulled
that out of the series for the time being.

> I don't particularly like that this register becomes some scrap one that
> threads can use regardless of GCS. Not sure we have a simple solution.
> We could track three states: GCS never enabled, GCS enabled and GCS
> disabled after being enabled. It's probably not worth it.

Yeah, I did give consideration to that, but as you say I think it's
probably not worth it - you end up with a state machine where for the
most part only two of the states ever get exercised.

> On ptrace() access to the shadow stack, we rely on the barrier in the
> context switch code if stopping a thread. If other threads are running
> on other CPUs, it's racy anyway even for normal accesses, so I don't
> think we need to do anything more for ptrace.

Yes, I don't see any reason to try to do anything special there.
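As an illustration of that write model, a hedged userspace sketch (not
from the series) of a tracer updating only GCSPR_EL0 through the new
regset; it assumes the NT_ARM_GCS number and struct user_gcs layout
added by this patch, and write_gcspr()/user_gcs_regs are made-up names:

#include <elf.h>
#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef NT_ARM_GCS
#define NT_ARM_GCS 0x40f		/* value added by this series */
#endif

/* Mirrors the uapi struct user_gcs added in this patch. */
struct user_gcs_regs {
	uint64_t features_enabled;
	uint64_t features_locked;
	uint64_t gcspr_el0;
};

/* Update only GCSPR_EL0 in a stopped tracee, leaving the feature and
 * lock bits exactly as they were. */
static int write_gcspr(pid_t pid, uint64_t new_gcspr)
{
	struct user_gcs_regs regs;
	struct iovec iov = { .iov_base = &regs, .iov_len = sizeof(regs) };

	/* Read back the current state so enable/lock bits are preserved. */
	if (ptrace(PTRACE_GETREGSET, pid, NT_ARM_GCS, &iov) < 0)
		return -1;

	regs.gcspr_el0 = new_gcspr;

	/* If this struct carried an unknown feature bit, or tried to set
	 * PR_SHADOW_STACK_ENABLE on a thread without GCS, the posted
	 * patch rejects the whole write and none of the fields change. */
	return ptrace(PTRACE_SETREGSET, pid, NT_ARM_GCS, &iov) < 0 ? -1 : 0;
}

The all-or-nothing validation in the kernel means a tracer never has to
reason about partially applied state after a failed write.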
On Wed, Aug 21, 2024 at 07:28:08PM +0100, Mark Brown wrote:
> part of an otherwise invalid write. The validation is checking for
> unknown features and enables. With clone3() we could relax the enable
> check, but I've just pulled that out of the series for the time being.

Actually, thinking about it some more, I'll just remove the check for
enable; the support for threads with GCS enabled and no kernel
allocated GCS is already there and I didn't pull that bit out.
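For reference, a rough sketch of what gcs_set() would look like with the
enable check dropped as described above; this is just the function from
the posted patch minus the -EBUSY path, and the eventual respin may
differ:

static int gcs_set(struct task_struct *target, const struct
		   user_regset *regset, unsigned int pos,
		   unsigned int count, const void *kbuf, const
		   void __user *ubuf)
{
	int ret;
	struct user_gcs user_gcs;

	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &user_gcs, 0, -1);
	if (ret)
		return ret;

	/* Still reject unknown feature bits... */
	if (user_gcs.features_enabled & ~PR_SHADOW_STACK_SUPPORTED_STATUS_MASK)
		return -EINVAL;

	/*
	 * ...but no -EBUSY check: enabling via ptrace is permitted since
	 * threads with GCS enabled and no kernel-allocated GCS are
	 * already supported.
	 */
	target->thread.gcs_el0_mode = user_gcs.features_enabled;
	target->thread.gcs_el0_locked = user_gcs.features_locked;
	target->thread.gcspr_el0 = user_gcs.gcspr_el0;

	return 0;
}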
diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index 7fa2f7036aa7..0f39ba4f3efd 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -324,6 +324,14 @@ struct user_za_header {
 #define ZA_PT_SIZE(vq)						\
	(ZA_PT_ZA_OFFSET + ZA_PT_ZA_SIZE(vq))
 
+/* GCS state (NT_ARM_GCS) */
+
+struct user_gcs {
+	__u64 features_enabled;
+	__u64 features_locked;
+	__u64 gcspr_el0;
+};
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _UAPI__ASM_PTRACE_H */
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 0d022599eb61..9db0b669fee3 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -34,6 +34,7 @@
 #include <asm/cpufeature.h>
 #include <asm/debug-monitors.h>
 #include <asm/fpsimd.h>
+#include <asm/gcs.h>
 #include <asm/mte.h>
 #include <asm/pointer_auth.h>
 #include <asm/stacktrace.h>
@@ -1440,6 +1441,51 @@ static int tagged_addr_ctrl_set(struct task_struct *target, const struct
 }
 #endif
 
+#ifdef CONFIG_ARM64_GCS
+static int gcs_get(struct task_struct *target,
+		   const struct user_regset *regset,
+		   struct membuf to)
+{
+	struct user_gcs user_gcs;
+
+	if (target == current)
+		gcs_preserve_current_state();
+
+	user_gcs.features_enabled = target->thread.gcs_el0_mode;
+	user_gcs.features_locked = target->thread.gcs_el0_locked;
+	user_gcs.gcspr_el0 = target->thread.gcspr_el0;
+
+	return membuf_write(&to, &user_gcs, sizeof(user_gcs));
+}
+
+static int gcs_set(struct task_struct *target, const struct
+		   user_regset *regset, unsigned int pos,
+		   unsigned int count, const void *kbuf, const
+		   void __user *ubuf)
+{
+	int ret;
+	struct user_gcs user_gcs;
+
+	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &user_gcs, 0, -1);
+	if (ret)
+		return ret;
+
+	if (user_gcs.features_enabled & ~PR_SHADOW_STACK_SUPPORTED_STATUS_MASK)
+		return -EINVAL;
+
+	/* Do not allow enable via ptrace */
+	if ((user_gcs.features_enabled & PR_SHADOW_STACK_ENABLE) &&
+	    !(target->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE))
+		return -EBUSY;
+
+	target->thread.gcs_el0_mode = user_gcs.features_enabled;
+	target->thread.gcs_el0_locked = user_gcs.features_locked;
+	target->thread.gcspr_el0 = user_gcs.gcspr_el0;
+
+	return 0;
+}
+#endif
+
 enum aarch64_regset {
 	REGSET_GPR,
 	REGSET_FPR,
@@ -1469,6 +1515,9 @@ enum aarch64_regset {
 #ifdef CONFIG_ARM64_TAGGED_ADDR_ABI
 	REGSET_TAGGED_ADDR_CTRL,
 #endif
+#ifdef CONFIG_ARM64_GCS
+	REGSET_GCS,
+#endif
 };
 
 static const struct user_regset aarch64_regsets[] = {
@@ -1628,6 +1677,16 @@ static const struct user_regset aarch64_regsets[] = {
 		.set = tagged_addr_ctrl_set,
 	},
 #endif
+#ifdef CONFIG_ARM64_GCS
+	[REGSET_GCS] = {
+		.core_note_type = NT_ARM_GCS,
+		.n = sizeof(struct user_gcs) / sizeof(u64),
+		.size = sizeof(u64),
+		.align = sizeof(u64),
+		.regset_get = gcs_get,
+		.set = gcs_set,
+	},
+#endif
 };
 
 static const struct user_regset_view user_aarch64_view = {
diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
index b54b313bcf07..77d4910bbb9d 100644
--- a/include/uapi/linux/elf.h
+++ b/include/uapi/linux/elf.h
@@ -441,6 +441,7 @@ typedef struct elf64_shdr {
 #define NT_ARM_ZA	0x40c	/* ARM SME ZA registers */
 #define NT_ARM_ZT	0x40d	/* ARM SME ZT registers */
 #define NT_ARM_FPMR	0x40e	/* ARM floating point mode register */
+#define NT_ARM_GCS	0x40f	/* ARM GCS state */
 #define NT_ARC_V2	0x600	/* ARCv2 accumulator/extra registers */
 #define NT_VMCOREDD	0x700	/* Vmcore Device Dump Note */
 #define NT_MIPS_DSP	0x800	/* MIPS DSP ASE registers */