Message ID | 20200515172756.27185-2-will@kernel.org (mailing list archive) |
---|---|
State | Mainlined |
Commit | 51189c7a7ed1b4ed4493e27275d466ff60406d3a |
Series | Clean up Shadow Call Stack patches for 5.8 |
On Fri, May 15, 2020 at 06:27:51PM +0100, Will Deacon wrote:
> Storing the SCS information in thread_info as a {base,offset} pair
> introduces an additional load instruction on the ret-to-user path,
> since the SCS stack pointer in x18 has to be converted back to an offset
> by subtracting the base.
>
> Replace the offset with the absolute SCS stack pointer value instead
> and avoid the redundant load.
>
> Signed-off-by: Will Deacon <will@kernel.org>

One trivial nit below, but regardless this looks sound to me, and I
certainly prefer having the absolute address rather than an offset, so:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

> diff --git a/kernel/scs.c b/kernel/scs.c
> index 9389c28f0853..5ff8663e4a67 100644
> --- a/kernel/scs.c
> +++ b/kernel/scs.c
> @@ -60,8 +60,7 @@ int scs_prepare(struct task_struct *tsk, int node)
>  	if (!s)
>  		return -ENOMEM;
>
> -	task_scs(tsk) = s;
> -	task_scs_offset(tsk) = 0;
> +	task_scs(tsk) = task_scs_sp(tsk) = s;

I think this would be more legible as two statements:

| 	task_scs(tsk) = s;
| 	task_scs_sp(tsk) = s;

... as we usually save `foo = bar = baz` stuff for the start of a
function or within loop conditions.

Thanks,
Mark.

>  	scs_account(tsk, 1);
>  	return 0;
>  }
> --
> 2.26.2.761.g0e0b3e54be-goog
>
On Mon, May 18, 2020 at 12:37:10PM +0100, Mark Rutland wrote:
> On Fri, May 15, 2020 at 06:27:51PM +0100, Will Deacon wrote:
> > Storing the SCS information in thread_info as a {base,offset} pair
> > introduces an additional load instruction on the ret-to-user path,
> > since the SCS stack pointer in x18 has to be converted back to an offset
> > by subtracting the base.
> >
> > Replace the offset with the absolute SCS stack pointer value instead
> > and avoid the redundant load.
> >
> > Signed-off-by: Will Deacon <will@kernel.org>
>
> One trivial nit below, but regardless this looks sound to me, and I
> certainly prefer having the absolute address rather than an offset, so:
>
> Reviewed-by: Mark Rutland <mark.rutland@arm.com>
>
> > diff --git a/kernel/scs.c b/kernel/scs.c
> > index 9389c28f0853..5ff8663e4a67 100644
> > --- a/kernel/scs.c
> > +++ b/kernel/scs.c
> > @@ -60,8 +60,7 @@ int scs_prepare(struct task_struct *tsk, int node)
> >  	if (!s)
> >  		return -ENOMEM;
> >
> > -	task_scs(tsk) = s;
> > -	task_scs_offset(tsk) = 0;
> > +	task_scs(tsk) = task_scs_sp(tsk) = s;
>
> I think this would be more legible as two statements:
>
> | 	task_scs(tsk) = s;
> | 	task_scs_sp(tsk) = s;

I think it's nice to be able to say:

	task_scs(tsk) = task_scs_sp(tsk) = s;

because it makes it very clear that they are initialised to the same
thing. Having it as two statements means somebody will update one and
forget to update the other one.

> ... as we usually save `foo = bar = baz` stuff for the start of a
> function or within loop conditions.

Hmm, I can't really find anything consistent in that regard, to be
honest with you. Did I miss something in the coding style doc? I'll
leave it as-is for now.

Will
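As a minimal C sketch of the two initialisation styles under discussion (the struct and function names below are invented stand-ins for the real task_scs()/task_scs_sp() accessors, not kernel code):

	/* Illustration only: made-up fields standing in for thread_info. */
	struct scs_fields {
		void *base;	/* start of the shadow call stack */
		void *sp;	/* current shadow call stack pointer */
	};

	static void init_chained(struct scs_fields *f, void *s)
	{
		/* One statement: both fields visibly start out equal. */
		f->base = f->sp = s;
	}

	static void init_split(struct scs_fields *f, void *s)
	{
		/* Two statements: someone could update one and miss the other. */
		f->base = s;
		f->sp = s;
	}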
diff --git a/arch/arm64/include/asm/scs.h b/arch/arm64/include/asm/scs.h
index 96549353b0cb..6b8cf4352fe3 100644
--- a/arch/arm64/include/asm/scs.h
+++ b/arch/arm64/include/asm/scs.h
@@ -4,16 +4,15 @@
 
 #ifdef __ASSEMBLY__
 
+#include <asm/asm-offsets.h>
+
 #ifdef CONFIG_SHADOW_CALL_STACK
 	.macro scs_load tsk, tmp
-	ldp	x18, \tmp, [\tsk, #TSK_TI_SCS_BASE]
-	add	x18, x18, \tmp
+	ldr	x18, [\tsk, #TSK_TI_SCS_SP]
 	.endm
 
 	.macro scs_save tsk, tmp
-	ldr	\tmp, [\tsk, #TSK_TI_SCS_BASE]
-	sub	\tmp, x18, \tmp
-	str	\tmp, [\tsk, #TSK_TI_SCS_OFFSET]
+	str	x18, [\tsk, #TSK_TI_SCS_SP]
 	.endm
 #else
 	.macro scs_load tsk, tmp
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 9df79c0a4c43..6ea8b6a26ae9 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -43,7 +43,7 @@ struct thread_info {
 	};
 #ifdef CONFIG_SHADOW_CALL_STACK
 	void			*scs_base;
-	unsigned long		scs_offset;
+	void			*scs_sp;
 #endif
 };
 
@@ -107,7 +107,7 @@ void arch_release_task_struct(struct task_struct *tsk);
 #ifdef CONFIG_SHADOW_CALL_STACK
 #define INIT_SCS							\
 	.scs_base	= init_shadow_call_stack,			\
-	.scs_offset	= 0,
+	.scs_sp		= init_shadow_call_stack,
 #else
 #define INIT_SCS
 #endif
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index d7934250b68c..a098a45f63d8 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -36,7 +36,7 @@ int main(void)
 #endif
 #ifdef CONFIG_SHADOW_CALL_STACK
   DEFINE(TSK_TI_SCS_BASE,	offsetof(struct task_struct, thread_info.scs_base));
-  DEFINE(TSK_TI_SCS_OFFSET,	offsetof(struct task_struct, thread_info.scs_offset));
+  DEFINE(TSK_TI_SCS_SP,		offsetof(struct task_struct, thread_info.scs_sp));
 #endif
   DEFINE(TSK_STACK,		offsetof(struct task_struct, stack));
 #ifdef CONFIG_STACKPROTECTOR
diff --git a/include/linux/scs.h b/include/linux/scs.h
index 3f3662621a27..0eb2485ef832 100644
--- a/include/linux/scs.h
+++ b/include/linux/scs.h
@@ -27,7 +27,7 @@
 #define SCS_END_MAGIC		(0x5f6UL + POISON_POINTER_DELTA)
 
 #define task_scs(tsk)		(task_thread_info(tsk)->scs_base)
-#define task_scs_offset(tsk)	(task_thread_info(tsk)->scs_offset)
+#define task_scs_sp(tsk)	(task_thread_info(tsk)->scs_sp)
 
 void scs_init(void);
 int scs_prepare(struct task_struct *tsk, int node);
@@ -39,7 +39,7 @@ static inline void scs_task_reset(struct task_struct *tsk)
 	 * Reset the shadow stack to the base address in case the task
 	 * is reused.
 	 */
-	task_scs_offset(tsk) = 0;
+	task_scs_sp(tsk) = task_scs(tsk);
 }
 
 static inline unsigned long *__scs_magic(void *s)
@@ -50,9 +50,9 @@ static inline unsigned long *__scs_magic(void *s)
 static inline bool scs_corrupted(struct task_struct *tsk)
 {
 	unsigned long *magic = __scs_magic(task_scs(tsk));
+	unsigned long sz = task_scs_sp(tsk) - task_scs(tsk);
 
-	return (task_scs_offset(tsk) >= SCS_SIZE - 1 ||
-		READ_ONCE_NOCHECK(*magic) != SCS_END_MAGIC);
+	return sz >= SCS_SIZE - 1 || READ_ONCE_NOCHECK(*magic) != SCS_END_MAGIC;
 }
 
 #else /* CONFIG_SHADOW_CALL_STACK */
diff --git a/kernel/scs.c b/kernel/scs.c
index 9389c28f0853..5ff8663e4a67 100644
--- a/kernel/scs.c
+++ b/kernel/scs.c
@@ -60,8 +60,7 @@ int scs_prepare(struct task_struct *tsk, int node)
 	if (!s)
 		return -ENOMEM;
 
-	task_scs(tsk) = s;
-	task_scs_offset(tsk) = 0;
+	task_scs(tsk) = task_scs_sp(tsk) = s;
 	scs_account(tsk, 1);
 	return 0;
 }
Storing the SCS information in thread_info as a {base,offset} pair
introduces an additional load instruction on the ret-to-user path,
since the SCS stack pointer in x18 has to be converted back to an offset
by subtracting the base.

Replace the offset with the absolute SCS stack pointer value instead
and avoid the redundant load.

Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/scs.h         | 9 ++++-----
 arch/arm64/include/asm/thread_info.h | 4 ++--
 arch/arm64/kernel/asm-offsets.c      | 2 +-
 include/linux/scs.h                  | 8 ++++----
 kernel/scs.c                         | 3 +--
 5 files changed, 12 insertions(+), 14 deletions(-)
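To make the rationale concrete, here is a minimal C sketch of the two representations (the struct and function names are invented for illustration; the real change is in thread_info and the scs_save/scs_load assembly macros): with a {base,offset} pair, saving x18 first has to load the base to compute the offset, whereas the absolute form stores x18 as-is.

	/* Invented stand-ins for the two thread_info layouts; illustration only. */
	struct ti_offset {
		void *scs_base;
		unsigned long scs_offset;
	};

	struct ti_absolute {
		void *scs_base;
		void *scs_sp;
	};

	/* {base,offset} form: must read scs_base to turn x18 back into an offset. */
	static void save_offset_form(struct ti_offset *ti, char *x18)
	{
		ti->scs_offset = (unsigned long)(x18 - (char *)ti->scs_base);
	}

	/* Absolute form: x18 is stored directly, no extra load of scs_base. */
	static void save_absolute_form(struct ti_absolute *ti, void *x18)
	{
		ti->scs_sp = x18;
	}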