Message ID | 20190919150809.145400160@linutronix.de (mailing list archive)
---|---
State | New, archived
Series | entry: Provide generic implementation for host and guest entry/exit work
On Thu, Sep 19, 2019 at 05:03:21PM +0200, Thomas Gleixner wrote:
> The syscall handling code has an obscure check of pending work which does a
> shortcut before returning to user space. It calls into the exit work code
> when the flags at entry time required an entry into the slowpath. That does
> not make sense because the underlying work functionality will reevaluate
> the flags anyway and not do anything.

The current C code was just matching the original behaviour in asm (converted
by commit f37099b6992a0b81). The idea IIRC was to always pair a
syscall_trace_enter() with a syscall_trace_exit() irrespective of the thread
flag changes. I think the behaviour is preserved with your patch if no-one
clears the work flags during el0_svc_common().

> @@ -105,33 +103,15 @@ static void el0_svc_common(struct pt_reg
>  	user_exit();
>  
>  	scno = syscall_enter_from_usermode(regs, scno);
> -	if (scno == NO_SYSCALL)
> -		goto trace_exit;
> -
> -	invoke_syscall(regs, scno, sc_nr, syscall_table);
> +	if (scno != NO_SYSCALL)
> +		invoke_syscall(regs, scno, sc_nr, syscall_table);
> -	/*
> -	 * The tracing status may have changed under our feet, so we have to
> -	 * check again. However, if we were tracing entry, then we always trace
> -	 * exit regardless, as the old entry assembly did.
> -	 */
> -	if (!has_syscall_work(flags) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
> -		local_daif_mask();
> -		flags = current_thread_info()->flags;
> -		if (!has_syscall_work(flags)) {
> -			/*
> -			 * We're off to userspace, where interrupts are
> -			 * always enabled after we restore the flags from
> -			 * the SPSR.
> -			 */
> -			trace_hardirqs_on();
> -			return;
> -		}
> +	local_daif_mask();
> +	if (has_syscall_work(current_thread_info()->flags) ||
> +	    IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
>  		local_daif_restore(DAIF_PROCCTX);
> +		syscall_trace_exit(regs);
>  	}

That's missing a trace_hardirqs_on() (off done in local_daif_mask()) before
returning.

> -
> -trace_exit:
> -	syscall_trace_exit(regs);
>  }
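For illustration only, here is a minimal sketch of the exit path with the trace_hardirqs_on() call Catalin points out added back. The helpers (local_daif_mask(), local_daif_restore(), has_syscall_work(), syscall_trace_exit()) are the ones used in the quoted code; the wrapper function name below is made up for the sketch and is not part of the posted patch.

```c
/*
 * Sketch only, not the posted patch: tail of the syscall path with the
 * trace_hardirqs_on() Catalin notes is missing. Helper names come from
 * the quoted arm64 code; el0_svc_exit_sketch() is a made-up wrapper.
 */
static void el0_svc_exit_sketch(struct pt_regs *regs)
{
	/* Masks DAIF and, as noted above, does the trace_hardirqs_off(). */
	local_daif_mask();

	if (has_syscall_work(current_thread_info()->flags) ||
	    IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
		local_daif_restore(DAIF_PROCCTX);
		syscall_trace_exit(regs);
		return;
	}

	/*
	 * No exit work: interrupts are enabled again once the SPSR is
	 * restored on return to user space, so lockdep/irq tracing has to
	 * be told that IRQs are (about to be) on.
	 */
	trace_hardirqs_on();
}
```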
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -93,8 +93,6 @@ static void cortex_a76_erratum_1463225_s
 static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 			   const syscall_fn_t syscall_table[])
 {
-	unsigned long flags = current_thread_info()->flags;
-
 	regs->orig_x0 = regs->regs[0];
 	regs->syscallno = scno;
 	/* Set default error number */
@@ -105,33 +103,15 @@ static void el0_svc_common(struct pt_reg
 	user_exit();
 
 	scno = syscall_enter_from_usermode(regs, scno);
-	if (scno == NO_SYSCALL)
-		goto trace_exit;
-
-	invoke_syscall(regs, scno, sc_nr, syscall_table);
+	if (scno != NO_SYSCALL)
+		invoke_syscall(regs, scno, sc_nr, syscall_table);
 
-	/*
-	 * The tracing status may have changed under our feet, so we have to
-	 * check again. However, if we were tracing entry, then we always trace
-	 * exit regardless, as the old entry assembly did.
-	 */
-	if (!has_syscall_work(flags) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
-		local_daif_mask();
-		flags = current_thread_info()->flags;
-		if (!has_syscall_work(flags)) {
-			/*
-			 * We're off to userspace, where interrupts are
-			 * always enabled after we restore the flags from
-			 * the SPSR.
-			 */
-			trace_hardirqs_on();
-			return;
-		}
+	local_daif_mask();
+	if (has_syscall_work(current_thread_info()->flags) ||
+	    IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
 		local_daif_restore(DAIF_PROCCTX);
+		syscall_trace_exit(regs);
 	}
-
-trace_exit:
-	syscall_trace_exit(regs);
 }
 
 static inline void sve_user_discard(void)
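Read linearly, the function comes out roughly as below after both hunks apply; this is just the '+' and context lines of the diff assembled, with the unchanged code between the prologue and user_exit() elided in a comment.

```c
/* el0_svc_common() as it reads after the diff above is applied; the part
 * between the prologue and user_exit() is unchanged and elided here. */
static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
			   const syscall_fn_t syscall_table[])
{
	regs->orig_x0 = regs->regs[0];
	regs->syscallno = scno;
	/* ... unchanged code up to and including user_exit() ... */

	scno = syscall_enter_from_usermode(regs, scno);
	if (scno != NO_SYSCALL)
		invoke_syscall(regs, scno, sc_nr, syscall_table);

	local_daif_mask();
	if (has_syscall_work(current_thread_info()->flags) ||
	    IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
		local_daif_restore(DAIF_PROCCTX);
		syscall_trace_exit(regs);
	}
}
```

This is the shape Catalin's trace_hardirqs_on() remark applies to: the no-work path now leaves the function with DAIF masked and irq tracing still saying "off".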
The syscall handling code has an obscure check of pending work which does a
shortcut before returning to user space. It calls into the exit work code
when the flags at entry time required an entry into the slowpath. That does
not make sense because the underlying work functionality will reevaluate the
flags anyway and not do anything.

Replace that by a straight forward test for work flags. Preparatory change
for switching to the generic syscall exit work handling code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/arm64/kernel/syscall.c | 32 ++++++--------------------
 1 file changed, 6 insertions(+), 26 deletions(-)
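For reference, the "test for work flags" the changelog refers to is has_syscall_work(); from memory of arch/arm64/kernel/syscall.c around this series it is roughly the helper below, and is worth double-checking against the tree.

```c
/* Recalled from arch/arm64/kernel/syscall.c; verify against the source. */
static inline bool has_syscall_work(unsigned long flags)
{
	return unlikely(flags & _TIF_SYSCALL_WORK);
}
```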