Message ID | 20200603151033.11512-2-will@kernel.org (mailing list archive)
---|---
State | New, archived
Series | arm64: Fix single-stepping with reverse debugging
On Wed, Jun 3, 2020 at 11:10 AM Will Deacon <will@kernel.org> wrote:
>
> Luis reports that, when reverse debugging with GDB, single-step does not
> function as expected on arm64:
>
> | I've noticed, under very specific conditions, that a PTRACE_SINGLESTEP
> | request by GDB won't execute the underlying instruction. As a consequence,
> | the PC doesn't move, but we return a SIGTRAP just like we would for a
> | regular successful PTRACE_SINGLESTEP request.
>
> The underlying problem is that when the CPU register state is restored
> as part of a reverse step, the SPSR.SS bit is cleared and so the hardware
> single-step state can transition to the "active-pending" state, causing
> an unexpected step exception to be taken immediately if a step operation
> is attempted.

We saw this issue also and worked around it in user-space [1]. That said,
I think I'm ok with this change in the kernel, since I can't think of
a particularly useful use case for this feature.

However, at the same time as changing this, we should probably make sure
to enable the syscall exit pseudo-singlestep trap (similar issue as the other
patch I had sent for the signal pseudo-singlestep trap), since otherwise
ptracers might get confused about the lack of singlestep trap during a
singlestep -> seccomp -> singlestep path (which would give one trap
less with this patch than before).

Keno

[1] https://github.com/mozilla/rr/blob/36aa5328a2240dc3d794c14926e0754f66ee28e0/src/Task.cc#L1352-L1362
Hi Keno,

Thanks for having a look.

On Wed, Jun 03, 2020 at 11:42:46AM -0400, Keno Fischer wrote:
> On Wed, Jun 3, 2020 at 11:10 AM Will Deacon <will@kernel.org> wrote:
> >
> > Luis reports that, when reverse debugging with GDB, single-step does not
> > function as expected on arm64:
> >
> > | I've noticed, under very specific conditions, that a PTRACE_SINGLESTEP
> > | request by GDB won't execute the underlying instruction. As a consequence,
> > | the PC doesn't move, but we return a SIGTRAP just like we would for a
> > | regular successful PTRACE_SINGLESTEP request.
> >
> > The underlying problem is that when the CPU register state is restored
> > as part of a reverse step, the SPSR.SS bit is cleared and so the hardware
> > single-step state can transition to the "active-pending" state, causing
> > an unexpected step exception to be taken immediately if a step operation
> > is attempted.
>
> We saw this issue also and worked around it in user-space [1]. That said,
> I think I'm ok with this change in the kernel, since I can't think of
> a particularly useful usecase for this feature.
>
> However, at the same time as changing this, we should probably make sure
> to enable the syscall exit pseudo-singlestep trap (similar issue as the other
> patch I had sent for the signal pseudo-singlestep trap), since otherwise
> ptracers might get confused about the lack of singlestep trap during a
> singlestep -> seccomp -> singlestep path (which would give one trap
> less with this patch than before).

Hmm, I don't completely follow your example. Please could you spell it
out a bit more? I fast-forward the stepping state machine on sigreturn,
which I thought would be sufficient. Perhaps you're referring to a variant
of the situation mentioned by Mark, which I didn't think could happen
with ptrace [2].

Cheers,

Will

[2] https://lore.kernel.org/r/20200531095216.GB30204@willie-the-truck
On Wed, Jun 3, 2020 at 11:53 AM Will Deacon <will@kernel.org> wrote:
> > However, at the same time as changing this, we should probably make sure
> > to enable the syscall exit pseudo-singlestep trap (similar issue as the other
> > patch I had sent for the signal pseudo-singlestep trap), since otherwise
> > ptracers might get confused about the lack of singlestep trap during a
> > singlestep -> seccomp -> singlestep path (which would give one trap
> > less with this patch than before).
>
> Hmm, I don't completely follow your example. Please could you spell it
> out a bit more? I fast-forward the stepping state machine on sigreturn,
> which I thought would be sufficient. Perhaps you're referring to a variant
> of the situation mentioned by Mark, which I didn't think could happen
> with ptrace [2].

Sure, suppose we have code like the following:

0x0: svc #0
0x4: str x0, [x7]
...

Then, if there's a seccomp filter active that just does
SECCOMP_RET_TRACE of everything, right now we get traps:

<- (ip: 0x0)
-> PTRACE_SINGLESTEP
<- (ip: 0x4 - seccomp trap)
-> PTRACE_SINGLESTEP
<- SIGTRAP (ip: 0x4 - TRAP_TRACE trap)
-> PTRACE_SINGLESTEP
<- SIGTRAP (ip: 0x8 - TRAP_TRACE trap)

With your proposed patch, we instead get

<- (ip: 0x0)
-> PTRACE_SINGLESTEP
<- (ip: 0x4 - seccomp trap)
-> PTRACE_SINGLESTEP
<- SIGTRAP (ip: 0x8 - TRAP_TRACE trap)

This is problematic, because the ptracer may want to inspect the
result of the syscall instruction. On other architectures, this
problem is solved with a pseudo-singlestep trap that gets executed
if you resume from a syscall-entry-like trap with PTRACE_SINGLESTEP.
See the below patch for the change I'm proposing. There is a slight
issue with that patch, still: it now makes the x7 issue apply to the
singlestep trap at exit, so we should do the patch to fix that issue
before we apply that change (or manually check for this situation
and issue the pseudo-singlestep trap manually).
My proposed patch below also changes

<- (ip: 0x0)
-> PTRACE_SYSCALL
<- (ip: 0x4 - syscall entry trap)
-> PTRACE_SINGLESTEP
<- SIGTRAP (ip: 0x8 - TRAP_TRACE trap)

to

<- (ip: 0x0)
-> PTRACE_SYSCALL
<- (ip: 0x4 - syscall entry trap)
-> PTRACE_SINGLESTEP
<- SIGTRAP (ip: 0x4 - pseudo-singlestep exit trap)
-> PTRACE_SINGLESTEP
<- SIGTRAP (ip: 0x8 - TRAP_TRACE trap)

But I consider that a bugfix, since that's how other architectures
behave and I was going to send in this patch for that reason anyway
(since this was another one of the aarch64 ptrace quirks we had to
work around).

diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index b3d3005d9515..104cfcf117d0 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -1820,7 +1820,7 @@ static void tracehook_report_syscall(struct pt_regs *regs,
 	regs->regs[regno] = dir;
 
 	if (dir == PTRACE_SYSCALL_EXIT)
-		tracehook_report_syscall_exit(regs, 0);
+		tracehook_report_syscall_exit(regs, test_thread_flag(TIF_SINGLESTEP));
 	else if (tracehook_report_syscall_entry(regs))
 		forget_syscall(regs);
Hi Keno,

Cheers for the really helpful explanation. I have a bunch of
questions/comments, since it's not very often that somebody shows up who
understands how this is supposed to work and so I'd like to take advantage
of that!

On Wed, Jun 03, 2020 at 12:56:24PM -0400, Keno Fischer wrote:
> On Wed, Jun 3, 2020 at 11:53 AM Will Deacon <will@kernel.org> wrote:
> > > However, at the same time as changing this, we should probably make sure
> > > to enable the syscall exit pseudo-singlestep trap (similar issue as the other
> > > patch I had sent for the signal pseudo-singlestep trap), since otherwise
> > > ptracers might get confused about the lack of singlestep trap during a
> > > singlestep -> seccomp -> singlestep path (which would give one trap
> > > less with this patch than before).
> >
> > Hmm, I don't completely follow your example. Please could you spell it
> > out a bit more? I fast-forward the stepping state machine on sigreturn,
> > which I thought would be sufficient. Perhaps you're referring to a variant
> > of the situation mentioned by Mark, which I didn't think could happen
> > with ptrace [2].
>
> Sure suppose we have code like the following:
>
> 0x0: svc #0
> 0x4: str x0, [x7]
> ...
>
> Then, if there's a seccomp filter active that just does
> SECCOMP_RET_TRACE of everything, right now we get traps:
>
> <- (ip: 0x0)
> -> PTRACE_SINGLESTEP
> <- (ip: 0x4 - seccomp trap)
> -> PTRACE_SINGLESTEP
> <- SIGTRAP (ip: 0x4 - TRAP_TRACE trap)
> -> PTRACE_SINGLESTEP
> <- SIGTRAP (ip: 0x8 - TRAP_TRACE trap)
>
> With your proposed patch, we instead get
>
> <- (ip: 0x0)
> -> PTRACE_SINGLESTEP
> <- (ip: 0x4 - seccomp trap)
> -> PTRACE_SINGLESTEP
> <- SIGTRAP (ip: 0x8 - TRAP_TRACE trap)

Urgh, and actually, I think this is *only* the case if the seccomp
handler actually changes a register in the target, right?
In which case, your proposed patch should probably do something like:

	if (dir == PTRACE_SYSCALL_EXIT) {
		bool stepping = test_thread_flag(TIF_SINGLESTEP);

		tracehook_report_syscall_exit(regs, stepping);
		user_rewind_single_step(regs);
	}

otherwise I think we could get a spurious SIGTRAP on return to userspace.
What do you think?

This has also got me thinking about your other patch to report a pseudo-step
exception on entry to a signal handler:

  https://lore.kernel.org/r/20200524043827.GA33001@juliacomputing.com

Although we don't actually disarm the step logic there (and so you might
expect a spurious SIGTRAP on the second instruction of the handler), I
think it's ok because the tracer will either do PTRACE_SINGLESTEP (and
rearm the state machine) or PTRACE_CONT (and so stepping will be
disabled). Do you agree?

> This is problematic, because the ptracer may want to inspect the
> result of the syscall instruction. On other architectures, this
> problem is solved with a pseudo-singlestep trap that gets executed
> if you resume from a syscall-entry-like trap with PTRACE_SINGLESTEP.
> See the below patch for the change I'm proposing. There is a slight
> issue with that patch, still: It now makes the x7 issue apply to the
> singlestep trap at exit, so we should do the patch to fix that issue
> before we apply that change (or manually check for this situation
> and issue the pseudo-singlestep trap manually).

I don't see the dependency on the x7 issue; x7 is nobbled on syscall entry,
so it will be nobbled in the pseudo-step trap as well as the hardware step
trap on syscall return. I'd also like to backport this to stable, without
having to backport an optional extension to the ptrace API for preserving
x7. Or are you saying that the value of x7 should be PTRACE_SYSCALL_ENTER
for the pseudo trap? That seems a bit weird to me, but then this is all
weird anyway.
> My proposed patch below also changes
>
> <- (ip: 0x0)
> -> PTRACE_SYSCALL
> <- (ip: 0x4 - syscall entry trap)
> -> PTRACE_SINGLESTEP
> <- SIGTRAP (ip: 0x8 - TRAP_TRACE trap)
>
> to
>
> <- (ip: 0x0)
> -> PTRACE_SYSCALL
> <- (ip: 0x4 - syscall entry trap)
> -> PTRACE_SINGLESTEP
> <- SIGTRAP (ip: 0x4 - pseudo-singlestep exit trap)
> -> PTRACE_SINGLESTEP
> <- SIGTRAP (ip: 0x8 - TRAP_TRACE trap)
>
> But I consider that a bugfix, since that's how other architectures
> behave and I was going to send in this patch for that reason anyway
> (since this was another one of the aarch64 ptrace quirks we had to
> work around).

I think that's still the case with my addition above, so that's good.
Any other quirks you ran into that we should address here? Now that I have
this stuff partially paged-in, it would be good to fix a bunch of this
at once. I can send out a v2 which includes the two patches from you
once we're agreed on the details.

Thanks,

Will
> > With your proposed patch, we instead get
> >
> > <- (ip: 0x0)
> > -> PTRACE_SINGLESTEP
> > <- (ip: 0x4 - seccomp trap)
> > -> PTRACE_SINGLESTEP
> > <- SIGTRAP (ip: 0x8 - TRAP_TRACE trap)
>
> Urgh, and actually, I think this is *only* the case if the seccomp
> handler actually changes a register in the target, right?

Ah yes, you are correct, because otherwise the flag would not get toggled
at all. That does raise an additional question about signal stops though,
which have a similar issue: what if a single step gets completed, but the
singlestep trap gets pre-empted by some other signal? What should
PTRACE_SINGLESTEP from such a signal stop do?

> In which case, your proposed patch should probably do something like:
>
> 	if (dir == PTRACE_SYSCALL_EXIT) {
> 		bool stepping = test_thread_flag(TIF_SINGLESTEP);
>
> 		tracehook_report_syscall_exit(regs, stepping);
> 		user_rewind_single_step(regs);
> 	}
>
> otherwise I think we could get a spurious SIGTRAP on return to userspace.
> What do you think?

Yes, this change seems reasonable, though you may want to make the rewind
conditional on stepping, since otherwise the armed pstate may become
visible to a signal handler/ptrace signal stop, which may be unexpected.

> This has also got me thinking about your other patch to report a pseudo-step
> exception on entry to a signal handler:
>
>   https://lore.kernel.org/r/20200524043827.GA33001@juliacomputing.com
>
> Although we don't actually disarm the step logic there (and so you might
> expect a spurious SIGTRAP on the second instruction of the handler), I
> think it's ok because the tracer will either do PTRACE_SINGLESTEP (and
> rearm the state machine) or PTRACE_CONT (and so stepping will be
> disabled). Do you agree?

Yes, I had thought about whether to re-arm or rewind the single-step logic,
but then realized that since the next event was a ptrace stop anyway, the
ptracer could decide.
With your patch here, it would always just depend on which ptrace continue
method is used, which is fine.

> > This is problematic, because the ptracer may want to inspect the
> > result of the syscall instruction. On other architectures, this
> > problem is solved with a pseudo-singlestep trap that gets executed
> > if you resume from a syscall-entry-like trap with PTRACE_SINGLESTEP.
> > See the below patch for the change I'm proposing. There is a slight
> > issue with that patch, still: It now makes the x7 issue apply to the
> > singlestep trap at exit, so we should do the patch to fix that issue
> > before we apply that change (or manually check for this situation
> > and issue the pseudo-singlestep trap manually).
>
> I don't see the dependency on the x7 issue; x7 is nobbled on syscall entry,
> so it will be nobbled in the pseudo-step trap as well as the hardware step
> trap on syscall return. I'd also like to backport this to stable, without
> having to backport an optional extension to the ptrace API for preserving
> x7. Or are you saying that the value of x7 should be PTRACE_SYSCALL_ENTER
> for the pseudo trap? That seems a bit weird to me, but then this is all
> weird anyway.

So the issue here is that writes to x7 at the singlestep stop after a
seccomp stop will now be ignored (and reads incorrect), which is a definite
change in behavior and one that I would not recommend backporting to stable.
If you do want to backport something to stable, I would recommend making
the register modification conditional on !stepping for the backport.

> I think that's still the case with my addition above, so that's good.
> Any other quirks you ran into that we should address here? Now that I have
> this stuff partially paged-in, it would be good to fix a bunch of this
> at once. I can send out a v2 which includes the two patches from you
> once we're agreed on the details.
As for other quirks, the behavior with negative syscalls is a bit odd,
but not necessarily arm64-specific (though arm64 has an additional quirk).
I'd emailed about that separately here:

  https://lkml.org/lkml/2020/5/22/1216

There's also the issue that orig_x0 is inaccessible, so syscall restarts
of syscalls that have had their registers modified by a ptracer will
potentially unexpectedly use the x0 value from before modification during
the restart, rather than the modified value. My preference would be to fix
these two issues along with the x7 issue, by introducing a new regset that:

- Explicitly splits out orig_x0
- Sets regular x0 to -ENOSYS before the syscall entry stop (and uses the
  orig_x0 value for syscall processing)
- Addresses the x7 issue

As I said, I'm planning to send a patch along these lines, but haven't had
the time yet. Perhaps this weekend.

Lastly, there was one additional issue with process_vm_readv, which isn't
directly ptrace-related, but slightly adjacent (and also not arm64-specific).
I wrote about that here:

  https://lkml.org/lkml/2020/5/24/466

I think that's everything I ran into from the kernel perspective, though
I'm not 100% done porting yet, so other things may come up.

Keno
Hi,

On 6/4/20 5:32 AM, Will Deacon wrote:
> Hi Keno,
>
> Cheers for the really helpful explanation. I have a bunch of
> questions/comments, since it's not very often that somebody shows up who
> understands how this is supposed to work and so I'd like to take advantage
> of that!
>
> On Wed, Jun 03, 2020 at 12:56:24PM -0400, Keno Fischer wrote:
>> On Wed, Jun 3, 2020 at 11:53 AM Will Deacon <will@kernel.org> wrote:
>>>> However, at the same time as changing this, we should probably make sure
>>>> to enable the syscall exit pseudo-singlestep trap (similar issue as the other
>>>> patch I had sent for the signal pseudo-singlestep trap), since otherwise
>>>> ptracers might get confused about the lack of singlestep trap during a
>>>> singlestep -> seccomp -> singlestep path (which would give one trap
>>>> less with this patch than before).
>>>
>>> Hmm, I don't completely follow your example. Please could you spell it
>>> out a bit more? I fast-forward the stepping state machine on sigreturn,
>>> which I thought would be sufficient. Perhaps you're referring to a variant
>>> of the situation mentioned by Mark, which I didn't think could happen
>>> with ptrace [2].
>>
>> Sure suppose we have code like the following:
>>
>> 0x0: svc #0
>> 0x4: str x0, [x7]
>> ...
>>
>> Then, if there's a seccomp filter active that just does
>> SECCOMP_RET_TRACE of everything, right now we get traps:
>>
>> <- (ip: 0x0)
>> -> PTRACE_SINGLESTEP
>> <- (ip: 0x4 - seccomp trap)
>> -> PTRACE_SINGLESTEP
>> <- SIGTRAP (ip: 0x4 - TRAP_TRACE trap)
>> -> PTRACE_SINGLESTEP
>> <- SIGTRAP (ip: 0x8 - TRAP_TRACE trap)
>>
>> With your proposed patch, we instead get
>>
>> <- (ip: 0x0)
>> -> PTRACE_SINGLESTEP
>> <- (ip: 0x4 - seccomp trap)
>> -> PTRACE_SINGLESTEP
>> <- SIGTRAP (ip: 0x8 - TRAP_TRACE trap)
>
> Urgh, and actually, I think this is *only* the case if the seccomp
> handler actually changes a register in the target, right?
>
> In which case, your proposed patch should probably do something like:
>
> 	if (dir == PTRACE_SYSCALL_EXIT) {
> 		bool stepping = test_thread_flag(TIF_SINGLESTEP);
>
> 		tracehook_report_syscall_exit(regs, stepping);
> 		user_rewind_single_step(regs);
> 	}
>
> otherwise I think we could get a spurious SIGTRAP on return to userspace.
> What do you think?
>
> This has also got me thinking about your other patch to report a pseudo-step
> exception on entry to a signal handler:
>
>   https://lore.kernel.org/r/20200524043827.GA33001@juliacomputing.com
>
> Although we don't actually disarm the step logic there (and so you might
> expect a spurious SIGTRAP on the second instruction of the handler), I
> think it's ok because the tracer will either do PTRACE_SINGLESTEP (and
> rearm the state machine) or PTRACE_CONT (and so stepping will be
> disabled). Do you agree?
>
>> This is problematic, because the ptracer may want to inspect the
>> result of the syscall instruction. On other architectures, this
>> problem is solved with a pseudo-singlestep trap that gets executed
>> if you resume from a syscall-entry-like trap with PTRACE_SINGLESTEP.
>> See the below patch for the change I'm proposing. There is a slight
>> issue with that patch, still: It now makes the x7 issue apply to the
>> singlestep trap at exit, so we should do the patch to fix that issue
>> before we apply that change (or manually check for this situation
>> and issue the pseudo-singlestep trap manually).
>
> I don't see the dependency on the x7 issue; x7 is nobbled on syscall entry,
> so it will be nobbled in the pseudo-step trap as well as the hardware step
> trap on syscall return. I'd also like to backport this to stable, without
> having to backport an optional extension to the ptrace API for preserving
> x7. Or are you saying that the value of x7 should be PTRACE_SYSCALL_ENTER
> for the pseudo trap? That seems a bit weird to me, but then this is all
> weird anyway.
>
>> My proposed patch below also changes
>>
>> <- (ip: 0x0)
>> -> PTRACE_SYSCALL
>> <- (ip: 0x4 - syscall entry trap)
>> -> PTRACE_SINGLESTEP
>> <- SIGTRAP (ip: 0x8 - TRAP_TRACE trap)
>>
>> to
>>
>> <- (ip: 0x0)
>> -> PTRACE_SYSCALL
>> <- (ip: 0x4 - syscall entry trap)
>> -> PTRACE_SINGLESTEP
>> <- SIGTRAP (ip: 0x4 - pseudo-singlestep exit trap)
>> -> PTRACE_SINGLESTEP
>> <- SIGTRAP (ip: 0x8 - TRAP_TRACE trap)
>>
>> But I consider that a bugfix, since that's how other architectures
>> behave and I was going to send in this patch for that reason anyway
>> (since this was another one of the aarch64 ptrace quirks we had to
>> work around).
>
> I think that's still the case with my addition above, so that's good.
> Any other quirks you ran into that we should address here? Now that I have
> this stuff partially paged-in, it would be good to fix a bunch of this
> at once. I can send out a v2 which includes the two patches from you
> once we're agreed on the details.

Since we're discussing arm64 ptrace/kernel quirks, I'm gonna go ahead and
describe a strange behavior on arm64 that I could not reproduce on x86,
for example. I apologize for hijacking the thread if this is a non-issue
or not related.

This is something I noticed when single-stepping over fork and vfork
syscalls in GDB, so handling of PTRACE_EVENT_FORK, PTRACE_EVENT_VFORK and
PTRACE_EVENT_VFORK_DONE. The situation seems to happen more reliably with
vforks since it is a two-stage operation with VFORK and VFORK_DONE.

Suppose we're stopped at a vfork syscall instruction and that the child we
spawn will exit immediately. If we attempt to single-step that particular
instruction, this is what happens for arm64:

--
[Step over vfork syscall]
ptrace(PTRACE_SINGLESTEP, 63049, 0x1, SIG_0) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=63049, si_uid=13595, si_status=SIGTRAP, si_utime=0, si_stime=0} ---

[vfork event for child 63052]
ptrace(PTRACE_GETEVENTMSG, 63049, NULL, [63052]) = 0
...
[Detach child]
ptrace(PTRACE_DETACH, 63052, NULL, SIG_0) = 0
ptrace(PTRACE_CONT, 63049, 0x1, SIG_0) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=63049, si_uid=13595, si_status=SIGTRAP, si_utime=0, si_stime=0} ---
...

ptrace(PTRACE_SINGLESTEP, 63049, 0x1, SIG_0) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=63049, si_uid=13595, si_status=SIGCHLD, si_utime=0, si_stime=0} ---
--

For x86-64, we have this:

--
[Step over vfork syscall]
ptrace(PTRACE_SINGLESTEP, 13484, 0x1, SIG_0) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=13484, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} ---
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=13493, si_uid=1000, si_status=SIGSTOP, si_utime=0, si_stime=0} ---

[vfork event for child 13493]
ptrace(PTRACE_GETEVENTMSG, 13484, NULL, [13493]) = 0
...

[Detach child]
ptrace(PTRACE_DETACH, 13493, NULL, SIG_0) = 0
ptrace(PTRACE_CONT, 13484, 0x1, SIG_0) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=13484, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} ---
...

ptrace(PTRACE_SINGLESTEP, 13484, 0x1, SIG_0) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=13484, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} ---
--

There are a couple of things off:

1 - x86-64 seems to get an extra SIGSTOP when we single-step over the
vfork syscall, though this doesn't seem to do any harm.

2 - This is the one that throws GDB off. In the last single-step request,
arm64 gets a SIGCHLD instead of the SIGTRAP x86-64 gets.

I did some experiments with it, and it seems the last SIGCHLD is more
prone to being delivered (instead of a SIGTRAP) if we put some load on
the machine (by firing off processes or producing a lot of screen output,
for example).

Does this ring any bells? I suppose signal delivery order is not
guaranteed in this context, but x86-64 seems to deliver them consistently
in the same order.
Hi Luis,

On Fri, Jun 5, 2020 at 12:50 AM Luis Machado <luis.machado@linaro.org> wrote:
>
> 1 - x86-64 seems to get an extra SIGSTOP when we single-step over the
> vfork syscall, though this doesn't seem to do any harm.

Is it possible that on arm64 you attached to the original tracee using
PTRACE_SEIZE, but used some other method on x86-64? That would explain
this difference.

> 2 - This is the one that throws GDB off. In the last single-step
> request, arm64 gets a SIGCHLD instead of the SIGTRAP x86-64 gets.

I believe this is ok. The timing of SIGCHLD is not guaranteed, so it's
allowed to pre-empt the single step. I have seen some differences in
signal delivery order on arm64, but I believe it just comes down to
scheduling and performance differences between the systems, since these
interactions are a bit racy.

Keno
diff --git a/arch/arm64/include/asm/debug-monitors.h b/arch/arm64/include/asm/debug-monitors.h
index e5ceea213e39..0b298f48f5bf 100644
--- a/arch/arm64/include/asm/debug-monitors.h
+++ b/arch/arm64/include/asm/debug-monitors.h
@@ -109,6 +109,8 @@ void disable_debug_monitors(enum dbg_active_el el);
 
 void user_rewind_single_step(struct task_struct *task);
 void user_fastforward_single_step(struct task_struct *task);
+void user_regs_reset_single_step(struct user_pt_regs *regs,
+				 struct task_struct *task);
 
 void kernel_enable_single_step(struct pt_regs *regs);
 void kernel_disable_single_step(void);
diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index 15e80c876d46..732e7ecaa692 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -141,17 +141,20 @@ postcore_initcall(debug_monitors_init);
 /*
  * Single step API and exception handling.
  */
-static void set_regs_spsr_ss(struct pt_regs *regs)
+static void set_user_regs_spsr_ss(struct user_pt_regs *regs)
 {
 	regs->pstate |= DBG_SPSR_SS;
 }
-NOKPROBE_SYMBOL(set_regs_spsr_ss);
+NOKPROBE_SYMBOL(set_user_regs_spsr_ss);
 
-static void clear_regs_spsr_ss(struct pt_regs *regs)
+static void clear_user_regs_spsr_ss(struct user_pt_regs *regs)
 {
 	regs->pstate &= ~DBG_SPSR_SS;
 }
-NOKPROBE_SYMBOL(clear_regs_spsr_ss);
+NOKPROBE_SYMBOL(clear_user_regs_spsr_ss);
+
+#define set_regs_spsr_ss(r)	set_user_regs_spsr_ss(&(r)->user_regs)
+#define clear_regs_spsr_ss(r)	clear_user_regs_spsr_ss(&(r)->user_regs)
 
 static DEFINE_SPINLOCK(debug_hook_lock);
 static LIST_HEAD(user_step_hook);
@@ -402,6 +405,15 @@ void user_fastforward_single_step(struct task_struct *task)
 		clear_regs_spsr_ss(task_pt_regs(task));
 }
 
+void user_regs_reset_single_step(struct user_pt_regs *regs,
+				 struct task_struct *task)
+{
+	if (test_tsk_thread_flag(task, TIF_SINGLESTEP))
+		set_user_regs_spsr_ss(regs);
+	else
+		clear_user_regs_spsr_ss(regs);
+}
+
 /* Kernel API */
 void kernel_enable_single_step(struct pt_regs *regs)
 {
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 585dd7f5c826..e871ab3ab29b 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -1934,8 +1934,8 @@ static int valid_native_regs(struct user_pt_regs *regs)
  */
 int valid_user_regs(struct user_pt_regs *regs, struct task_struct *task)
 {
-	if (!test_tsk_thread_flag(task, TIF_SINGLESTEP))
-		regs->pstate &= ~DBG_SPSR_SS;
+	/* https://lore.kernel.org/lkml/20191118131525.GA4180@willie-the-truck */
+	user_regs_reset_single_step(regs, task);
 
 	if (is_compat_thread(task_thread_info(task)))
 		return valid_compat_regs(regs);
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index 801d56cdf701..c57a077f66cf 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -505,8 +505,12 @@ static int restore_sigframe(struct pt_regs *regs,
 		forget_syscall(regs);
 
 	err |= !valid_user_regs(&regs->user_regs, current);
-	if (err == 0)
+
+	if (err == 0) {
+		/* Make it look like we stepped the sigreturn system call */
+		user_fastforward_single_step(current);
 		err = parse_user_sigframe(&user, sf);
+	}
 
 	if (err == 0 && system_supports_fpsimd()) {
 		if (!user.fpsimd)