| Message ID | 20200623085436.3696-4-will@kernel.org (mailing list archive) |
|---|---|
| State | New, archived |
| Series | Fix unwinding through sigreturn trampolines |
On Tue, Jun 23, 2020 at 09:54:33AM +0100, Will Deacon wrote:
> The 32-bit sigreturn trampoline in the compat sigpage matches the binary
> representation of the arch/arm/ sigpage exactly. This is important for
> debuggers (e.g. GDB) and unwinders (e.g. libunwind) since they rely
> on matching the instruction sequence in order to identify that they are
> unwinding through a signal. The same cannot be said for the sigreturn
> trampoline in the compat vDSO, which defeats the unwinder heuristics and
> instead attempts to use unwind directives for the unwinding. This is in
> contrast to arch/arm/, which never uses the vDSO for sigreturn.
>
> Ensure compatibility with arch/arm/ and existing unwinders by always
> using the sigpage for the sigreturn trampoline, regardless of the
> presence of the compat vDSO.
>
> Signed-off-by: Will Deacon <will@kernel.org>

AFAICT this patch doesn't leave any dangling references to the vdso in
the actual signal code, and the diff looks sound to me.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/kernel/signal32.c | 25 -------------------------
>  1 file changed, 25 deletions(-)
>
> diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
> index 0aa0b33744de..2f507f565c48 100644
> --- a/arch/arm64/kernel/signal32.c
> +++ b/arch/arm64/kernel/signal32.c
> @@ -342,30 +342,6 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
>  		retcode = ptr_to_compat(ka->sa.sa_restorer);
>  	} else {
>  		/* Set up sigreturn pointer */
> -#ifdef CONFIG_COMPAT_VDSO
> -		void *vdso_base = current->mm->context.vdso;
> -		void *vdso_trampoline;
> -
> -		if (ka->sa.sa_flags & SA_SIGINFO) {
> -			if (thumb) {
> -				vdso_trampoline = VDSO_SYMBOL(vdso_base,
> -							      compat_rt_sigreturn_thumb);
> -			} else {
> -				vdso_trampoline = VDSO_SYMBOL(vdso_base,
> -							      compat_rt_sigreturn_arm);
> -			}
> -		} else {
> -			if (thumb) {
> -				vdso_trampoline = VDSO_SYMBOL(vdso_base,
> -							      compat_sigreturn_thumb);
> -			} else {
> -				vdso_trampoline = VDSO_SYMBOL(vdso_base,
> -							      compat_sigreturn_arm);
> -			}
> -		}
> -
> -		retcode = ptr_to_compat(vdso_trampoline) + thumb;
> -#else
>  		unsigned int idx = thumb << 1;
>
>  		if (ka->sa.sa_flags & SA_SIGINFO)
> @@ -373,7 +349,6 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
>
>  		retcode = (unsigned long)current->mm->context.sigpage +
>  			  (idx << 2) + thumb;
> -#endif
>  	}
>
>  	regs->regs[0] = usig;
> --
> 2.27.0.111.gc72c7da667-goog
On 6/23/20 9:54 AM, Will Deacon wrote:
> The 32-bit sigreturn trampoline in the compat sigpage matches the binary
> representation of the arch/arm/ sigpage exactly. This is important for
> debuggers (e.g. GDB) and unwinders (e.g. libunwind) since they rely
> on matching the instruction sequence in order to identify that they are
> unwinding through a signal. The same cannot be said for the sigreturn
> trampoline in the compat vDSO, which defeats the unwinder heuristics and
> instead attempts to use unwind directives for the unwinding. This is in
> contrast to arch/arm/, which never uses the vDSO for sigreturn.
>
> Ensure compatibility with arch/arm/ and existing unwinders by always
> using the sigpage for the sigreturn trampoline, regardless of the
> presence of the compat vDSO.
>
> Signed-off-by: Will Deacon <will@kernel.org>

Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>

> ---
>  arch/arm64/kernel/signal32.c | 25 -------------------------
>  1 file changed, 25 deletions(-)
>
> diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
> index 0aa0b33744de..2f507f565c48 100644
> --- a/arch/arm64/kernel/signal32.c
> +++ b/arch/arm64/kernel/signal32.c
> @@ -342,30 +342,6 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
>  		retcode = ptr_to_compat(ka->sa.sa_restorer);
>  	} else {
>  		/* Set up sigreturn pointer */
> -#ifdef CONFIG_COMPAT_VDSO
> -		void *vdso_base = current->mm->context.vdso;
> -		void *vdso_trampoline;
> -
> -		if (ka->sa.sa_flags & SA_SIGINFO) {
> -			if (thumb) {
> -				vdso_trampoline = VDSO_SYMBOL(vdso_base,
> -							      compat_rt_sigreturn_thumb);
> -			} else {
> -				vdso_trampoline = VDSO_SYMBOL(vdso_base,
> -							      compat_rt_sigreturn_arm);
> -			}
> -		} else {
> -			if (thumb) {
> -				vdso_trampoline = VDSO_SYMBOL(vdso_base,
> -							      compat_sigreturn_thumb);
> -			} else {
> -				vdso_trampoline = VDSO_SYMBOL(vdso_base,
> -							      compat_sigreturn_arm);
> -			}
> -		}
> -
> -		retcode = ptr_to_compat(vdso_trampoline) + thumb;
> -#else
>  		unsigned int idx = thumb << 1;
>
>  		if (ka->sa.sa_flags & SA_SIGINFO)
> @@ -373,7 +349,6 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
>
>  		retcode = (unsigned long)current->mm->context.sigpage +
>  			  (idx << 2) + thumb;
> -#endif
>  	}
>
>  	regs->regs[0] = usig;
diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
index 0aa0b33744de..2f507f565c48 100644
--- a/arch/arm64/kernel/signal32.c
+++ b/arch/arm64/kernel/signal32.c
@@ -342,30 +342,6 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
 		retcode = ptr_to_compat(ka->sa.sa_restorer);
 	} else {
 		/* Set up sigreturn pointer */
-#ifdef CONFIG_COMPAT_VDSO
-		void *vdso_base = current->mm->context.vdso;
-		void *vdso_trampoline;
-
-		if (ka->sa.sa_flags & SA_SIGINFO) {
-			if (thumb) {
-				vdso_trampoline = VDSO_SYMBOL(vdso_base,
-							      compat_rt_sigreturn_thumb);
-			} else {
-				vdso_trampoline = VDSO_SYMBOL(vdso_base,
-							      compat_rt_sigreturn_arm);
-			}
-		} else {
-			if (thumb) {
-				vdso_trampoline = VDSO_SYMBOL(vdso_base,
-							      compat_sigreturn_thumb);
-			} else {
-				vdso_trampoline = VDSO_SYMBOL(vdso_base,
-							      compat_sigreturn_arm);
-			}
-		}
-
-		retcode = ptr_to_compat(vdso_trampoline) + thumb;
-#else
 		unsigned int idx = thumb << 1;

 		if (ka->sa.sa_flags & SA_SIGINFO)
@@ -373,7 +349,6 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,

 		retcode = (unsigned long)current->mm->context.sigpage +
 			  (idx << 2) + thumb;
-#endif
 	}

 	regs->regs[0] = usig;
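Editor's note: for readers unfamiliar with the sigpage, the code kept by this patch (the former #else branch above) packs the choice of trampoline into a small table index. The stand-alone C sketch below reproduces only that arithmetic; the sigpage base value is made up for illustration, and the assumption of 4-byte trampoline slots (matching the arch/arm/ sigreturn_codes layout) is the sketch's, not something stated in the patch.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Illustration only: the offset arithmetic from the retained sigpage path.
 * "sigpage" stands in for current->mm->context.sigpage and is a made-up
 * value here; the real layout lives in the kernel's AArch32 sigreturn code.
 */
static uintptr_t sigpage_retcode(uintptr_t sigpage, bool thumb, bool rt)
{
	unsigned int idx = thumb ? 2 : 0;	/* idx = thumb << 1 */

	if (rt)					/* SA_SIGINFO -> rt_sigreturn slots */
		idx += 3;

	/* Each index selects a 4-byte slot; bit 0 marks a Thumb entry point. */
	return sigpage + (idx << 2) + (thumb ? 1 : 0);
}

int main(void)
{
	const uintptr_t sigpage = 0xffff0000;	/* hypothetical mapping address */

	printf("ARM   sigreturn:    %#lx\n", (unsigned long)sigpage_retcode(sigpage, false, false));
	printf("Thumb sigreturn:    %#lx\n", (unsigned long)sigpage_retcode(sigpage, true,  false));
	printf("ARM   rt_sigreturn: %#lx\n", (unsigned long)sigpage_retcode(sigpage, false, true));
	printf("Thumb rt_sigreturn: %#lx\n", (unsigned long)sigpage_retcode(sigpage, true,  true));
	return 0;
}

With those inputs the four cases land at offsets 0, 8+1, 12 and 20+1 from the sigpage base, i.e. the ARM/Thumb sigreturn and rt_sigreturn slots, with the low bit set to request Thumb execution state on return.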
The 32-bit sigreturn trampoline in the compat sigpage matches the binary
representation of the arch/arm/ sigpage exactly. This is important for
debuggers (e.g. GDB) and unwinders (e.g. libunwind) since they rely
on matching the instruction sequence in order to identify that they are
unwinding through a signal. The same cannot be said for the sigreturn
trampoline in the compat vDSO, which defeats the unwinder heuristics and
instead attempts to use unwind directives for the unwinding. This is in
contrast to arch/arm/, which never uses the vDSO for sigreturn.

Ensure compatibility with arch/arm/ and existing unwinders by always
using the sigpage for the sigreturn trampoline, regardless of the
presence of the compat vDSO.

Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/signal32.c | 25 -------------------------
 1 file changed, 25 deletions(-)
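Editor's note on the "matching the instruction sequence" point: unwinders such as GDB decide they are in a signal frame by comparing the words at the candidate return address against known sigreturn encodings. The sketch below is a minimal illustration of that heuristic; the opcode constants are the conventional EABI "mov r7, #nr; svc #0" encodings and should be checked against arch/arm/kernel/sigreturn_codes.S and the unwinder sources rather than taken as authoritative.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Illustration only: the sort of pattern match an unwinder performs to
 * recognise a sigreturn trampoline.  Constants assumed, not quoted from
 * this thread.
 */
#define ARM_MOV_R7_SIGRETURN	0xe3a07077u	/* mov r7, #119 (__NR_sigreturn) */
#define ARM_MOV_R7_RT_SIGRETURN	0xe3a070adu	/* mov r7, #173 (__NR_rt_sigreturn) */
#define ARM_SVC_0		0xef000000u	/* svc #0 */

static bool looks_like_arm_sigreturn(const uint32_t insn[2])
{
	return (insn[0] == ARM_MOV_R7_SIGRETURN ||
		insn[0] == ARM_MOV_R7_RT_SIGRETURN) &&
	       insn[1] == ARM_SVC_0;
}

int main(void)
{
	/* Pretend these words were read from the tracee at the return address. */
	const uint32_t frame[2] = { ARM_MOV_R7_RT_SIGRETURN, ARM_SVC_0 };

	printf("signal frame? %s\n", looks_like_arm_sigreturn(frame) ? "yes" : "no");
	return 0;
}

Because the sigpage trampoline is byte-for-byte the arch/arm/ sequence, this kind of match keeps working; a vDSO trampoline with different code would only be recognised via its unwind directives, which is the situation the patch avoids.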