| Message ID | defe9ee646da85d15f70259018480e6182cbc0c2.1603745852.git.pcc@google.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | [v2,1/2] arm64: Change the on_*stack functions to take a size argument |
On Mon, Oct 26, 2020 at 2:00 PM Peter Collingbourne <pcc@google.com> wrote:
>
> The AAPCS places no requirements on the alignment of the frame
> record. In theory it could be placed anywhere, although it seems
> sensible to require it to be aligned to 8 bytes. With an upcoming
> enhancement to tag-based KASAN Clang will begin creating frame records
> located at an address that is only aligned to 8 bytes. Accommodate
> such frame records in the stack unwinding code.
>
> As pointed out by Mark Rutland, the userspace stack unwinding code
> has the same problem, so fix it there as well.

As a reminder, this series is required in order to avoid breaking
stack traces once [1] is applied.

Peter

[1] https://lore.kernel.org/linux-arm-kernel/20201120230211.584929-1-pcc@google.com/T/
On Mon, Oct 26, 2020 at 10:00 PM Peter Collingbourne <pcc@google.com> wrote:
>
> The AAPCS places no requirements on the alignment of the frame
> record. In theory it could be placed anywhere, although it seems
> sensible to require it to be aligned to 8 bytes. With an upcoming
> enhancement to tag-based KASAN Clang will begin creating frame records
> located at an address that is only aligned to 8 bytes. Accommodate
> such frame records in the stack unwinding code.
>
> As pointed out by Mark Rutland, the userspace stack unwinding code
> has the same problem, so fix it there as well.
>
> Signed-off-by: Peter Collingbourne <pcc@google.com>
> Link: https://linux-review.googlesource.com/id/Ia22c375230e67ca055e9e4bb639383567f7ad268
> ---
> v2:
> - fix it in the userspace unwinding code as well
>
>  arch/arm64/kernel/perf_callchain.c | 2 +-
>  arch/arm64/kernel/stacktrace.c     | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
> index 88ff471b0bce..4a72c2727309 100644
> --- a/arch/arm64/kernel/perf_callchain.c
> +++ b/arch/arm64/kernel/perf_callchain.c
> @@ -116,7 +116,7 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
>  		tail = (struct frame_tail __user *)regs->regs[29];
>
>  		while (entry->nr < entry->max_stack &&
> -		       tail && !((unsigned long)tail & 0xf))
> +		       tail && !((unsigned long)tail & 0x7))
>  			tail = user_backtrace(tail, entry);
>  	} else {
>  #ifdef CONFIG_COMPAT
> diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
> index ce613e22b58b..3bc1c44b7910 100644
> --- a/arch/arm64/kernel/stacktrace.c
> +++ b/arch/arm64/kernel/stacktrace.c
> @@ -44,7 +44,7 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
>  	unsigned long fp = frame->fp;
>  	struct stack_info info;
>
> -	if (fp & 0xf)
> +	if (fp & 0x7)
>  		return -EINVAL;
>
>  	if (!tsk)
> --
> 2.29.0.rc2.309.g374f81d7ae-goog

Acked-by: Andrey Konovalov <andreyknvl@google.com>
On Fri, Nov 20, 2020 at 06:32:35PM -0800, Peter Collingbourne wrote:
> On Mon, Oct 26, 2020 at 2:00 PM Peter Collingbourne <pcc@google.com> wrote:
> > The AAPCS places no requirements on the alignment of the frame
> > record. In theory it could be placed anywhere, although it seems
> > sensible to require it to be aligned to 8 bytes. With an upcoming
> > enhancement to tag-based KASAN Clang will begin creating frame records
> > located at an address that is only aligned to 8 bytes. Accommodate
> > such frame records in the stack unwinding code.
> >
> > As pointed out by Mark Rutland, the userspace stack unwinding code
> > has the same problem, so fix it there as well.
>
> As a reminder, this series is required in order to avoid breaking
> stack traces once [1] is applied.
>
> Peter
>
> [1] https://lore.kernel.org/linux-arm-kernel/20201120230211.584929-1-pcc@google.com/T/

While this series looks fine, on its own it doesn't solve any issue we
currently have. So, could you please post this series together with the
outlined tags mismatch checks patch? I think they should be merged
together.

Thanks.
On Tue, Dec 1, 2020 at 9:28 AM Catalin Marinas <catalin.marinas@arm.com> wrote:
>
> On Fri, Nov 20, 2020 at 06:32:35PM -0800, Peter Collingbourne wrote:
> > On Mon, Oct 26, 2020 at 2:00 PM Peter Collingbourne <pcc@google.com> wrote:
> > > The AAPCS places no requirements on the alignment of the frame
> > > record. In theory it could be placed anywhere, although it seems
> > > sensible to require it to be aligned to 8 bytes. With an upcoming
> > > enhancement to tag-based KASAN Clang will begin creating frame records
> > > located at an address that is only aligned to 8 bytes. Accommodate
> > > such frame records in the stack unwinding code.
> > >
> > > As pointed out by Mark Rutland, the userspace stack unwinding code
> > > has the same problem, so fix it there as well.
> >
> > As a reminder, this series is required in order to avoid breaking
> > stack traces once [1] is applied.
> >
> > Peter
> >
> > [1] https://lore.kernel.org/linux-arm-kernel/20201120230211.584929-1-pcc@google.com/T/
>
> While this series looks fine, on its own it doesn't solve any issue we
> currently have. So, could you please post this series together with the
> outlined tags mismatch checks patch? I think they should be merged
> together.
>
> Thanks.

Done (as also requested by Mark Rutland) in v3 of that series.

Peter
diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
index 88ff471b0bce..4a72c2727309 100644
--- a/arch/arm64/kernel/perf_callchain.c
+++ b/arch/arm64/kernel/perf_callchain.c
@@ -116,7 +116,7 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
 		tail = (struct frame_tail __user *)regs->regs[29];

 		while (entry->nr < entry->max_stack &&
-		       tail && !((unsigned long)tail & 0xf))
+		       tail && !((unsigned long)tail & 0x7))
 			tail = user_backtrace(tail, entry);
 	} else {
 #ifdef CONFIG_COMPAT
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index ce613e22b58b..3bc1c44b7910 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -44,7 +44,7 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	unsigned long fp = frame->fp;
 	struct stack_info info;

-	if (fp & 0xf)
+	if (fp & 0x7)
 		return -EINVAL;

 	if (!tsk)
The AAPCS places no requirements on the alignment of the frame
record. In theory it could be placed anywhere, although it seems
sensible to require it to be aligned to 8 bytes. With an upcoming
enhancement to tag-based KASAN Clang will begin creating frame records
located at an address that is only aligned to 8 bytes. Accommodate
such frame records in the stack unwinding code.

As pointed out by Mark Rutland, the userspace stack unwinding code
has the same problem, so fix it there as well.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/Ia22c375230e67ca055e9e4bb639383567f7ad268
---
v2:
- fix it in the userspace unwinding code as well

 arch/arm64/kernel/perf_callchain.c | 2 +-
 arch/arm64/kernel/stacktrace.c     | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)