From patchwork Fri Nov 24 11:05:11 2023
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 13467538
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Cc: broonie@kernel.org, catalin.marinas@arm.com, kaleshsingh@google.com,
    madvenka@linux.microsoft.com, mark.rutland@arm.com, puranjay12@gmail.com,
    will@kernel.org
Subject: [PATCH 2/2] arm64: stacktrace: factor out kunwind_stack_walk()
Date: Fri, 24 Nov 2023 11:05:11 +0000
Message-Id: <20231124110511.2795958-3-mark.rutland@arm.com>
In-Reply-To: <20231124110511.2795958-1-mark.rutland@arm.com>
References: <20231124110511.2795958-1-mark.rutland@arm.com>

Currently arm64 uses the generic arch_stack_walk() interface for all
stack walking code. This only passes a PC value and cookie to the unwind
callback, whereas we'd like to pass some additional information in some
cases. For example, the BPF exception unwinder wants the FP; for
reliable stacktrace we'll want to perform additional checks on other
portions of unwind state; and we'd like to expand the information
printed by dump_backtrace() to include provenance and reliability
information.

As preparation for all of the above, this patch factors the core unwind
logic out of arch_stack_walk() and into a new kunwind_stack_walk()
function which provides all of the unwind state to a callback function.
The existing arch_stack_walk() interface is implemented atop this.

The kunwind_stack_walk() function is intended to be a private
implementation detail of unwinders in stacktrace.c, and not something to
be exported generally to kernel code. It is __always_inline'd into its
caller so that neither it nor its caller appear in stacktraces (which is
the existing/required behavior for arch_stack_walk() and friends) and so
that the compiler can optimize away some of the indirection.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Kalesh Singh
Cc: Madhavan T. Venkataraman
Cc: Mark Brown
Cc: Puranjay Mohan
Cc: Will Deacon
Reviewed-by: Kalesh Singh
Reviewed-by: Puranjay Mohan
---
 arch/arm64/kernel/stacktrace.c | 39 ++++++++++++++++++++++++++++------
 1 file changed, 33 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index aafc89192787a..7f88028a00c02 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -154,8 +154,10 @@ kunwind_next(struct kunwind_state *state)
 	return kunwind_recover_return_address(state);
 }
 
+typedef bool (*kunwind_consume_fn)(const struct kunwind_state *state, void *cookie);
+
 static __always_inline void
-do_kunwind(struct kunwind_state *state, stack_trace_consume_fn consume_entry,
+do_kunwind(struct kunwind_state *state, kunwind_consume_fn consume_state,
 	   void *cookie)
 {
 	if (kunwind_recover_return_address(state))
@@ -164,7 +166,7 @@ do_kunwind(struct kunwind_state *state, stack_trace_consume_fn consume_entry,
 	while (1) {
 		int ret;
 
-		if (!consume_entry(cookie, state->common.pc))
+		if (!consume_state(state, cookie))
 			break;
 
 		ret = kunwind_next(state);
 		if (ret < 0)
@@ -201,9 +203,10 @@ do_kunwind(struct kunwind_state *state, stack_trace_consume_fn consume_entry,
 			: stackinfo_get_unknown();		\
 	})
 
-noinline noinstr void arch_stack_walk(stack_trace_consume_fn consume_entry,
-			      void *cookie, struct task_struct *task,
-			      struct pt_regs *regs)
+static __always_inline void
+kunwind_stack_walk(kunwind_consume_fn consume_state,
+		   void *cookie, struct task_struct *task,
+		   struct pt_regs *regs)
 {
 	struct stack_info stacks[] = {
 		stackinfo_get_task(task),
@@ -236,7 +239,31 @@ noinline noinstr void arch_stack_walk(stack_trace_consume_fn consume_entry,
 		kunwind_init_from_task(&state, task);
 	}
 
-	do_kunwind(&state, consume_entry, cookie);
+	do_kunwind(&state, consume_state, cookie);
+}
+
+struct kunwind_consume_entry_data {
+	stack_trace_consume_fn consume_entry;
+	void *cookie;
+};
+
+static bool
+arch_kunwind_consume_entry(const struct kunwind_state *state, void *cookie)
+{
+	struct kunwind_consume_entry_data *data = cookie;
+
+	return data->consume_entry(data->cookie, state->common.pc);
+}
+
+noinline noinstr void arch_stack_walk(stack_trace_consume_fn consume_entry,
+				      void *cookie, struct task_struct *task,
+				      struct pt_regs *regs)
+{
+	struct kunwind_consume_entry_data data = {
+		.consume_entry = consume_entry,
+		.cookie = cookie,
+	};
+
+	kunwind_stack_walk(arch_kunwind_consume_entry, &data, task, regs);
 }
 
 static bool dump_backtrace_entry(void *arg, unsigned long where)