From patchwork Wed Jun 30 22:33:54 2021
X-Patchwork-Submitter: "Madhavan T. Venkataraman"
X-Patchwork-Id: 12352875
From: madvenka@linux.microsoft.com
To: broonie@kernel.org, mark.rutland@arm.com, jpoimboe@redhat.com,
 ardb@kernel.org, nobuta.keiya@fujitsu.com, sjitindarsingh@gmail.com,
 catalin.marinas@arm.com, will@kernel.org, jmorris@namei.org,
 pasha.tatashin@soleen.com, jthierry@redhat.com,
 linux-arm-kernel@lists.infradead.org, live-patching@vger.kernel.org,
 linux-kernel@vger.kernel.org, madvenka@linux.microsoft.com
Subject: [RFC PATCH v6 1/3] arm64: Improve the unwinder return value
Date: Wed, 30 Jun 2021 17:33:54 -0500
Message-Id: <20210630223356.58714-2-madvenka@linux.microsoft.com>
In-Reply-To: <20210630223356.58714-1-madvenka@linux.microsoft.com>
References: <3f2aab69a35c243c5e97f47c4ad84046355f5b90>
 <20210630223356.58714-1-madvenka@linux.microsoft.com>

From: "Madhavan T. Venkataraman"

Currently, the unwinder uses a tri-state return value:

	 0		means "continue with the unwind"
	-ENOENT		means "successful termination of the stack trace"
	-EINVAL		means "fatal error, abort the stack trace"

This is confusing. To fix this, define an enumeration of the different
return codes to make them clear. Handle the return codes in all of the
unwind consumers.

Signed-off-by: Madhavan T. Venkataraman
---
 arch/arm64/include/asm/stacktrace.h | 14 ++++++--
 arch/arm64/kernel/perf_callchain.c  |  5 ++-
 arch/arm64/kernel/process.c         |  8 +++--
 arch/arm64/kernel/return_address.c  | 10 ++++--
 arch/arm64/kernel/stacktrace.c      | 53 ++++++++++++++++-------------
 arch/arm64/kernel/time.c            |  9 +++--
 6 files changed, 64 insertions(+), 35 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index eb29b1fe8255..6fcd58553fb1 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -30,6 +30,12 @@ struct stack_info {
 	enum stack_type type;
 };
 
+enum unwind_rc {
+	UNWIND_CONTINUE,	/* No errors encountered */
+	UNWIND_ABORT,		/* Fatal errors encountered */
+	UNWIND_FINISH,		/* End of stack reached successfully */
+};
+
 /*
  * A snapshot of a frame record or fp/lr register values, along with some
  * accounting information necessary for robust unwinding.
@@ -61,7 +67,8 @@ struct stackframe {
 #endif
 };
 
-extern int unwind_frame(struct task_struct *tsk, struct stackframe *frame);
+extern enum unwind_rc unwind_frame(struct task_struct *tsk,
+				   struct stackframe *frame);
 extern void walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
 			    bool (*fn)(void *, unsigned long), void *data);
 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
@@ -148,8 +155,8 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
 	return false;
 }
 
-static inline void start_backtrace(struct stackframe *frame,
-				   unsigned long fp, unsigned long pc)
+static inline enum unwind_rc start_backtrace(struct stackframe *frame,
+					     unsigned long fp, unsigned long pc)
 {
 	frame->fp = fp;
 	frame->pc = pc;
@@ -169,6 +176,7 @@ static inline void start_backtrace(struct stackframe *frame,
 	bitmap_zero(frame->stacks_done, __NR_STACK_TYPES);
 	frame->prev_fp = 0;
 	frame->prev_type = STACK_TYPE_UNKNOWN;
+	return UNWIND_CONTINUE;
 }
 
 #endif	/* __ASM_STACKTRACE_H */
diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
index 88ff471b0bce..f459208149ae 100644
--- a/arch/arm64/kernel/perf_callchain.c
+++ b/arch/arm64/kernel/perf_callchain.c
@@ -148,13 +148,16 @@ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
 			   struct pt_regs *regs)
 {
 	struct stackframe frame;
+	enum unwind_rc rc;
 
 	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
 		/* We don't support guest os callchain now */
 		return;
 	}
 
-	start_backtrace(&frame, regs->regs[29], regs->pc);
+	rc = start_backtrace(&frame, regs->regs[29], regs->pc);
+	if (rc == UNWIND_FINISH || rc == UNWIND_ABORT)
+		return;
 	walk_stackframe(current, &frame, callchain_trace, entry);
 }
 
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 6e60aa3b5ea9..e9c763b44fd4 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -573,6 +573,7 @@ unsigned long get_wchan(struct task_struct *p)
 	struct stackframe frame;
 	unsigned long stack_page, ret = 0;
 	int count = 0;
+	enum unwind_rc rc;
 
 	if (!p || p == current || p->state == TASK_RUNNING)
 		return 0;
@@ -580,10 +581,13 @@ unsigned long get_wchan(struct task_struct *p)
 	if (!stack_page)
 		return 0;
 
-	start_backtrace(&frame, thread_saved_fp(p), thread_saved_pc(p));
+	rc = start_backtrace(&frame, thread_saved_fp(p), thread_saved_pc(p));
+	if (rc == UNWIND_FINISH || rc == UNWIND_ABORT)
+		return 0;
 
 	do {
-		if (unwind_frame(p, &frame))
+		rc = unwind_frame(p, &frame);
+		if (rc == UNWIND_FINISH || rc == UNWIND_ABORT)
 			goto out;
 		if (!in_sched_functions(frame.pc)) {
 			ret = frame.pc;
diff --git a/arch/arm64/kernel/return_address.c b/arch/arm64/kernel/return_address.c
index a6d18755652f..1224e043e98f 100644
--- a/arch/arm64/kernel/return_address.c
+++ b/arch/arm64/kernel/return_address.c
@@ -36,13 +36,17 @@ void *return_address(unsigned int level)
 {
 	struct return_address_data data;
 	struct stackframe frame;
+	enum unwind_rc rc;
 
 	data.level = level + 2;
 	data.addr = NULL;
 
-	start_backtrace(&frame,
-			(unsigned long)__builtin_frame_address(0),
-			(unsigned long)return_address);
+	rc = start_backtrace(&frame,
+			     (unsigned long)__builtin_frame_address(0),
+			     (unsigned long)return_address);
+	if (rc == UNWIND_FINISH || rc == UNWIND_ABORT)
+		return NULL;
+
 	walk_stackframe(current, &frame, save_return_addr, &data);
 
 	if (!data.level)
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index d55bdfb7789c..e9c2c1fa9dde 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -39,26 +39,27 @@
  * records (e.g. a cycle), determined based on the location and fp value of A
  * and the location (but not the fp value) of B.
  */
-int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
+enum unwind_rc notrace unwind_frame(struct task_struct *tsk,
+				    struct stackframe *frame)
 {
 	unsigned long fp = frame->fp;
 	struct stack_info info;
 
 	/* Terminal record; nothing to unwind */
 	if (!fp)
-		return -ENOENT;
+		return UNWIND_FINISH;
 
 	if (fp & 0xf)
-		return -EINVAL;
+		return UNWIND_ABORT;
 
 	if (!tsk)
 		tsk = current;
 
 	if (!on_accessible_stack(tsk, fp, &info))
-		return -EINVAL;
+		return UNWIND_ABORT;
 
 	if (test_bit(info.type, frame->stacks_done))
-		return -EINVAL;
+		return UNWIND_ABORT;
 
 	/*
 	 * As stacks grow downward, any valid record on the same stack must be
@@ -75,7 +76,7 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	 */
 	if (info.type == frame->prev_type) {
 		if (fp <= frame->prev_fp)
-			return -EINVAL;
+			return UNWIND_ABORT;
 	} else {
 		set_bit(frame->prev_type, frame->stacks_done);
 	}
@@ -101,14 +102,14 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 		 */
 		ret_stack = ftrace_graph_get_ret_stack(tsk, frame->graph++);
 		if (WARN_ON_ONCE(!ret_stack))
-			return -EINVAL;
+			return UNWIND_ABORT;
 		frame->pc = ret_stack->ret;
 	}
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
 
 	frame->pc = ptrauth_strip_insn_pac(frame->pc);
 
-	return 0;
+	return UNWIND_CONTINUE;
 }
 NOKPROBE_SYMBOL(unwind_frame);
 
@@ -116,12 +117,12 @@ void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
 			     bool (*fn)(void *, unsigned long), void *data)
 {
 	while (1) {
-		int ret;
+		enum unwind_rc rc;
 
 		if (!fn(data, frame->pc))
 			break;
-		ret = unwind_frame(tsk, frame);
-		if (ret < 0)
+		rc = unwind_frame(tsk, frame);
+		if (rc == UNWIND_FINISH || rc == UNWIND_ABORT)
 			break;
 	}
 }
@@ -137,6 +138,7 @@ void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
 {
 	struct stackframe frame;
 	int skip = 0;
+	enum unwind_rc rc;
 
 	pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
 
@@ -153,17 +155,19 @@ void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
 		return;
 
 	if (tsk == current) {
-		start_backtrace(&frame,
-				(unsigned long)__builtin_frame_address(0),
-				(unsigned long)dump_backtrace);
+		rc = start_backtrace(&frame,
+				(unsigned long)__builtin_frame_address(0),
+				(unsigned long)dump_backtrace);
 	} else {
 		/*
 		 * task blocked in __switch_to
 		 */
-		start_backtrace(&frame,
-				thread_saved_fp(tsk),
-				thread_saved_pc(tsk));
+		rc = start_backtrace(&frame,
+				thread_saved_fp(tsk),
+				thread_saved_pc(tsk));
 	}
+	if (rc == UNWIND_FINISH || rc == UNWIND_ABORT)
+		return;
 
 	printk("%sCall trace:\n", loglvl);
 	do {
@@ -181,7 +185,8 @@ void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
 			 */
 			dump_backtrace_entry(regs->pc, loglvl);
 		}
-	} while (!unwind_frame(tsk, &frame));
+		rc = unwind_frame(tsk, &frame);
+	} while (rc != UNWIND_FINISH && rc != UNWIND_ABORT);
 
 	put_task_stack(tsk);
 }
@@ -199,17 +204,19 @@ noinline void arch_stack_walk(stack_trace_consume_fn consume_entry,
 			      struct pt_regs *regs)
 {
 	struct stackframe frame;
+	enum unwind_rc rc;
 
 	if (regs)
-		start_backtrace(&frame, regs->regs[29], regs->pc);
+		rc = start_backtrace(&frame, regs->regs[29], regs->pc);
 	else if (task == current)
-		start_backtrace(&frame,
+		rc = start_backtrace(&frame,
 				(unsigned long)__builtin_frame_address(1),
 				(unsigned long)__builtin_return_address(0));
 	else
-		start_backtrace(&frame, thread_saved_fp(task),
-				thread_saved_pc(task));
-
+		rc = start_backtrace(&frame, thread_saved_fp(task),
+				thread_saved_pc(task));
+	if (rc == UNWIND_FINISH || rc == UNWIND_ABORT)
+		return;
 	walk_stackframe(task, &frame, consume_entry, cookie);
 }
 
diff --git a/arch/arm64/kernel/time.c b/arch/arm64/kernel/time.c
index eebbc8d7123e..eb50218ec9a4 100644
--- a/arch/arm64/kernel/time.c
+++ b/arch/arm64/kernel/time.c
@@ -35,15 +35,18 @@
 unsigned long profile_pc(struct pt_regs *regs)
 {
 	struct stackframe frame;
+	enum unwind_rc rc;
 
 	if (!in_lock_functions(regs->pc))
 		return regs->pc;
 
-	start_backtrace(&frame, regs->regs[29], regs->pc);
+	rc = start_backtrace(&frame, regs->regs[29], regs->pc);
+	if (rc == UNWIND_FINISH || rc == UNWIND_ABORT)
+		return 0;
+
 	do {
-		int ret = unwind_frame(NULL, &frame);
-		if (ret < 0)
+		rc = unwind_frame(NULL, &frame);
+		if (rc == UNWIND_FINISH || rc == UNWIND_ABORT)
 			return 0;
 	} while (in_lock_functions(frame.pc));

From patchwork Wed Jun 30 22:33:55 2021
X-Patchwork-Submitter: "Madhavan T. Venkataraman"
X-Patchwork-Id: 12352871
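[Editorial note between patches 1 and 2] The three-way protocol that patch 1/3 substitutes for the old tri-state int can be exercised outside the kernel. The sketch below is user-space C, not kernel code: the enum names mirror the patch, the numeric values -2 and -22 are the Linux values of -ENOENT and -EINVAL, and the helper names (classify_old_return, unwind_should_stop) are hypothetical, introduced only to contrast the two protocols:

```c
#include <assert.h>

/* Mirror of the patch's enum unwind_rc (names taken from the patch). */
enum unwind_rc {
	UNWIND_CONTINUE,	/* no errors encountered, keep unwinding */
	UNWIND_ABORT,		/* fatal error, abort the stack trace */
	UNWIND_FINISH,		/* end of stack reached successfully */
};

/* The old tri-state protocol: 0, -ENOENT (-2), -EINVAL (-22). */
enum unwind_rc classify_old_return(int ret)
{
	if (ret == 0)
		return UNWIND_CONTINUE;	/* continue with the unwind */
	if (ret == -2)
		return UNWIND_FINISH;	/* -ENOENT: terminal record, clean end */
	return UNWIND_ABORT;		/* -EINVAL: fatal error */
}

/* The check every consumer in the patch performs after each call. */
int unwind_should_stop(enum unwind_rc rc)
{
	return rc == UNWIND_FINISH || rc == UNWIND_ABORT;
}
```

The point of the change is visible in unwind_should_stop(): consumers no longer need to remember which negative errno means "done" versus "broken"; both simply terminate the walk, and the distinction stays available for callers that care.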
From: madvenka@linux.microsoft.com
To: broonie@kernel.org, mark.rutland@arm.com, jpoimboe@redhat.com,
 ardb@kernel.org, nobuta.keiya@fujitsu.com, sjitindarsingh@gmail.com,
 catalin.marinas@arm.com, will@kernel.org, jmorris@namei.org,
 pasha.tatashin@soleen.com, jthierry@redhat.com,
 linux-arm-kernel@lists.infradead.org, live-patching@vger.kernel.org,
 linux-kernel@vger.kernel.org, madvenka@linux.microsoft.com
Subject: [RFC PATCH v6 2/3] arm64: Introduce stack trace reliability checks
 in the unwinder
Date: Wed, 30 Jun 2021 17:33:55 -0500
Message-Id: <20210630223356.58714-3-madvenka@linux.microsoft.com>
In-Reply-To: <20210630223356.58714-1-madvenka@linux.microsoft.com>
References: <3f2aab69a35c243c5e97f47c4ad84046355f5b90>
 <20210630223356.58714-1-madvenka@linux.microsoft.com>

From: "Madhavan T. Venkataraman"

The unwinder should check for the presence of various features and
conditions that can render the stack trace unreliable. Introduce a
function unwind_check_frame() for this purpose.

Introduce the first reliability check in unwind_check_frame(): if a
return PC is not a valid kernel text address, consider the stack trace
unreliable. It could be some generated code.

Other reliability checks will be added in the future. If a reliability
check fails, it is a non-fatal error. Introduce a new return code,
UNWIND_CONTINUE_WITH_RISK, for non-fatal errors.

Call unwind_check_frame() from unwind_frame(). Also, call it from
start_backtrace() to remove the current assumption that the starting
frame is reliable.

Signed-off-by: Madhavan T. Venkataraman
---
 arch/arm64/include/asm/stacktrace.h |  4 +++-
 arch/arm64/kernel/stacktrace.c      | 17 ++++++++++++++++-
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index 6fcd58553fb1..d1625d55b980 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -32,6 +32,7 @@ struct stack_info {
 
 enum unwind_rc {
 	UNWIND_CONTINUE,		/* No errors encountered */
+	UNWIND_CONTINUE_WITH_RISK,	/* Non-fatal errors encountered */
 	UNWIND_ABORT,			/* Fatal errors encountered */
 	UNWIND_FINISH,			/* End of stack reached successfully */
 };
@@ -73,6 +74,7 @@ extern void walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
 			    bool (*fn)(void *, unsigned long), void *data);
 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
 			   const char *loglvl);
+extern enum unwind_rc unwind_check_frame(struct stackframe *frame);
 
 DECLARE_PER_CPU(unsigned long *, irq_stack_ptr);
 
@@ -176,7 +178,7 @@ static inline enum unwind_rc start_backtrace(struct stackframe *frame,
 	bitmap_zero(frame->stacks_done, __NR_STACK_TYPES);
 	frame->prev_fp = 0;
 	frame->prev_type = STACK_TYPE_UNKNOWN;
-	return UNWIND_CONTINUE;
+	return unwind_check_frame(frame);
 }
 
 #endif	/* __ASM_STACKTRACE_H */
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index e9c2c1fa9dde..ba7b97b119e4 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -18,6 +18,21 @@
 #include
 #include
 
+/*
+ * Check the stack frame for conditions that make unwinding unreliable.
+ */
+enum unwind_rc unwind_check_frame(struct stackframe *frame)
+{
+	/*
+	 * If the PC is not a known kernel text address, then we cannot
+	 * be sure that a subsequent unwind will be reliable, as we
+	 * don't know that the code follows our unwind requirements.
+	 */
+	if (!__kernel_text_address(frame->pc))
+		return UNWIND_CONTINUE_WITH_RISK;
+	return UNWIND_CONTINUE;
+}
+
 /*
  * AArch64 PCS assigns the frame pointer to x29.
  *
@@ -109,7 +124,7 @@ enum unwind_rc notrace unwind_frame(struct task_struct *tsk,
 
 	frame->pc = ptrauth_strip_insn_pac(frame->pc);
 
-	return UNWIND_CONTINUE;
+	return unwind_check_frame(frame);
 }
 NOKPROBE_SYMBOL(unwind_frame);

From patchwork Wed Jun 30 22:33:56 2021
X-Patchwork-Submitter: "Madhavan T. Venkataraman"
X-Patchwork-Id: 12352877
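[Editorial note between patches 2 and 3] The non-fatal check that patch 2/3 adds can be mimicked in user space. In this sketch, __kernel_text_address() is replaced by a comparison against a hypothetical fixed text range (text_start/text_end are invented values), so it demonstrates only the control flow: an unrecognized PC downgrades the trace to "risky" instead of aborting it:

```c
#include <assert.h>

/* Mirror of the patch's enum, including the new non-fatal code. */
enum unwind_rc {
	UNWIND_CONTINUE,
	UNWIND_CONTINUE_WITH_RISK,	/* non-fatal: keep going, mark unreliable */
	UNWIND_ABORT,
	UNWIND_FINISH,
};

/* Hypothetical kernel text range standing in for __kernel_text_address(). */
static const unsigned long text_start = 0x1000, text_end = 0x9000;

static int in_kernel_text(unsigned long pc)
{
	return pc >= text_start && pc < text_end;
}

/* Shape of unwind_check_frame(): an unknown PC is a risk, not an abort. */
enum unwind_rc check_frame_pc(unsigned long pc)
{
	if (!in_kernel_text(pc))
		return UNWIND_CONTINUE_WITH_RISK;
	return UNWIND_CONTINUE;
}
```

Note the design choice the patch makes: reliability failures do not stop the walk. Consumers that only need a best-effort trace treat UNWIND_CONTINUE_WITH_RISK like UNWIND_CONTINUE, while a future reliable-stacktrace consumer (e.g. livepatch) can refuse the trace.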
From: madvenka@linux.microsoft.com
To: broonie@kernel.org, mark.rutland@arm.com, jpoimboe@redhat.com,
 ardb@kernel.org, nobuta.keiya@fujitsu.com, sjitindarsingh@gmail.com,
 catalin.marinas@arm.com, will@kernel.org, jmorris@namei.org,
 pasha.tatashin@soleen.com, jthierry@redhat.com,
 linux-arm-kernel@lists.infradead.org, live-patching@vger.kernel.org,
 linux-kernel@vger.kernel.org, madvenka@linux.microsoft.com
Subject: [RFC PATCH v6 3/3] arm64: Create a list of SYM_CODE functions,
 check return PC against list
Date: Wed, 30 Jun 2021 17:33:56 -0500
Message-Id: <20210630223356.58714-4-madvenka@linux.microsoft.com>
In-Reply-To: <20210630223356.58714-1-madvenka@linux.microsoft.com>
References: <3f2aab69a35c243c5e97f47c4ad84046355f5b90>
 <20210630223356.58714-1-madvenka@linux.microsoft.com>

From: "Madhavan T. Venkataraman"

The unwinder should check if the return PC falls in any function that
is considered unreliable from an unwinding perspective. If it does,
return UNWIND_CONTINUE_WITH_RISK.

Function types
==============

The compiler generates code for C functions and assigns the type
STT_FUNC to them. Assembly functions are manually assigned a type:

	- STT_FUNC for functions defined with SYM_FUNC*() macros

	- STT_NONE for functions defined with SYM_CODE*() macros

In the future, STT_FUNC functions will be analyzed by objtool and
"fixed" as necessary. So, they are not "interesting" to the reliable
unwinder in the kernel.

That leaves SYM_CODE*() functions. These contain low-level code that
is difficult or impossible for objtool to analyze. So, objtool ignores
them, leaving them to the reliable unwinder. These functions must be
considered unreliable from an unwinding perspective.
Define a special section for unreliable functions
=================================================

Define a SYM_CODE_END() macro for arm64 that adds the function address
range to a new section called "sym_code_functions".

Linker file
===========

Include the "sym_code_functions" section under initdata in
vmlinux.lds.S.

Initialization
==============

Define an early_initcall() to copy the function address ranges from the
"sym_code_functions" section to an array by the same name.

Unwinder check
==============

Add a reliability check in unwind_check_frame() that compares a return
PC with sym_code_functions[]. If there is a match, then return
UNWIND_CONTINUE_WITH_RISK.

Signed-off-by: Madhavan T. Venkataraman
---
 arch/arm64/include/asm/linkage.h  |  12 ++++
 arch/arm64/include/asm/sections.h |   1 +
 arch/arm64/kernel/stacktrace.c    | 112 ++++++++++++++++++++++++++++++
 arch/arm64/kernel/vmlinux.lds.S   |   7 ++
 4 files changed, 132 insertions(+)

diff --git a/arch/arm64/include/asm/linkage.h b/arch/arm64/include/asm/linkage.h
index ba89a9af820a..3b5f1fd332b0 100644
--- a/arch/arm64/include/asm/linkage.h
+++ b/arch/arm64/include/asm/linkage.h
@@ -60,4 +60,16 @@
 	SYM_FUNC_END(x);		\
 	SYM_FUNC_END_ALIAS(__pi_##x)
 
+/*
+ * Record the address range of each SYM_CODE function in a struct code_range
+ * in a special section.
+ */
+#define SYM_CODE_END(name)				\
+	SYM_END(name, SYM_T_NONE)			;\
+99:							;\
+	.pushsection "sym_code_functions", "aw"		;\
+	.quad	name					;\
+	.quad	99b					;\
+	.popsection
+
 #endif
diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
index 2f36b16a5b5d..29cb566f65ec 100644
--- a/arch/arm64/include/asm/sections.h
+++ b/arch/arm64/include/asm/sections.h
@@ -20,5 +20,6 @@ extern char __exittext_begin[], __exittext_end[];
 extern char __irqentry_text_start[], __irqentry_text_end[];
 extern char __mmuoff_data_start[], __mmuoff_data_end[];
 extern char __entry_tramp_text_start[], __entry_tramp_text_end[];
+extern char __sym_code_functions_start[], __sym_code_functions_end[];
 
 #endif /* __ASM_SECTIONS_H */
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index ba7b97b119e4..5d5728c3088e 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -18,11 +18,43 @@
 #include
 #include
 
+struct code_range {
+	unsigned long start;
+	unsigned long end;
+};
+
+static struct code_range *sym_code_functions;
+static int num_sym_code_functions;
+
+int __init init_sym_code_functions(void)
+{
+	size_t size;
+
+	size = (unsigned long)__sym_code_functions_end -
+	       (unsigned long)__sym_code_functions_start;
+
+	sym_code_functions = kmalloc(size, GFP_KERNEL);
+	if (!sym_code_functions)
+		return -ENOMEM;
+
+	memcpy(sym_code_functions, __sym_code_functions_start, size);
+	/* Update num_sym_code_functions after copying sym_code_functions. */
+	smp_mb();
+	num_sym_code_functions = size / sizeof(struct code_range);
+
+	return 0;
+}
+early_initcall(init_sym_code_functions);
+
 /*
  * Check the stack frame for conditions that make unwinding unreliable.
  */
 enum unwind_rc unwind_check_frame(struct stackframe *frame)
 {
+	const struct code_range *range;
+	unsigned long pc;
+	int i;
+
 	/*
 	 * If the PC is not a known kernel text address, then we cannot
 	 * be sure that a subsequent unwind will be reliable, as we
@@ -30,6 +62,86 @@ enum unwind_rc unwind_check_frame(struct stackframe *frame)
 	 */
 	if (!__kernel_text_address(frame->pc))
 		return UNWIND_CONTINUE_WITH_RISK;
+
+	/*
+	 * If the final frame has been reached, there is no more unwinding
+	 * to do. There is no need to check if the return PC is considered
+	 * unreliable by the unwinder.
+	 */
+	if (!frame->fp)
+		return UNWIND_CONTINUE;
+
+	/*
+	 * Check the return PC against sym_code_functions[]. If there is a
+	 * match, then consider the stack frame unreliable. These functions
+	 * contain low-level code where the frame pointer and/or the return
+	 * address register cannot be relied upon. This addresses the
+	 * following situations:
+	 *
+	 *	- Exception handlers and entry assembly
+	 *	- Trampoline assembly (e.g., ftrace, kprobes)
+	 *	- Hypervisor-related assembly
+	 *	- Hibernation-related assembly
+	 *	- CPU start-stop, suspend-resume assembly
+	 *	- Kernel relocation assembly
+	 *
+	 * Some special cases covered by sym_code_functions[] deserve a
+	 * mention here:
+	 *
+	 *	- All EL1 interrupt and exception stack traces will be
+	 *	  considered unreliable. This is the correct behavior as
+	 *	  interrupts and exceptions can happen on any instruction,
+	 *	  including ones in the frame pointer prolog and epilog.
+	 *	  Unless stack metadata is available so the unwinder can
+	 *	  unwind through these special cases, such stack traces
+	 *	  will be considered unreliable.
+	 *
+	 *	- A task can get preempted at the end of an interrupt. Stack
+	 *	  traces of preempted tasks will show the interrupt frame in
+	 *	  the stack trace and will be considered unreliable.
+	 *
+	 *	- Breakpoints are exceptions. So, all stack traces in the
+	 *	  breakpoint handler (including probes) will be considered
+	 *	  unreliable.
+	 *
+	 *	- All of the ftrace entry trampolines are considered
+	 *	  unreliable. So, all stack traces taken from tracer
+	 *	  functions will be considered unreliable.
+	 *
+	 *	- The Function Graph Tracer return trampoline
+	 *	  (return_to_handler) and the Kretprobe return trampoline
+	 *	  (kretprobe_trampoline) are also considered unreliable.
+	 *
+	 * Some of the special cases above can be unwound through using
+	 * special logic in unwind_frame().
+	 *
+	 *	- return_to_handler() is handled by the unwinder by
+	 *	  attempting to retrieve the original return address from
+	 *	  the per-task return address stack.
+	 *
+	 *	- kretprobe_trampoline() can be handled in a similar fashion
+	 *	  by attempting to retrieve the original return address from
+	 *	  the per-task kretprobe instance list.
+	 *
+	 *	- I reckon optprobes can be handled in a similar fashion in
+	 *	  the future?
+	 *
+	 *	- Stack traces taken from the ftrace tracer functions can be
+	 *	  handled as well. ftrace_call is an inner label defined in
+	 *	  the ftrace entry trampoline. This is the location where the
+	 *	  call to a tracer function is patched. So, if the return PC
+	 *	  equals ftrace_call+4, it is reliable. At that point, proper
+	 *	  stack frames have already been set up for the traced
+	 *	  function and its caller.
+	 *
+	 * NOTE:
+	 *	If sym_code_functions[] were sorted, a binary search could
+	 *	be done to make this more performant.
+	 */
+	pc = frame->pc;
+	for (i = 0; i < num_sym_code_functions; i++) {
+		range = &sym_code_functions[i];
+		if (pc >= range->start && pc < range->end)
+			return UNWIND_CONTINUE_WITH_RISK;
+	}
 	return UNWIND_CONTINUE;
 }
 
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 7eea7888bb02..ee203f7ca084 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -103,6 +103,12 @@ jiffies = jiffies_64;
 #define TRAMP_TEXT
 #endif
 
+#define SYM_CODE_FUNCTIONS				\
+	. = ALIGN(16);					\
+	__sym_code_functions_start = .;			\
+	KEEP(*(sym_code_functions))			\
+	__sym_code_functions_end = .;
+
 /*
  * The size of the PE/COFF section that covers the kernel image, which
  * runs from _stext to _edata, must be a round multiple of the PE/COFF
@@ -218,6 +224,7 @@ SECTIONS
 		CON_INITCALL
 		INIT_RAM_FS
 		*(.init.altinstructions .init.bss)	/* from the EFI stub */
+		SYM_CODE_FUNCTIONS
 	}
 	.exit.data : {
 		EXIT_DATA
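[Editorial note] The sym_code_functions[] lookup that patch 3/3 adds to unwind_check_frame() amounts to a linear scan over half-open [start, end) address ranges. The following user-space sketch uses a hypothetical hard-coded table in place of the section populated by SYM_CODE_END() and init_sym_code_functions():

```c
#include <assert.h>
#include <stddef.h>

/* Mirror of the patch's struct code_range: [start, end) of one SYM_CODE function. */
struct code_range {
	unsigned long start;
	unsigned long end;
};

/* Hypothetical table; the kernel copies this from the "sym_code_functions" section. */
static const struct code_range sym_code_functions[] = {
	{ 0x1000, 0x1080 },
	{ 0x2400, 0x2500 },
};
static const size_t num_sym_code_functions =
	sizeof(sym_code_functions) / sizeof(sym_code_functions[0]);

/* Linear scan, as in the patch; sorting the table would permit binary search. */
static int pc_is_sym_code(unsigned long pc)
{
	for (size_t i = 0; i < num_sym_code_functions; i++) {
		const struct code_range *range = &sym_code_functions[i];
		if (pc >= range->start && pc < range->end)
			return 1;	/* unreliable: UNWIND_CONTINUE_WITH_RISK */
	}
	return 0;			/* not a SYM_CODE function */
}
```

The half-open interval matters: `end` is the address of the `99:` label placed after SYM_END(), i.e. one past the last byte of the function, so a return PC exactly at `end` belongs to the next symbol and must not match.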