From patchwork Tue Nov 23 19:37:22 2021
X-Patchwork-Submitter: "Madhavan T. Venkataraman"
X-Patchwork-Id: 12693499
From: madvenka@linux.microsoft.com
To: mark.rutland@arm.com, broonie@kernel.org, jpoimboe@redhat.com,
	ardb@kernel.org, nobuta.keiya@fujitsu.com, sjitindarsingh@gmail.com,
	catalin.marinas@arm.com, will@kernel.org, jmorris@namei.org,
	linux-arm-kernel@lists.infradead.org, live-patching@vger.kernel.org,
	linux-kernel@vger.kernel.org, madvenka@linux.microsoft.com
Subject: [PATCH v11 4/5] arm64: Introduce stack trace reliability checks in
 the unwinder
Date: Tue, 23 Nov 2021 13:37:22 -0600
Message-Id: <20211123193723.12112-5-madvenka@linux.microsoft.com>
In-Reply-To: <20211123193723.12112-1-madvenka@linux.microsoft.com>
References: <8b861784d85a21a9bf08598938c11aff1b1249b9>
	<20211123193723.12112-1-madvenka@linux.microsoft.com>

From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>

There are some kernel features and conditions that make a stack trace
unreliable. Callers may require the unwinder to detect these cases.
E.g., livepatch.
Introduce a new function called unwind_check_reliability() that will
detect these cases and set a flag in the stack frame. Call
unwind_check_reliability() for every frame, that is, in unwind_start()
and unwind_next().

Introduce the first reliability check in unwind_check_reliability() -
if a return PC is not a valid kernel text address, consider the stack
trace unreliable. It could be some generated code. Other reliability
checks will be added in the future.

Let unwind() return a boolean to indicate if the stack trace is
reliable.

Introduce arch_stack_walk_reliable() for ARM64. This works like
arch_stack_walk() except that it returns -EINVAL if the stack trace is
not reliable.

Until all the reliability checks are in place, arch_stack_walk_reliable()
may not be used by livepatch. But it may be used by debug and test code.

Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
---
 arch/arm64/include/asm/stacktrace.h |  3 ++
 arch/arm64/kernel/stacktrace.c      | 59 +++++++++++++++++++++++++++--
 2 files changed, 58 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index d838586adef9..7143e80c3d96 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -53,6 +53,8 @@ struct stack_info {
  *               value.
  *
  * @failed:      Unwind failed.
+ *
+ * @reliable:    Stack trace is reliable.
  */
 struct stackframe {
 	unsigned long fp;
@@ -64,6 +66,7 @@ struct stackframe {
 	struct llist_node *kr_cur;
 #endif
 	bool failed;
+	bool reliable;
 };
 
 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 3b670ab1f0e9..77eb00e45558 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -18,6 +18,26 @@
 #include
 #include
 
+/*
+ * Check the stack frame for conditions that make further unwinding
+ * unreliable.
+ */
+static void unwind_check_reliability(struct task_struct *task,
+				     struct stackframe *frame)
+{
+	if (frame->fp == (unsigned long)task_pt_regs(task)->stackframe) {
+		/* Final frame; no more unwind, no need to check reliability */
+		return;
+	}
+
+	/*
+	 * If the PC is not a known kernel text address, then we cannot
+	 * be sure that a subsequent unwind will be reliable, as we
+	 * don't know that the code follows our unwind requirements.
+	 */
+	if (!__kernel_text_address(frame->pc))
+		frame->reliable = false;
+}
+
 /*
  * AArch64 PCS assigns the frame pointer to x29.
  *
@@ -33,8 +53,9 @@
  */
-static void unwind_start(struct stackframe *frame, unsigned long fp,
-			 unsigned long pc)
+static void unwind_start(struct task_struct *task,
+			 struct stackframe *frame,
+			 unsigned long fp, unsigned long pc)
 {
 	frame->fp = fp;
 	frame->pc = pc;
@@ -55,6 +76,8 @@ static void unwind_start(struct stackframe *frame, unsigned long fp,
 	frame->prev_fp = 0;
 	frame->prev_type = STACK_TYPE_UNKNOWN;
 	frame->failed = false;
+	frame->reliable = true;
+	unwind_check_reliability(task, frame);
 }
 
 /*
@@ -141,6 +164,7 @@ static void notrace unwind_next(struct task_struct *tsk,
 	if (is_kretprobe_trampoline(frame->pc))
 		frame->pc = kretprobe_find_ret_addr(tsk, (void *)frame->fp, &frame->kr_cur);
 #endif
+	unwind_check_reliability(tsk, frame);
 }
 NOKPROBE_SYMBOL(unwind_next);
 
@@ -166,15 +190,16 @@ static bool unwind_continue(struct task_struct *task,
 	return true;
 }
 
-static void notrace unwind(struct task_struct *tsk,
+static bool notrace unwind(struct task_struct *tsk,
 			   unsigned long fp, unsigned long pc,
 			   bool (*fn)(void *, unsigned long), void *data)
 {
 	struct stackframe frame;
 
-	unwind_start(&frame, fp, pc);
+	unwind_start(tsk, &frame, fp, pc);
 	while (unwind_continue(tsk, &frame, fn, data))
 		unwind_next(tsk, &frame);
+	return !frame.failed && frame.reliable;
 }
 NOKPROBE_SYMBOL(unwind);
 
@@ -231,3 +256,29 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
 	}
 
 	unwind(task, fp, pc, consume_entry, cookie);
 }
+
+/*
+ * arch_stack_walk_reliable() may not be used for livepatch until all of
+ * the reliability checks are in place in unwind_consume(). However,
+ * debug and test code can choose to use it even if all the checks are not
+ * in place.
+ */
+noinline int notrace arch_stack_walk_reliable(stack_trace_consume_fn consume_fn,
+					      void *cookie,
+					      struct task_struct *task)
+{
+	unsigned long fp, pc;
+
+	if (task == current) {
+		/* Skip arch_stack_walk_reliable() in the stack trace. */
+		fp = (unsigned long)__builtin_frame_address(1);
+		pc = (unsigned long)__builtin_return_address(0);
+	} else {
+		/* Caller guarantees that the task is not running. */
+		fp = thread_saved_fp(task);
+		pc = thread_saved_pc(task);
+	}
+	if (unwind(task, fp, pc, consume_fn, cookie))
+		return 0;
+	return -EINVAL;
+}