From patchwork Fri Oct 15 02:34:05 2021
X-Patchwork-Submitter: "Madhavan T. Venkataraman"
X-Patchwork-Id: 12559745
From: madvenka@linux.microsoft.com
To: mark.rutland@arm.com, broonie@kernel.org, jpoimboe@redhat.com,
    ardb@kernel.org, nobuta.keiya@fujitsu.com, sjitindarsingh@gmail.com,
    catalin.marinas@arm.com, will@kernel.org, jmorris@namei.org,
    linux-arm-kernel@lists.infradead.org, live-patching@vger.kernel.org,
    linux-kernel@vger.kernel.org, madvenka@linux.microsoft.com
Subject: [PATCH v9 11/11] arm64: Create a list of SYM_CODE functions, check
 return PC against list
Date: Thu, 14 Oct 2021 21:34:05 -0500
Message-Id: <20211015023413.16614-4-madvenka@linux.microsoft.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20211015023413.16614-1-madvenka@linux.microsoft.com>
References: <20211015023413.16614-1-madvenka@linux.microsoft.com>
From: "Madhavan T. Venkataraman"

SYM_CODE functions don't follow the usual calling conventions. Check whether
the return PC in a stack frame falls within any of these functions. If it
does, consider the stack trace unreliable.

Define a special section for unreliable functions
=================================================

Define a SYM_CODE_END() macro for arm64 that adds the function address range
to a new section called "sym_code_functions".

Linker file
===========

Include the "sym_code_functions" section under read-only data in
vmlinux.lds.S.

Initialization
==============

Define an early_initcall() to create a sym_code_functions[] array from the
linker data.

Unwinder check
==============

Add a reliability check in unwind_check_reliability() that compares a return
PC with sym_code_functions[]. If there is a match, mark the stack trace
unreliable.

Signed-off-by: Madhavan T. Venkataraman
---
 arch/arm64/include/asm/linkage.h  | 12 +++++++
 arch/arm64/include/asm/sections.h |  1 +
 arch/arm64/kernel/stacktrace.c    | 55 +++++++++++++++++++++++++++++++
 arch/arm64/kernel/vmlinux.lds.S   | 10 ++++++
 4 files changed, 78 insertions(+)

diff --git a/arch/arm64/include/asm/linkage.h b/arch/arm64/include/asm/linkage.h
index 9906541a6861..616bad74e297 100644
--- a/arch/arm64/include/asm/linkage.h
+++ b/arch/arm64/include/asm/linkage.h
@@ -68,4 +68,16 @@
 	SYM_FUNC_END_ALIAS(x);				\
 	SYM_FUNC_END_ALIAS(__pi_##x)
 
+/*
+ * Record the address range of each SYM_CODE function in a struct code_range
+ * in a special section.
+ */
+#define SYM_CODE_END(name)				\
+	SYM_END(name, SYM_T_NONE)			;\
+99:							;\
+	.pushsection "sym_code_functions", "aw"		;\
+	.quad	name					;\
+	.quad	99b					;\
+	.popsection
+
 #endif
diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
index e4ad9db53af1..c84c71063d6e 100644
--- a/arch/arm64/include/asm/sections.h
+++ b/arch/arm64/include/asm/sections.h
@@ -21,5 +21,6 @@ extern char __exittext_begin[], __exittext_end[];
 extern char __irqentry_text_start[], __irqentry_text_end[];
 extern char __mmuoff_data_start[], __mmuoff_data_end[];
 extern char __entry_tramp_text_start[], __entry_tramp_text_end[];
+extern char __sym_code_functions_start[], __sym_code_functions_end[];
 
 #endif /* __ASM_SECTIONS_H */
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 142f08ae515f..40e5af7e5b1d 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -18,11 +18,40 @@
 #include
 #include
 
+struct code_range {
+	unsigned long	start;
+	unsigned long	end;
+};
+
+static struct code_range	*sym_code_functions;
+static int			num_sym_code_functions;
+
+int __init init_sym_code_functions(void)
+{
+	size_t size = (unsigned long)__sym_code_functions_end -
+		      (unsigned long)__sym_code_functions_start;
+
+	sym_code_functions = (struct code_range *)__sym_code_functions_start;
+	/*
+	 * Order it so that sym_code_functions is not visible before
+	 * num_sym_code_functions.
+	 */
+	smp_mb();
+	num_sym_code_functions = size / sizeof(struct code_range);
+
+	return 0;
+}
+early_initcall(init_sym_code_functions);
+
 /*
  * Check the stack frame for conditions that make further unwinding unreliable.
  */
 static void notrace unwind_check_reliability(struct stackframe *frame)
 {
+	const struct code_range *range;
+	unsigned long pc;
+	int i;
+
 	/*
 	 * If the PC is not a known kernel text address, then we cannot
 	 * be sure that a subsequent unwind will be reliable, as we
@@ -30,6 +59,32 @@ static void notrace unwind_check_reliability(struct stackframe *frame)
 	 */
 	if (!__kernel_text_address(frame->pc))
 		frame->reliable = false;
+
+	/*
+	 * Check the return PC against sym_code_functions[]. If there is a
+	 * match, then consider the stack frame unreliable.
+	 *
+	 * As SYM_CODE functions don't follow the usual calling conventions,
+	 * we assume by default that any SYM_CODE function cannot be unwound
+	 * reliably.
+	 *
+	 * Note that this includes:
+	 *
+	 *	- Exception handlers and entry assembly
+	 *	- Trampoline assembly (e.g., ftrace, kprobes)
+	 *	- Hypervisor-related assembly
+	 *	- Hibernation-related assembly
+	 *	- CPU start-stop, suspend-resume assembly
+	 *	- Kernel relocation assembly
+	 */
+	pc = frame->pc;
+	for (i = 0; i < num_sym_code_functions; i++) {
+		range = &sym_code_functions[i];
+		if (pc >= range->start && pc < range->end) {
+			frame->reliable = false;
+			return;
+		}
+	}
 }
 NOKPROBE_SYMBOL(unwind_check_reliability);
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 709d2c433c5e..2bf769f45b54 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -111,6 +111,14 @@ jiffies = jiffies_64;
 #define TRAMP_TEXT
 #endif
 
+#define SYM_CODE_FUNCTIONS					\
+	. = ALIGN(16);						\
+	.symcode : AT(ADDR(.symcode) - LOAD_OFFSET) {		\
+		__sym_code_functions_start = .;			\
+		KEEP(*(sym_code_functions))			\
+		__sym_code_functions_end = .;			\
+	}
+
 /*
  * The size of the PE/COFF section that covers the kernel image, which
  * runs from _stext to _edata, must be a round multiple of the PE/COFF
@@ -196,6 +204,8 @@ SECTIONS
 	swapper_pg_dir = .;
 	. += PAGE_SIZE;
 
+	SYM_CODE_FUNCTIONS
+
 	. = ALIGN(SEGMENT_ALIGN);
 	__init_begin = .;
 	__inittext_begin = .;
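
As an editorial illustration (not part of the diff above): with the new
SYM_CODE_END() macro, any annotated SYM_CODE function has its address range
recorded automatically. A minimal sketch, using a hypothetical function name
my_trampoline, of roughly what the annotation and its expansion produce:

	SYM_CODE_START(my_trampoline)		// hypothetical SYM_CODE function
	ret
	SYM_CODE_END(my_trampoline)

	// SYM_CODE_END(my_trampoline) expands to approximately:
	SYM_END(my_trampoline, SYM_T_NONE)
99:						// local label marking the end of the function
	.pushsection	"sym_code_functions", "aw"
	.quad	my_trampoline			// start of the code range
	.quad	99b				// end of the code range
	.popsection

At boot, init_sym_code_functions() points sym_code_functions at the
linker-collected "sym_code_functions" section, so each such pair becomes one
struct code_range that unwind_check_reliability() compares against the
return PC.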