From patchwork Wed May 26 21:49:16 2021
From: madvenka@linux.microsoft.com
To: broonie@kernel.org, mark.rutland@arm.com, jpoimboe@redhat.com,
    ardb@kernel.org, nobuta.keiya@fujitsu.com, catalin.marinas@arm.com,
    will@kernel.org, jmorris@namei.org, pasha.tatashin@soleen.com,
    jthierry@redhat.com, linux-arm-kernel@lists.infradead.org,
    live-patching@vger.kernel.org,
    linux-kernel@vger.kernel.org, madvenka@linux.microsoft.com
Subject: [RFC PATCH v5 1/2] arm64: Introduce stack trace reliability checks in the unwinder
Date: Wed, 26 May 2021 16:49:16 -0500
Message-Id: <20210526214917.20099-2-madvenka@linux.microsoft.com>
In-Reply-To: <20210526214917.20099-1-madvenka@linux.microsoft.com>
References: <20210526214917.20099-1-madvenka@linux.microsoft.com>
From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>

The unwinder should check for the presence of various features and
conditions that can render the stack trace unreliable, and mark the
stack trace as unreliable for the benefit of the caller.

Introduce the first reliability check: if a return PC is not a valid
kernel text address, consider the stack trace unreliable. The PC could
belong to some generated code.

Other reliability checks will be added in the future.
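
For illustration only, a caller could eventually consume the flag along
these lines, using the existing start_backtrace()/unwind_frame()
interfaces. This helper is hypothetical and not part of this patch:

/*
 * Hypothetical consumer of frame->reliable (illustration only, not part
 * of this patch). It walks a blocked task's stack and fails if any frame
 * is flagged as unreliable.
 */
static int walk_stack_reliable(struct task_struct *tsk,
			       bool (*fn)(void *data, unsigned long pc),
			       void *data)
{
	struct stackframe frame;
	int ret;

	start_backtrace(&frame, thread_saved_fp(tsk), thread_saved_pc(tsk));

	do {
		if (!frame.reliable)
			return -EINVAL;
		if (!fn(data, frame.pc))
			return -EINVAL;
		ret = unwind_frame(tsk, &frame);
	} while (!ret);

	/* -ENOENT means the final frame was reached cleanly. */
	return ret == -ENOENT ? 0 : -EINVAL;
}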
Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
---
 arch/arm64/include/asm/stacktrace.h |  9 +++++++
 arch/arm64/kernel/stacktrace.c      | 38 +++++++++++++++++++++++++----
 2 files changed, 42 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index eb29b1fe8255..4c822ef7f588 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -49,6 +49,13 @@ struct stack_info {
  *
  * @graph: When FUNCTION_GRAPH_TRACER is selected, holds the index of a
  *         replacement lr value in the ftrace graph stack.
+ *
+ * @reliable: Is this stack frame reliable? There are several checks that
+ *            need to be performed in unwind_frame() before a stack frame
+ *            is truly reliable. Until all the checks are present, this flag
+ *            is just a place holder. Once all the checks are implemented,
+ *            this comment will be updated and the flag can be used by the
+ *            caller of unwind_frame().
  */
 struct stackframe {
 	unsigned long fp;
@@ -59,6 +66,7 @@ struct stackframe {
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	int graph;
 #endif
+	bool reliable;
 };

 extern int unwind_frame(struct task_struct *tsk, struct stackframe *frame);
@@ -169,6 +177,7 @@ static inline void start_backtrace(struct stackframe *frame,
 	bitmap_zero(frame->stacks_done, __NR_STACK_TYPES);
 	frame->prev_fp = 0;
 	frame->prev_type = STACK_TYPE_UNKNOWN;
+	frame->reliable = true;
 }

 #endif	/* __ASM_STACKTRACE_H */
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index d55bdfb7789c..9061375c8785 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -44,21 +44,29 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	unsigned long fp = frame->fp;
 	struct stack_info info;

+	frame->reliable = true;
+
 	/* Terminal record; nothing to unwind */
 	if (!fp)
 		return -ENOENT;

-	if (fp & 0xf)
+	if (fp & 0xf) {
+		frame->reliable = false;
 		return -EINVAL;
+	}

 	if (!tsk)
 		tsk = current;

-	if (!on_accessible_stack(tsk, fp, &info))
+	if (!on_accessible_stack(tsk, fp, &info)) {
+		frame->reliable = false;
 		return -EINVAL;
+	}

-	if (test_bit(info.type, frame->stacks_done))
+	if (test_bit(info.type, frame->stacks_done)) {
+		frame->reliable = false;
 		return -EINVAL;
+	}

 	/*
 	 * As stacks grow downward, any valid record on the same stack must be
@@ -74,8 +82,10 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	 * stack.
 	 */
 	if (info.type == frame->prev_type) {
-		if (fp <= frame->prev_fp)
+		if (fp <= frame->prev_fp) {
+			frame->reliable = false;
 			return -EINVAL;
+		}
 	} else {
 		set_bit(frame->prev_type, frame->stacks_done);
 	}
@@ -100,14 +110,32 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	 * So replace it to an original value.
 	 */
 	ret_stack = ftrace_graph_get_ret_stack(tsk, frame->graph++);
-	if (WARN_ON_ONCE(!ret_stack))
+	if (WARN_ON_ONCE(!ret_stack)) {
+		frame->reliable = false;
 		return -EINVAL;
+	}
 	frame->pc = ret_stack->ret;
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */

 	frame->pc = ptrauth_strip_insn_pac(frame->pc);

+	/*
+	 * Check the return PC for conditions that make unwinding unreliable.
+	 * In each case, mark the stack trace as such.
+	 */
+
+	/*
+	 * Make sure that the return address is a proper kernel text address.
+	 * A NULL or invalid return address could mean:
+	 *
+	 *	- generated code such as eBPF and optprobe trampolines
+	 *	- Foreign code (e.g. EFI runtime services)
+	 *	- Procedure Linkage Table (PLT) entries and veneer functions
+	 */
+	if (!__kernel_text_address(frame->pc))
+		frame->reliable = false;
+
 	return 0;
 }
 NOKPROBE_SYMBOL(unwind_frame);

From patchwork Wed May 26 21:49:17 2021
From: madvenka@linux.microsoft.com
To: broonie@kernel.org, mark.rutland@arm.com, jpoimboe@redhat.com,
    ardb@kernel.org, nobuta.keiya@fujitsu.com, catalin.marinas@arm.com,
    will@kernel.org, jmorris@namei.org, pasha.tatashin@soleen.com,
    jthierry@redhat.com, linux-arm-kernel@lists.infradead.org,
    live-patching@vger.kernel.org, linux-kernel@vger.kernel.org,
    madvenka@linux.microsoft.com
Subject: [RFC PATCH v5 2/2] arm64: Create a list of SYM_CODE functions, check return PC against list
Date: Wed, 26 May 2021 16:49:17 -0500
Message-Id: <20210526214917.20099-3-madvenka@linux.microsoft.com>
In-Reply-To: <20210526214917.20099-1-madvenka@linux.microsoft.com>
References: <20210526214917.20099-1-madvenka@linux.microsoft.com>
From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>

The unwinder should check if the return PC falls in any function that is
considered unreliable from an unwinding perspective. If it does, mark the
stack trace unreliable.

Function types
==============

The compiler generates code for C functions and assigns the type STT_FUNC
to them.

Assembly functions are manually assigned a type:

	- STT_FUNC for functions defined with SYM_FUNC*() macros
	- STT_NONE for functions defined with SYM_CODE*() macros

In the future, STT_FUNC functions will be analyzed by objtool and "fixed"
as necessary. So, they are not "interesting" to the reliable unwinder in
the kernel.

That leaves SYM_CODE*() functions. These contain low-level code that is
difficult or impossible for objtool to analyze. So, objtool ignores them,
leaving them to the reliable unwinder. These functions must be considered
unreliable from an unwinding perspective.

Define a special section for unreliable functions
=================================================

Define a SYM_CODE_END() macro for arm64 that adds the function address
range to a new section called "sym_code_functions".

Linker file
===========

Include the "sym_code_functions" section under initdata in vmlinux.lds.S.

Initialization
==============

Define an early_initcall() to copy the function address ranges from the
"sym_code_functions" section to an array by the same name.

Unwinder check
==============

Define a function called unwinder_is_unreliable() that compares a return
PC with sym_code_functions[]. If there is a match, then mark the stack
trace as unreliable. Call unwinder_is_unreliable() from unwind_frame().
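
As an aside, the comment in unwinder_is_unreliable() below notes that a
binary search could be used if sym_code_functions[] were sorted. A
hypothetical sketch of that alternative (illustration only, not part of
this patch), using the kernel's sort() and bsearch() helpers:

/* Hypothetical sorted-lookup variant (not part of this patch). */
#include <linux/bsearch.h>
#include <linux/sort.h>

static int cmp_code_range_start(const void *a, const void *b)
{
	const struct code_range *ra = a, *rb = b;

	return ra->start < rb->start ? -1 : ra->start > rb->start ? 1 : 0;
}

static int cmp_pc_code_range(const void *key, const void *elt)
{
	unsigned long pc = *(const unsigned long *)key;
	const struct code_range *range = elt;

	if (pc < range->start)
		return -1;
	if (pc >= range->end)
		return 1;
	return 0;
}

/* Would be called once from init_sym_code_functions() after the copy. */
static void sort_sym_code_functions(void)
{
	sort(sym_code_functions, num_sym_code_functions,
	     sizeof(struct code_range), cmp_code_range_start, NULL);
}

/* O(log n) replacement for the linear scan in unwinder_is_unreliable(). */
static bool unwinder_is_unreliable_sorted(unsigned long pc)
{
	return bsearch(&pc, sym_code_functions, num_sym_code_functions,
		       sizeof(struct code_range), cmp_pc_code_range) != NULL;
}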
Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
---
 arch/arm64/include/asm/linkage.h  |  12 +++
 arch/arm64/include/asm/sections.h |   1 +
 arch/arm64/kernel/stacktrace.c    | 118 +++++++++++++++++++++++++++++-
 arch/arm64/kernel/vmlinux.lds.S   |   7 ++
 4 files changed, 137 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/linkage.h b/arch/arm64/include/asm/linkage.h
index ba89a9af820a..3b5f1fd332b0 100644
--- a/arch/arm64/include/asm/linkage.h
+++ b/arch/arm64/include/asm/linkage.h
@@ -60,4 +60,16 @@
 	SYM_FUNC_END(x);		\
 	SYM_FUNC_END_ALIAS(__pi_##x)

+/*
+ * Record the address range of each SYM_CODE function in a struct code_range
+ * in a special section.
+ */
+#define SYM_CODE_END(name)				\
+	SYM_END(name, SYM_T_NONE)			;\
+99:							;\
+	.pushsection "sym_code_functions", "aw"		;\
+	.quad	name					;\
+	.quad	99b					;\
+	.popsection
+
 #endif
diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
index 2f36b16a5b5d..29cb566f65ec 100644
--- a/arch/arm64/include/asm/sections.h
+++ b/arch/arm64/include/asm/sections.h
@@ -20,5 +20,6 @@ extern char __exittext_begin[], __exittext_end[];
 extern char __irqentry_text_start[], __irqentry_text_end[];
 extern char __mmuoff_data_start[], __mmuoff_data_end[];
 extern char __entry_tramp_text_start[], __entry_tramp_text_end[];
+extern char __sym_code_functions_start[], __sym_code_functions_end[];

 #endif /* __ASM_SECTIONS_H */
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 9061375c8785..5477a9d39b12 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -18,6 +18,109 @@
 #include
 #include

+struct code_range {
+	unsigned long start;
+	unsigned long end;
+};
+
+static struct code_range *sym_code_functions;
+static int num_sym_code_functions;
+
+int __init init_sym_code_functions(void)
+{
+	size_t size;
+
+	size = (unsigned long)__sym_code_functions_end -
+	       (unsigned long)__sym_code_functions_start;
+
+	sym_code_functions = kmalloc(size, GFP_KERNEL);
+	if (!sym_code_functions)
+		return -ENOMEM;
+
+	memcpy(sym_code_functions, __sym_code_functions_start, size);
+	/* Update num_sym_code_functions after copying sym_code_functions. */
+	smp_mb();
+	num_sym_code_functions = size / sizeof(struct code_range);
+
+	return 0;
+}
+early_initcall(init_sym_code_functions);
+
+/*
+ * Check the return PC against sym_code_functions[]. If there is a match,
+ * then consider the stack frame unreliable. These functions contain
+ * low-level code where the frame pointer and/or the return address register
+ * cannot be relied upon. This addresses the following situations:
+ *
+ *	- Exception handlers and entry assembly
+ *	- Trampoline assembly (e.g., ftrace, kprobes)
+ *	- Hypervisor-related assembly
+ *	- Hibernation-related assembly
+ *	- CPU start-stop, suspend-resume assembly
+ *	- Kernel relocation assembly
+ *
+ * Some special cases covered by sym_code_functions[] deserve a mention here:
+ *
+ *	- All EL1 interrupt and exception stack traces will be considered
+ *	  unreliable. This is the correct behavior as interrupts and exceptions
+ *	  can happen on any instruction including ones in the frame pointer
+ *	  prolog and epilog. Unless stack metadata is available so the unwinder
+ *	  can unwind through these special cases, such stack traces will be
+ *	  considered unreliable.
+ *
+ *	- A task can get preempted at the end of an interrupt. Stack traces
+ *	  of preempted tasks will show the interrupt frame in the stack trace
+ *	  and will be considered unreliable.
+ *
+ *	- Breakpoints are exceptions. So, all stack traces in the breakpoint
+ *	  handler (including probes) will be considered unreliable.
+ *
+ *	- All of the ftrace entry trampolines are considered unreliable. So,
+ *	  all stack traces taken from tracer functions will be considered
+ *	  unreliable.
+ *
+ *	- The Function Graph Tracer return trampoline (return_to_handler)
+ *	  and the Kretprobe return trampoline (kretprobe_trampoline) are
+ *	  also considered unreliable.
+ *
+ * Some of the special cases above can be unwound through by using special
+ * logic in unwind_frame().
+ *
+ *	- return_to_handler() is handled by the unwinder by attempting to
+ *	  retrieve the original return address from the per-task return
+ *	  address stack.
+ *
+ *	- kretprobe_trampoline() can be handled in a similar fashion by
+ *	  attempting to retrieve the original return address from the
+ *	  per-task kretprobe instance list.
+ *
+ *	- Optprobes could potentially be handled in a similar fashion in
+ *	  the future.
+ *
+ *	- Stack traces taken from the ftrace tracer functions can be handled
+ *	  as well. ftrace_call is an inner label defined in the ftrace entry
+ *	  trampoline. This is the location where the call to a tracer function
+ *	  is patched. So, if the return PC equals ftrace_call+4, it is
+ *	  reliable. At that point, proper stack frames have already been set
+ *	  up for the traced function and its caller.
+ */
+static bool unwinder_is_unreliable(unsigned long pc)
+{
+	const struct code_range *range;
+	int i;
+
+	/*
+	 * If sym_code_functions[] were sorted, a binary search could be
+	 * done to make this more performant.
+	 */
+	for (i = 0; i < num_sym_code_functions; i++) {
+		range = &sym_code_functions[i];
+		if (pc >= range->start && pc < range->end)
+			return true;
+	}
+
+	return false;
+}
+
 /*
  * AArch64 PCS assigns the frame pointer to x29.
  *
@@ -133,7 +236,20 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	 *	- Foreign code (e.g. EFI runtime services)
 	 *	- Procedure Linkage Table (PLT) entries and veneer functions
 	 */
-	if (!__kernel_text_address(frame->pc))
+	if (!__kernel_text_address(frame->pc)) {
+		frame->reliable = false;
+		return 0;
+	}
+
+	/*
+	 * If the final frame has been reached, there is no more unwinding
+	 * to do. There is no need to check if the return PC is considered
+	 * unreliable by the unwinder.
+	 */
+	if (!frame->fp)
+		return 0;
+
+	if (unwinder_is_unreliable(frame->pc))
 		frame->reliable = false;

 	return 0;
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 7eea7888bb02..32e8d57397a1 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -103,6 +103,12 @@ jiffies = jiffies_64;
 #define TRAMP_TEXT
 #endif

+#define SYM_CODE_FUNCTIONS				\
+	. = ALIGN(16);					\
+	__sym_code_functions_start = .;			\
+	KEEP(*(sym_code_functions))			\
+	__sym_code_functions_end = .;
+
 /*
  * The size of the PE/COFF section that covers the kernel image, which
  * runs from _stext to _edata, must be a round multiple of the PE/COFF
@@ -218,6 +224,7 @@ SECTIONS
 		CON_INITCALL
 		INIT_RAM_FS
 		*(.init.altinstructions .init.bss)	/* from the EFI stub */
+		SYM_CODE_FUNCTIONS
 	}
 	.exit.data : {
 		EXIT_DATA