From patchwork Tue Nov 22 16:36:24 2022
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 13052571
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, mark.rutland@arm.com, rostedt@goodmis.org,
	will@kernel.org
Subject: [PATCH] ftrace: arm64: remove static ftrace
Date: Tue, 22 Nov 2022 16:36:24 +0000
Message-Id: <20221122163624.1225912-1-mark.rutland@arm.com>
X-Mailer: git-send-email 2.30.2

The build test robot pointed out that there's a build failure when:

  CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=y
  CONFIG_DYNAMIC_FTRACE_WITH_ARGS=n

... due to some mismatched ifdeffery, some of which checks
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS, and some of which checks
CONFIG_DYNAMIC_FTRACE_WITH_ARGS, leading to some missing definitions expected
by the core code when CONFIG_DYNAMIC_FTRACE=n and consequently
CONFIG_DYNAMIC_FTRACE_WITH_ARGS=n.
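As a reduced illustration of that kind of mismatch (a hypothetical sketch,
not the actual arm64 or core ftrace code; ftrace_example_hook() is invented
for this example), a declaration guarded by the HAVE_* symbol paired with a
definition guarded by the non-HAVE symbol loses its definition as soon as
CONFIG_DYNAMIC_FTRACE=n:

  /* Hypothetical sketch only -- not the kernel's real ifdeffery. */
  #ifdef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
  /* Core code expects this hook whenever the HAVE_* option is set... */
  void ftrace_example_hook(unsigned long ip);
  #endif

  #ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
  /*
   * ...but the definition is only built under the non-HAVE option, which
   * is false whenever CONFIG_DYNAMIC_FTRACE=n, so the build fails.
   */
  void ftrace_example_hook(unsigned long ip) { }
  #endif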
There's really not much point in supporting CONFIG_DYNAMIC_FTRACE=n (AKA
static ftrace). All supported toolchains allow us to implement
DYNAMIC_FTRACE, distributions all prefer DYNAMIC_FTRACE, and both powerpc and
s390 removed support for static ftrace in commits:

  0c0c52306f4792a4 ("powerpc: Only support DYNAMIC_FTRACE not static")
  5d6a0163494c78ad ("s390/ftrace: enforce DYNAMIC_FTRACE if FUNCTION_TRACER is selected")

... and according to Steven, static ftrace is only supported on x86 to allow
testing that the core code still functions in this configuration.

Given that, let's simplify matters by removing arm64's support for static
ftrace. This avoids the problem originally reported, and leaves us with less
code to maintain.

Fixes: 26299b3f6ba26bfc ("ftrace: arm64: move from REGS to ARGS")
Link: https://lore.kernel.org/r/202211212249.livTPi3Y-lkp@intel.com
Suggested-by: Steven Rostedt
Signed-off-by: Mark Rutland
Cc: Will Deacon
Cc: Catalin Marinas
Acked-by: Steven Rostedt (Google)
---
 arch/arm64/Kconfig               |  1 +
 arch/arm64/kernel/entry-ftrace.S | 39 --------------------------------
 arch/arm64/kernel/ftrace.c       |  5 ----
 3 files changed, 1 insertion(+), 44 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b6b3305ba7013..5553a734123eb 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -117,6 +117,7 @@ config ARM64
 	select CPU_PM if (SUSPEND || CPU_IDLE)
 	select CRC32
 	select DCACHE_WORD_ACCESS
+	select DYNAMIC_FTRACE if FUNCTION_TRACER
 	select DMA_DIRECT_REMAP
 	select EDAC_SUPPORT
 	select FRAME_POINTER
diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S
index 4d3050549aa6e..30cc2a9d1757a 100644
--- a/arch/arm64/kernel/entry-ftrace.S
+++ b/arch/arm64/kernel/entry-ftrace.S
@@ -170,44 +170,6 @@ SYM_CODE_END(ftrace_caller)
 	add	\reg, \reg, #8
 	.endm

-#ifndef CONFIG_DYNAMIC_FTRACE
-/*
- * void _mcount(unsigned long return_address)
- * @return_address: return address to instrumented function
- *
- * This function makes calls, if enabled, to:
- *     - tracer function to probe instrumented function's entry,
- *     - ftrace_graph_caller to set up an exit hook
- */
-SYM_FUNC_START(_mcount)
-	mcount_enter
-
-	ldr_l	x2, ftrace_trace_function
-	adr	x0, ftrace_stub
-	cmp	x0, x2			// if (ftrace_trace_function
-	b.eq	skip_ftrace_call	//     != ftrace_stub) {
-
-	mcount_get_pc	x0		//       function's pc
-	mcount_get_lr	x1		//       function's lr (= parent's pc)
-	blr	x2			//   (*ftrace_trace_function)(pc, lr);
-
-skip_ftrace_call:			// }
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-	ldr_l	x2, ftrace_graph_return
-	cmp	x0, x2			//   if ((ftrace_graph_return
-	b.ne	ftrace_graph_caller	//        != ftrace_stub)
-
-	ldr_l	x2, ftrace_graph_entry	//     || (ftrace_graph_entry
-	adr_l	x0, ftrace_graph_entry_stub //  != ftrace_graph_entry_stub))
-	cmp	x0, x2
-	b.ne	ftrace_graph_caller	//     ftrace_graph_caller();
-#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
-	mcount_exit
-SYM_FUNC_END(_mcount)
-EXPORT_SYMBOL(_mcount)
-NOKPROBE(_mcount)
-
-#else /* CONFIG_DYNAMIC_FTRACE */
 /*
  * _mcount() is used to build the kernel with -pg option, but all the branch
  * instructions to _mcount() are replaced to NOP initially at kernel start up,
@@ -247,7 +209,6 @@ SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL)	// ftrace_graph_caller();

 	mcount_exit
 SYM_FUNC_END(ftrace_caller)
-#endif /* CONFIG_DYNAMIC_FTRACE */

 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 /*
diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
index 5cf990d052ba8..b30b955a89211 100644
--- a/arch/arm64/kernel/ftrace.c
+++ b/arch/arm64/kernel/ftrace.c
@@ -60,7 +60,6 @@ int ftrace_regs_query_register_offset(const char *name)
 }
 #endif

-#ifdef CONFIG_DYNAMIC_FTRACE
 /*
  * Replace a single instruction, which may be a branch or NOP.
  * If @validate == true, a replaced instruction is checked against 'old'.
@@ -268,7 +267,6 @@ void arch_ftrace_update_code(int command)
 	command |= FTRACE_MAY_SLEEP;
 	ftrace_modify_all_code(command);
 }
-#endif /* CONFIG_DYNAMIC_FTRACE */

 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 /*
@@ -299,8 +297,6 @@ void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent,
 	}
 }

-#ifdef CONFIG_DYNAMIC_FTRACE
-
 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
 void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
 		       struct ftrace_ops *op, struct ftrace_regs *fregs)
@@ -338,5 +334,4 @@ int ftrace_disable_ftrace_graph_caller(void)
 	return ftrace_modify_graph_caller(false);
 }
 #endif /* CONFIG_DYNAMIC_FTRACE_WITH_ARGS */
-#endif /* CONFIG_DYNAMIC_FTRACE */
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */