From patchwork Tue Oct 18 12:43:38 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Li Huafei
X-Patchwork-Id: 13010491
From: Li Huafei
Subject: [PATCH v4 1/2] ARM: stacktrace: Make stack walk callback consistent
 with generic code
Date: Tue, 18 Oct 2022 20:43:38 +0800
Message-ID: <20221018124339.115793-2-lihuafei1@huawei.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20221018124339.115793-1-lihuafei1@huawei.com>
References: <20221018124339.115793-1-lihuafei1@huawei.com>

As with the generic arch_stack_walk() code, the ARM stack walk code takes
a callback that is called per stack frame. Currently the ARM code always
passes a struct stackframe to the callback, while the generic code passes
just the pc; however, none of the users ever reference anything in the
struct other than the pc value. The ARM code also uses a return type of
int while the generic code uses bool, and although in both cases the
return value is effectively a boolean, its sense is inverted between the
two.

In order to reduce code duplication when ARM is converted to use
arch_stack_walk(), change the signature and return sense of the ARM
specific callback to match that of the generic code.

Signed-off-by: Li Huafei
Reviewed-by: Mark Brown
Reviewed-by: Linus Walleij
---
 arch/arm/include/asm/stacktrace.h |  2 +-
 arch/arm/kernel/perf_callchain.c  |  9 ++++-----
 arch/arm/kernel/return_address.c  |  8 ++++----
 arch/arm/kernel/stacktrace.c      | 13 ++++++-------
 4 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/arch/arm/include/asm/stacktrace.h b/arch/arm/include/asm/stacktrace.h
index 36b2ff44fcbb..360f0d2406bf 100644
--- a/arch/arm/include/asm/stacktrace.h
+++ b/arch/arm/include/asm/stacktrace.h
@@ -44,7 +44,7 @@ void arm_get_current_stackframe(struct pt_regs *regs, struct stackframe *frame)
 
 extern int unwind_frame(struct stackframe *frame);
 extern void walk_stackframe(struct stackframe *frame,
-			    int (*fn)(struct stackframe *, void *), void *data);
+			    bool (*fn)(void *, unsigned long), void *data);
 extern void dump_mem(const char *lvl, const char *str, unsigned long bottom,
 		     unsigned long top);
 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
diff --git a/arch/arm/kernel/perf_callchain.c b/arch/arm/kernel/perf_callchain.c
index bc6b246ab55e..7147edbe56c6 100644
--- a/arch/arm/kernel/perf_callchain.c
+++ b/arch/arm/kernel/perf_callchain.c
@@ -81,13 +81,12 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
  * whist unwinding the stackframe and is like a subroutine return so we use
  * the PC.
  */
-static int
-callchain_trace(struct stackframe *fr,
-		void *data)
+static bool
+callchain_trace(void *data, unsigned long pc)
 {
 	struct perf_callchain_entry_ctx *entry = data;
-	perf_callchain_store(entry, fr->pc);
-	return 0;
+	perf_callchain_store(entry, pc);
+	return true;
 }
 
 void
diff --git a/arch/arm/kernel/return_address.c b/arch/arm/kernel/return_address.c
index 38f1ea9c724d..ac15db66df4c 100644
--- a/arch/arm/kernel/return_address.c
+++ b/arch/arm/kernel/return_address.c
@@ -16,17 +16,17 @@ struct return_address_data {
 	void *addr;
 };
 
-static int save_return_addr(struct stackframe *frame, void *d)
+static bool save_return_addr(void *d, unsigned long pc)
 {
 	struct return_address_data *data = d;
 
 	if (!data->level) {
-		data->addr = (void *)frame->pc;
+		data->addr = (void *)pc;
 
-		return 1;
+		return false;
 	} else {
 		--data->level;
-		return 0;
+		return true;
 	}
 }
 
diff --git a/arch/arm/kernel/stacktrace.c b/arch/arm/kernel/stacktrace.c
index 85443b5d1922..d05968bc7812 100644
--- a/arch/arm/kernel/stacktrace.c
+++ b/arch/arm/kernel/stacktrace.c
@@ -127,12 +127,12 @@ int notrace unwind_frame(struct stackframe *frame)
 #endif
 
 void notrace walk_stackframe(struct stackframe *frame,
-			     int (*fn)(struct stackframe *, void *), void *data)
+			     bool (*fn)(void *, unsigned long), void *data)
 {
 	while (1) {
 		int ret;
 
-		if (fn(frame, data))
+		if (!fn(data, frame->pc))
 			break;
 		ret = unwind_frame(frame);
 		if (ret < 0)
@@ -148,21 +148,20 @@ struct stack_trace_data {
 	unsigned int skip;
 };
 
-static int save_trace(struct stackframe *frame, void *d)
+static bool save_trace(void *d, unsigned long addr)
 {
 	struct stack_trace_data *data = d;
 	struct stack_trace *trace = data->trace;
-	unsigned long addr = frame->pc;
 
 	if (data->no_sched_functions && in_sched_functions(addr))
-		return 0;
+		return true;
 	if (data->skip) {
 		data->skip--;
-		return 0;
+		return true;
 	}
 
 	trace->entries[trace->nr_entries++] = addr;
-	return trace->nr_entries >= trace->max_entries;
+	return trace->nr_entries < trace->max_entries;
 }
 
 /* This must be noinline to so that our skip calculation works correctly */
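
For readers following the conversion, here is a minimal user-space sketch of
the two callback conventions side by side. It is illustrative only, not
kernel code: walk_old()/walk_new(), print_old()/print_new() and the
array-backed "frames" are stand-ins for walk_stackframe() and the real
unwinder, and struct stackframe here carries only the pc field the callbacks
actually use.

/*
 * Standalone sketch (not kernel code) of the callback convention change.
 * The walker iterates a fixed array instead of unwinding a real stack.
 */
#include <stdbool.h>
#include <stdio.h>

struct stackframe {
	unsigned long pc;	/* the only member the callbacks ever used */
};

/* Old ARM convention: callback gets the whole frame, non-zero return stops. */
static void walk_old(struct stackframe *frames, int n,
		     int (*fn)(struct stackframe *, void *), void *data)
{
	for (int i = 0; i < n; i++)
		if (fn(&frames[i], data))	/* non-zero => stop walking */
			break;
}

/* New/generic convention: callback gets only the pc, false return stops. */
static void walk_new(struct stackframe *frames, int n,
		     bool (*fn)(void *, unsigned long), void *data)
{
	for (int i = 0; i < n; i++)
		if (!fn(data, frames[i].pc))	/* false => stop walking */
			break;
}

/* Example callback in each style: record at most *(int *)data entries. */
static int print_old(struct stackframe *fr, void *data)
{
	int *left = data;

	printf("old: %#lx\n", fr->pc);
	return --(*left) <= 0;		/* 1 (stop) once the budget is used up */
}

static bool print_new(void *data, unsigned long pc)
{
	int *left = data;

	printf("new: %#lx\n", pc);
	return --(*left) > 0;		/* true (keep going) while budget remains */
}

int main(void)
{
	struct stackframe frames[] = { { 0x8010 }, { 0x8020 }, { 0x8030 } };
	int budget;

	budget = 2;
	walk_old(frames, 3, print_old, &budget);	/* prints two "old" entries */
	budget = 2;
	walk_new(frames, 3, print_new, &budget);	/* prints two "new" entries */
	return 0;
}

The behavioural point mirrored from the patch is the inverted stop
condition: the old walker stops when the callback returns non-zero, while
the new one stops when the callback returns false, which is why save_trace()
now returns "nr_entries < max_entries" instead of "nr_entries >= max_entries".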