From patchwork Wed Jul 27 04:00:21 2022
X-Patchwork-Submitter: Li Huafei
X-Patchwork-Id: 12930025
From: Li Huafei
Subject: [PATCH v3 3/4] ARM: stacktrace: Make stack walk callback consistent with generic code
Date: Wed, 27 Jul 2022 12:00:21 +0800
Message-ID: <20220727040022.139387-4-lihuafei1@huawei.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220727040022.139387-1-lihuafei1@huawei.com>
References: <20220727040022.139387-1-lihuafei1@huawei.com>
As with the generic arch_stack_walk() code, the ARM stack walk code takes a
callback that is called per stack frame. Currently the ARM code always passes
a struct stackframe to the callback while the generic code just passes the pc;
however, none of the users ever reference anything in the struct other than
the pc value. The ARM code also uses a return type of int while the generic
code uses a return type of bool, though in both cases the return value is a
boolean value and the sense is inverted between the two.

In order to reduce code duplication when ARM is converted to use
arch_stack_walk(), change the signature and return sense of the ARM-specific
callback to match that of the generic code.

Signed-off-by: Li Huafei
Reviewed-by: Mark Brown
Reviewed-by: Linus Walleij
---
 arch/arm/include/asm/stacktrace.h |  2 +-
 arch/arm/kernel/perf_callchain.c  |  9 ++++-----
 arch/arm/kernel/return_address.c  |  8 ++++----
 arch/arm/kernel/stacktrace.c      | 13 ++++++-------
 4 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/arch/arm/include/asm/stacktrace.h b/arch/arm/include/asm/stacktrace.h
index 39be2d1aa27b..7269d1e13449 100644
--- a/arch/arm/include/asm/stacktrace.h
+++ b/arch/arm/include/asm/stacktrace.h
@@ -44,7 +44,7 @@ void arm_get_current_stackframe(struct pt_regs *regs, struct stackframe *frame)
 
 extern int unwind_frame(struct stackframe *frame);
 extern void walk_stackframe(struct stackframe *frame,
-			    int (*fn)(struct stackframe *, void *), void *data);
+			    bool (*fn)(void *, unsigned long), void *data);
 extern void dump_mem(const char *lvl, const char *str, unsigned long bottom,
 		     unsigned long top);
 
diff --git a/arch/arm/kernel/perf_callchain.c b/arch/arm/kernel/perf_callchain.c
index bc6b246ab55e..7147edbe56c6 100644
--- a/arch/arm/kernel/perf_callchain.c
+++ b/arch/arm/kernel/perf_callchain.c
@@ -81,13 +81,12 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
  * whist unwinding the stackframe and is like a subroutine return so we use
  * the PC.
  */
-static int
-callchain_trace(struct stackframe *fr,
-		void *data)
+static bool
+callchain_trace(void *data, unsigned long pc)
 {
 	struct perf_callchain_entry_ctx *entry = data;
-	perf_callchain_store(entry, fr->pc);
-	return 0;
+	perf_callchain_store(entry, pc);
+	return true;
 }
 
 void
diff --git a/arch/arm/kernel/return_address.c b/arch/arm/kernel/return_address.c
index 38f1ea9c724d..ac15db66df4c 100644
--- a/arch/arm/kernel/return_address.c
+++ b/arch/arm/kernel/return_address.c
@@ -16,17 +16,17 @@ struct return_address_data {
 	void *addr;
 };
 
-static int save_return_addr(struct stackframe *frame, void *d)
+static bool save_return_addr(void *d, unsigned long pc)
 {
 	struct return_address_data *data = d;
 
 	if (!data->level) {
-		data->addr = (void *)frame->pc;
+		data->addr = (void *)pc;
 
-		return 1;
+		return false;
 	} else {
 		--data->level;
-		return 0;
+		return true;
 	}
 }
 
diff --git a/arch/arm/kernel/stacktrace.c b/arch/arm/kernel/stacktrace.c
index 85443b5d1922..d05968bc7812 100644
--- a/arch/arm/kernel/stacktrace.c
+++ b/arch/arm/kernel/stacktrace.c
@@ -127,12 +127,12 @@ int notrace unwind_frame(struct stackframe *frame)
 #endif
 
 void notrace walk_stackframe(struct stackframe *frame,
-			     int (*fn)(struct stackframe *, void *), void *data)
+			     bool (*fn)(void *, unsigned long), void *data)
 {
 	while (1) {
 		int ret;
 
-		if (fn(frame, data))
+		if (!fn(data, frame->pc))
 			break;
 		ret = unwind_frame(frame);
 		if (ret < 0)
@@ -148,21 +148,20 @@ struct stack_trace_data {
 	unsigned int skip;
 };
 
-static int save_trace(struct stackframe *frame, void *d)
+static bool save_trace(void *d, unsigned long addr)
 {
 	struct stack_trace_data *data = d;
 	struct stack_trace *trace = data->trace;
-	unsigned long addr = frame->pc;
 
 	if (data->no_sched_functions && in_sched_functions(addr))
-		return 0;
+		return true;
 	if (data->skip) {
 		data->skip--;
-		return 0;
+		return true;
 	}
 
 	trace->entries[trace->nr_entries++] = addr;
-	return trace->nr_entries >= trace->max_entries;
+	return trace->nr_entries < trace->max_entries;
 }
 
 /* This must be noinline to so that our skip calculation works correctly */
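
For readers following the series, here is a minimal userspace sketch of the new
callback contract (not part of the patch; the mock walker, the print_entry
callback and the pc values are made up for illustration). The callback now
receives only the pc and returns true to keep walking or false to stop, the
same sense as the generic arch_stack_walk() consumers and the inverse of the
old int-returning ARM callback, where a non-zero return meant stop:

#include <stdbool.h>
#include <stdio.h>

/* New-style callback: matches bool (*fn)(void *data, unsigned long pc). */
static bool print_entry(void *data, unsigned long pc)
{
	unsigned int *remaining = data;

	printf("pc = 0x%lx\n", pc);
	return --(*remaining) > 0;	/* returning false stops the walk */
}

/* Mock of walk_stackframe()'s loop: keep calling fn until it returns false. */
static void walk_mock(const unsigned long *pcs, unsigned int n,
		      bool (*fn)(void *, unsigned long), void *data)
{
	unsigned int i;

	for (i = 0; i < n; i++)
		if (!fn(data, pcs[i]))
			break;
}

int main(void)
{
	unsigned long pcs[] = { 0xc0123456UL, 0xc0234567UL,
				0xc0345678UL, 0xc0456789UL };
	unsigned int budget = 3;	/* record at most three entries */

	walk_mock(pcs, 4, print_entry, &budget);
	return 0;
}

Built with any C99 compiler, this prints the first three mock entries and then
stops, mirroring how save_trace() above now returns false once
trace->max_entries is reached.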