From patchwork Wed Oct 11 07:26:48 2023
X-Patchwork-Submitter: Jinjie Ruan
X-Patchwork-Id: 13416766
From: Jinjie Ruan
Subject: [PATCH v5.10 02/15] arm64: die(): pass 'err' as long
Date: Wed, 11 Oct 2023 07:26:48 +0000
Message-ID: <20231011072701.876772-3-ruanjinjie@huawei.com>
In-Reply-To: <20231011072701.876772-1-ruanjinjie@huawei.com>
References: <20231011072701.876772-1-ruanjinjie@huawei.com>
X-Mailer: git-send-email 2.34.1

From: Mark Rutland

Recently, we reworked a lot of code to consistently pass ESR_ELx as a
64-bit quantity. However, we missed that this can be passed into die()
and __die() as the 'err' parameter where it is truncated to a 32-bit
int.
As notify_die() already takes 'err' as a long, this patch changes die()
and __die() to also take 'err' as a long, ensuring that the full value
of ESR_ELx is retained. At the same time, die() is updated to
consistently log 'err' as a zero-padded 64-bit quantity.

Subsequent patches will pass the ESR_ELx value to die() for a number of
exceptions.

Signed-off-by: Mark Rutland
Reviewed-by: Mark Brown
Reviewed-by: Anshuman Khandual
Cc: Alexandru Elisei
Cc: Amit Daniel Kachhap
Cc: James Morse
Cc: Will Deacon
Link: https://lore.kernel.org/r/20220913101732.3925290-3-mark.rutland@arm.com
Signed-off-by: Catalin Marinas
---
 arch/arm64/include/asm/system_misc.h | 2 +-
 arch/arm64/kernel/traps.c            | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/system_misc.h b/arch/arm64/include/asm/system_misc.h
index 1ab63cfbbaf1..624a608933f1 100644
--- a/arch/arm64/include/asm/system_misc.h
+++ b/arch/arm64/include/asm/system_misc.h
@@ -18,7 +18,7 @@
 
 struct pt_regs;
 
-void die(const char *msg, struct pt_regs *regs, int err);
+void die(const char *msg, struct pt_regs *regs, long err);
 
 struct siginfo;
 void arm64_notify_die(const char *str, struct pt_regs *regs,
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index fcc490699c22..3a88bb1202ad 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -90,12 +90,12 @@ static void dump_kernel_instr(const char *lvl, struct pt_regs *regs)
 
 #define S_SMP " SMP"
 
-static int __die(const char *str, int err, struct pt_regs *regs)
+static int __die(const char *str, long err, struct pt_regs *regs)
 {
 	static int die_counter;
 	int ret;
 
-	pr_emerg("Internal error: %s: %x [#%d]" S_PREEMPT S_SMP "\n",
+	pr_emerg("Internal error: %s: %016lx [#%d]" S_PREEMPT S_SMP "\n",
 		 str, err, ++die_counter);
 
 	/* trap and error numbers are mostly meaningless on ARM */
@@ -116,7 +116,7 @@ static DEFINE_RAW_SPINLOCK(die_lock);
 /*
  * This function is protected against re-entrancy.
  */
-void die(const char *str, struct pt_regs *regs, int err)
+void die(const char *str, struct pt_regs *regs, long err)
 {
 	int ret;
 	unsigned long flags;
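
To illustrate the truncation this patch fixes, here is a minimal standalone C
sketch (not kernel code, not part of the patch): ESR_ELx is a 64-bit value, so
squeezing it through an 'int' parameter silently drops everything above bit 31
(e.g. the ISS2 field), while a 'long' (64 bits under LP64, as on arm64) keeps
the full value and can be logged zero-padded, matching the %016lx format used
above. The old_die()/new_die() helpers and the sample ESR value below are
hypothetical stand-ins for the kernel's die().

/*
 * Illustrative userspace sketch of the truncation; assumes an LP64 target
 * where 'long' is 64 bits, as on arm64.
 */
#include <stdio.h>
#include <stdint.h>

/* Old-style: 'err' is truncated to 32 bits at the call boundary. */
static void old_die(const char *msg, int err)
{
	printf("%s: %x\n", msg, (unsigned int)err);
}

/* New-style: 'err' keeps all 64 bits and is logged zero-padded. */
static void new_die(const char *msg, long err)
{
	printf("%s: %016lx\n", msg, err);
}

int main(void)
{
	/* Hypothetical ESR_ELx value with bits set above bit 31 (e.g. ISS2). */
	uint64_t esr = 0x0000000f96000045ULL;

	old_die("Internal error", esr);	/* prints 96000045 - upper bits lost */
	new_die("Internal error", esr);	/* prints 0000000f96000045 */
	return 0;
}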