From patchwork Fri Aug 4 02:10:26 2023
X-Patchwork-Submitter: Charlie Jenkins
X-Patchwork-Id: 13346138
From: Charlie Jenkins
Date: Thu, 03 Aug 2023 19:10:26 -0700
Subject: [PATCH 01/10] RISC-V: Expand instruction definitions
Message-Id: <20230803-master-refactor-instructions-v4-v1-1-2128e61fa4ff@rivosinc.com>
References: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
In-Reply-To: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
    kvm-riscv@lists.infradead.org, bpf@vger.kernel.org
Cc: Paul Walmsley, Palmer Dabbelt, Albert Ou, Peter Zijlstra, Josh Poimboeuf,
    Jason Baron, Steven Rostedt, Ard Biesheuvel, Anup Patel, Atish Patra,
    Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
    Song Liu, Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev,
    Hao Luo, Jiri Olsa, Björn Töpel, Luke Nelson, Xi Wang, Nam Cao,
    Charlie Jenkins

There are many systems across the kernel that rely on directly creating and modifying instructions. In order to unify them, create shared definitions for instructions and registers.
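For example, a user of these definitions could match and decode an uncompressed
instruction roughly as follows (illustrative only; the helper names are
placeholders, while the RVG_MATCH_*/RVG_MASK_* and RV_INSN_RD_* macros are the
ones introduced by this patch):

  #include <linux/types.h>
  #include <asm/insn.h>

  /* Hypothetical helpers, sketched against the definitions added below. */
  static bool insn_is_addi(u32 insn)
  {
  	/* The fixed bits (opcode | funct3) must equal the ADDI pattern. */
  	return (insn & RVG_MASK_ADDI) == RVG_MATCH_ADDI;
  }

  static u32 insn_rd(u32 insn)
  {
  	/* rd lives at bits [11:7] of an uncompressed instruction. */
  	return (insn >> RV_INSN_RD_OPOFF) & RV_INSN_RD_MASK;
  }

The same MASK/MATCH pairing is provided for the compressed (RVC) encodings.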
Signed-off-by: Charlie Jenkins --- arch/riscv/include/asm/insn.h | 2742 +++++++++++++++++++++++++++--- arch/riscv/include/asm/reg.h | 88 + arch/riscv/kernel/kgdb.c | 4 +- arch/riscv/kernel/probes/simulate-insn.c | 39 +- arch/riscv/kernel/vector.c | 2 +- 5 files changed, 2629 insertions(+), 246 deletions(-) diff --git a/arch/riscv/include/asm/insn.h b/arch/riscv/include/asm/insn.h index 4e1505cef8aa..04f7649e1add 100644 --- a/arch/riscv/include/asm/insn.h +++ b/arch/riscv/include/asm/insn.h @@ -7,15 +7,28 @@ #define _ASM_RISCV_INSN_H #include +#include + +#define RV_INSN_FUNCT5_IN_OPOFF 2 +#define RV_INSN_AQ_IN_OPOFF 1 +#define RV_INSN_RL_IN_OPOFF 0 -#define RV_INSN_FUNCT3_MASK GENMASK(14, 12) -#define RV_INSN_FUNCT3_OPOFF 12 -#define RV_INSN_OPCODE_MASK GENMASK(6, 0) #define RV_INSN_OPCODE_OPOFF 0 +#define RV_INSN_FUNCT3_OPOFF 12 +#define RV_INSN_FUNCT5_OPOFF 27 +#define RV_INSN_FUNCT7_OPOFF 25 #define RV_INSN_FUNCT12_OPOFF 20 - -#define RV_ENCODE_FUNCT3(f_) (RVG_FUNCT3_##f_ << RV_INSN_FUNCT3_OPOFF) -#define RV_ENCODE_FUNCT12(f_) (RVG_FUNCT12_##f_ << RV_INSN_FUNCT12_OPOFF) +#define RV_INSN_RD_OPOFF 7 +#define RV_INSN_RS1_OPOFF 15 +#define RV_INSN_RS2_OPOFF 20 +#define RV_INSN_OPCODE_MASK GENMASK(6, 0) +#define RV_INSN_FUNCT3_MASK GENMASK(2, 0) +#define RV_INSN_FUNCT5_MASK GENMASK(4, 0) +#define RV_INSN_FUNCT7_MASK GENMASK(6, 0) +#define RV_INSN_FUNCT12_MASK GENMASK(11, 0) +#define RV_INSN_RD_MASK GENMASK(4, 0) +#define RV_INSN_RS1_MASK GENMASK(4, 0) +#define RV_INSN_RS2_MASK GENMASK(4, 0) /* The bit field of immediate value in I-type instruction */ #define RV_I_IMM_SIGN_OPOFF 31 @@ -24,6 +37,33 @@ #define RV_I_IMM_11_0_OFF 0 #define RV_I_IMM_11_0_MASK GENMASK(11, 0) +/* The bit field of immediate value in S-type instruction */ +#define RV_S_IMM_11_5_OPOFF 25 +#define RV_S_IMM_4_0_OPOFF 7 +#define RV_S_IMM_11_5_OFF 5 +#define RV_S_IMM_4_0_OFF 0 +#define RV_S_IMM_11_5_MASK GENMASK(6, 0) +#define RV_S_IMM_4_0_MASK GENMASK(4, 0) + +/* The bit field of immediate value in B-type instruction */ +#define RV_B_IMM_SIGN_OPOFF 31 +#define RV_B_IMM_4_1_OPOFF 8 +#define RV_B_IMM_10_5_OPOFF 25 +#define RV_B_IMM_11_OPOFF 7 +#define RV_B_IMM_SIGN_OFF 12 +#define RV_B_IMM_4_1_OFF 1 +#define RV_B_IMM_10_5_OFF 5 +#define RV_B_IMM_11_OFF 11 +#define RV_B_IMM_SIGN_MASK GENMASK(0, 0) +#define RV_B_IMM_4_1_MASK GENMASK(3, 0) +#define RV_B_IMM_10_5_MASK GENMASK(5, 0) +#define RV_B_IMM_11_MASK GENMASK(0, 0) + +/* The bit field of immediate value in S-type instruction */ +#define RV_S_IMM_31_12_OPOFF 12 +#define RV_S_IMM_31_12_OFF 12 +#define RV_S_IMM_31_12_MASK GENMASK(19, 0) + /* The bit field of immediate value in J-type instruction */ #define RV_J_IMM_SIGN_OPOFF 31 #define RV_J_IMM_10_1_OPOFF 21 @@ -38,82 +78,716 @@ #define RV_J_IMM_19_12_MASK GENMASK(7, 0) /* - * U-type IMMs contain the upper 20bits [31:20] of an immediate with + * U-type IMMs contain the upper 20bits [31:12] of an immediate with * the rest filled in by zeros, so no shifting required. Similarly, * bit31 contains the signed state, so no sign extension necessary. 
*/ #define RV_U_IMM_SIGN_OPOFF 31 -#define RV_U_IMM_31_12_OPOFF 0 -#define RV_U_IMM_31_12_MASK GENMASK(31, 12) - -/* The bit field of immediate value in B-type instruction */ -#define RV_B_IMM_SIGN_OPOFF 31 -#define RV_B_IMM_10_5_OPOFF 25 -#define RV_B_IMM_4_1_OPOFF 8 -#define RV_B_IMM_11_OPOFF 7 -#define RV_B_IMM_SIGN_OFF 12 -#define RV_B_IMM_10_5_OFF 5 -#define RV_B_IMM_4_1_OFF 1 -#define RV_B_IMM_11_OFF 11 -#define RV_B_IMM_10_5_MASK GENMASK(5, 0) -#define RV_B_IMM_4_1_MASK GENMASK(3, 0) -#define RV_B_IMM_11_MASK GENMASK(0, 0) +#define RV_U_IMM_31_12_OPOFF 12 +#define RV_U_IMM_31_12_OFF 12 +#define RV_U_IMM_SIGN_OFF 31 +#define RV_U_IMM_31_12_MASK GENMASK(19, 0) /* The register offset in RVG instruction */ #define RVG_RS1_OPOFF 15 #define RVG_RS2_OPOFF 20 #define RVG_RD_OPOFF 7 +#define RVG_RS1_MASK GENMASK(4, 0) +#define RVG_RS2_MASK GENMASK(4, 0) #define RVG_RD_MASK GENMASK(4, 0) -/* The bit field of immediate value in RVC J instruction */ -#define RVC_J_IMM_SIGN_OPOFF 12 -#define RVC_J_IMM_4_OPOFF 11 -#define RVC_J_IMM_9_8_OPOFF 9 -#define RVC_J_IMM_10_OPOFF 8 -#define RVC_J_IMM_6_OPOFF 7 -#define RVC_J_IMM_7_OPOFF 6 -#define RVC_J_IMM_3_1_OPOFF 3 -#define RVC_J_IMM_5_OPOFF 2 -#define RVC_J_IMM_SIGN_OFF 11 -#define RVC_J_IMM_4_OFF 4 -#define RVC_J_IMM_9_8_OFF 8 -#define RVC_J_IMM_10_OFF 10 -#define RVC_J_IMM_6_OFF 6 -#define RVC_J_IMM_7_OFF 7 -#define RVC_J_IMM_3_1_OFF 1 -#define RVC_J_IMM_5_OFF 5 -#define RVC_J_IMM_4_MASK GENMASK(0, 0) -#define RVC_J_IMM_9_8_MASK GENMASK(1, 0) -#define RVC_J_IMM_10_MASK GENMASK(0, 0) -#define RVC_J_IMM_6_MASK GENMASK(0, 0) -#define RVC_J_IMM_7_MASK GENMASK(0, 0) -#define RVC_J_IMM_3_1_MASK GENMASK(2, 0) -#define RVC_J_IMM_5_MASK GENMASK(0, 0) +/* Register sizes in RV instructions */ +#define RV_STANDARD_REG_BITS 5 +#define RV_COMPRESSED_REG_BITS 3 +#define RV_STANDARD_REG_MASK GENMASK(4, 0) +#define RV_COMPRESSED_REG_MASK GENMASK(2, 0) + +/* The bit field for F,D,Q extensions */ +#define RVG_FL_FS_WIDTH_OFF 12 +#define RVG_FL_FS_WIDTH_MASK GENMASK(3, 0) +#define RVG_FL_FS_WIDTH_W 2 +#define RVG_FL_FS_WIDTH_D 3 +#define RVG_LS_FS_WIDTH_Q 4 + +/* The bit field for Zicsr extension */ +#define RVG_SYSTEM_CSR_OPOFF 20 +#define RVG_SYSTEM_CSR_MASK GENMASK(12, 0) + +/* RVV widths */ +#define RVV_VL_VS_WIDTH_8 0 +#define RVV_VL_VS_WIDTH_16 5 +#define RVV_VL_VS_WIDTH_32 6 +#define RVV_VL_VS_WIDTH_64 7 + +/* The bit field of immediate value in RVC I instruction */ +#define RVC_I_IMM_LO_OPOFF 2 +#define RVC_I_IMM_HI_OPOFF 12 +#define RVC_I_IMM_LO_OFF 0 +#define RVC_I_IMM_HI_OFF 0 +#define RVC_I_IMM_LO_MASK GENMASK(4, 0) +#define RVC_I_IMM_HI_MASK GENMASK(0, 0) + +/* The bit field of immediate value in RVC SS instruction */ +#define RVC_SS_IMM_OPOFF 6 +#define RVC_SS_IMM_OFF 0 +#define RVC_SS_IMM_MASK GENMASK(5, 0) + +/* The bit field of immediate value in RVC IW instruction */ +#define RVC_IW_IMM_OPOFF 5 +#define RVC_IW_IMM_OFF 0 +#define RVC_IW_IMM_MASK GENMASK(7, 0) + +/* The bit field of immediate value in RVC L instruction */ +#define RVC_L_IMM_LO_OPOFF 5 +#define RVC_L_IMM_HI_OPOFF 10 +#define RVC_L_IMM_LO_OFF 0 +#define RVC_L_IMM_HI_OFF 0 +#define RVC_L_IMM_LO_MASK GENMASK(1, 0) +#define RVC_L_IMM_HI_MASK GENMASK(2, 0) + +/* The bit field of immediate value in RVC S instruction */ +#define RVC_S_IMM_LO_OPOFF 5 +#define RVC_S_IMM_HI_OPOFF 10 +#define RVC_S_IMM_LO_OFF 0 +#define RVC_S_IMM_HI_OFF 0 +#define RVC_S_IMM_LO_MASK GENMASK(1, 0) +#define RVC_S_IMM_HI_MASK GENMASK(2, 0) /* The bit field of immediate value in RVC B instruction */ -#define 
RVC_B_IMM_SIGN_OPOFF 12 -#define RVC_B_IMM_4_3_OPOFF 10 -#define RVC_B_IMM_7_6_OPOFF 5 -#define RVC_B_IMM_2_1_OPOFF 3 -#define RVC_B_IMM_5_OPOFF 2 -#define RVC_B_IMM_SIGN_OFF 8 -#define RVC_B_IMM_4_3_OFF 3 -#define RVC_B_IMM_7_6_OFF 6 -#define RVC_B_IMM_2_1_OFF 1 -#define RVC_B_IMM_5_OFF 5 -#define RVC_B_IMM_4_3_MASK GENMASK(1, 0) -#define RVC_B_IMM_7_6_MASK GENMASK(1, 0) -#define RVC_B_IMM_2_1_MASK GENMASK(1, 0) -#define RVC_B_IMM_5_MASK GENMASK(0, 0) - -#define RVC_INSN_FUNCT4_MASK GENMASK(15, 12) -#define RVC_INSN_FUNCT4_OPOFF 12 -#define RVC_INSN_FUNCT3_MASK GENMASK(15, 13) -#define RVC_INSN_FUNCT3_OPOFF 13 -#define RVC_INSN_J_RS2_MASK GENMASK(6, 2) -#define RVC_INSN_OPCODE_MASK GENMASK(1, 0) -#define RVC_ENCODE_FUNCT3(f_) (RVC_FUNCT3_##f_ << RVC_INSN_FUNCT3_OPOFF) -#define RVC_ENCODE_FUNCT4(f_) (RVC_FUNCT4_##f_ << RVC_INSN_FUNCT4_OPOFF) +#define RVC_B_IMM_LO_OFF 2 +#define RVC_B_IMM_HI_OFF 10 +#define RVC_B_IMM_LO_OPOFF 0 +#define RVC_B_IMM_HI_OPOFF 0 +#define RVC_B_IMM_LO_MASK GENMASK(4, 0) +#define RVC_B_IMM_HI_MASK GENMASK(2, 0) + +/* The bit field of immediate value in RVC J instruction */ +#define RVC_J_IMM_OFF 2 +#define RVC_J_IMM_OPOFF 0 +#define RVC_J_IMM_MASK GENMASK(10, 0) + +/* + * Bit field of various RVC instruction immediates. + * These base OPOFF on the start of the immediate + * rather than the start of the instruction. + */ + +/* The bit field of immediate value in RVC ADDI4SPN instruction */ +#define RVC_ADDI4SPN_IMM_5_4_OPOFF 11 +#define RVC_ADDI4SPN_IMM_9_6_OPOFF 7 +#define RVC_ADDI4SPN_IMM_2_OPOFF 6 +#define RVC_ADDI4SPN_IMM_3_OPOFF 5 +#define RVC_ADDI4SPN_IMM_5_4_OFF 4 +#define RVC_ADDI4SPN_IMM_9_6_OFF 6 +#define RVC_ADDI4SPN_IMM_2_OFF 2 +#define RVC_ADDI4SPN_IMM_3_OFF 3 +#define RVC_ADDI4SPN_IMM_5_4_MASK GENMASK(1, 0) +#define RVC_ADDI4SPN_IMM_9_6_MASK GENMASK(3, 0) +#define RVC_ADDI4SPN_IMM_2_MASK GENMASK(0, 0) +#define RVC_ADDI4SPN_IMM_3_MASK GENMASK(0, 0) + +/* The bit field of immediate value in RVC FLD instruction */ +#define RVC_FLD_IMM_5_3_OPOFF 0 +#define RVC_FLD_IMM_7_6_OPOFF 0 +#define RVC_FLD_IMM_5_3_OFF 3 +#define RVC_FLD_IMM_7_6_OFF 6 +#define RVC_FLD_IMM_5_3_MASK GENMASK(2, 0) +#define RVC_FLD_IMM_7_6_MASK GENMASK(1, 0) + +/* The bit field of immediate value in RVC LW instruction */ +#define RVC_LW_IMM_5_3_OPOFF 0 +#define RVC_LW_IMM_2_OPOFF 1 +#define RVC_LW_IMM_6_OPOFF 0 +#define RVC_LW_IMM_5_3_OFF 3 +#define RVC_LW_IMM_2_OFF 2 +#define RVC_LW_IMM_6_OFF 6 +#define RVC_LW_IMM_5_3_MASK GENMASK(2, 0) +#define RVC_LW_IMM_2_MASK GENMASK(0, 0) +#define RVC_LW_IMM_6_MASK GENMASK(0, 0) + +/* The bit field of immediate value in RVC FLW instruction */ +#define RVC_FLW_IMM_5_3_OPOFF 0 +#define RVC_FLW_IMM_2_OPOFF 1 +#define RVC_FLW_IMM_6_OPOFF 0 +#define RVC_FLW_IMM_5_3_OFF 3 +#define RVC_FLW_IMM_2_OFF 2 +#define RVC_FLW_IMM_6_OFF 6 +#define RVC_FLW_IMM_5_3_MASK GENMASK(2, 0) +#define RVC_FLW_IMM_2_MASK GENMASK(0, 0) +#define RVC_FLW_IMM_6_MASK GENMASK(0, 0) + +/* The bit field of immediate value in RVC LD instruction */ +#define RVC_LD_IMM_5_3_OPOFF 0 +#define RVC_LD_IMM_7_6_OPOFF 0 +#define RVC_LD_IMM_5_3_OFF 3 +#define RVC_LD_IMM_7_6_OFF 6 +#define RVC_LD_IMM_5_3_MASK GENMASK(2, 0) +#define RVC_LD_IMM_7_6_MASK GENMASK(1, 0) + +/* The bit field of immediate value in RVC FSD instruction */ +#define RVC_FSD_IMM_5_3_OPOFF 0 +#define RVC_FSD_IMM_7_6_OPOFF 0 +#define RVC_FSD_IMM_5_3_OFF 3 +#define RVC_FSD_IMM_7_6_OFF 6 +#define RVC_FSD_IMM_5_3_MASK GENMASK(2, 0) +#define RVC_FSD_IMM_7_6_MASK GENMASK(1, 0) + +/* The bit field of immediate value in RVC SW 
instruction */ +#define RVC_SW_IMM_5_3_OPOFF 0 +#define RVC_SW_IMM_2_OPOFF 1 +#define RVC_SW_IMM_6_OPOFF 0 +#define RVC_SW_IMM_5_3_OFF 3 +#define RVC_SW_IMM_2_OFF 2 +#define RVC_SW_IMM_6_OFF 6 +#define RVC_SW_IMM_5_3_MASK GENMASK(2, 0) +#define RVC_SW_IMM_2_MASK GENMASK(0, 0) +#define RVC_SW_IMM_6_MASK GENMASK(0, 0) + +/* The bit field of immediate value in RVC FSW instruction */ +#define RVC_FSW_IMM_5_3_OPOFF 0 +#define RVC_FSW_IMM_2_OPOFF 1 +#define RVC_FSW_IMM_6_OPOFF 0 +#define RVC_FSW_IMM_5_3_OFF 3 +#define RVC_FSW_IMM_2_OFF 2 +#define RVC_FSW_IMM_6_OFF 6 +#define RVC_FSW_IMM_5_3_MASK GENMASK(2, 0) +#define RVC_FSW_IMM_2_MASK GENMASK(0, 0) +#define RVC_FSW_IMM_6_MASK GENMASK(0, 0) + +/* The bit field of immediate value in RVC SD instruction */ +#define RVC_SD_IMM_5_3_OPOFF 0 +#define RVC_SD_IMM_7_6_OPOFF 0 +#define RVC_SD_IMM_5_3_OFF 3 +#define RVC_SD_IMM_7_6_OFF 6 +#define RVC_SD_IMM_5_3_MASK GENMASK(2, 0) +#define RVC_SD_IMM_7_6_MASK GENMASK(1, 0) + +/* The bit field of immediate value in RVC ADDI instruction */ +#define RVC_ADDI_IMM_5_OPOFF 0 +#define RVC_ADDI_IMM_4_0_OPOFF 0 +#define RVC_ADDI_IMM_5_OFF 5 +#define RVC_ADDI_IMM_4_0_OFF 0 +#define RVC_ADDI_IMM_5_MASK GENMASK(0, 0) +#define RVC_ADDI_IMM_4_0_MASK GENMASK(4, 0) + +/* The bit field of immediate value in RVC JAL instruction */ +#define RVC_JAL_IMM_SIGN_OPOFF 12 +#define RVC_JAL_IMM_4_OPOFF 11 +#define RVC_JAL_IMM_9_8_OPOFF 9 +#define RVC_JAL_IMM_10_OPOFF 8 +#define RVC_JAL_IMM_6_OPOFF 7 +#define RVC_JAL_IMM_7_OPOFF 6 +#define RVC_JAL_IMM_3_1_OPOFF 3 +#define RVC_JAL_IMM_5_OPOFF 2 +#define RVC_JAL_IMM_SIGN_OFF 11 +#define RVC_JAL_IMM_4_OFF 4 +#define RVC_JAL_IMM_9_8_OFF 8 +#define RVC_JAL_IMM_10_OFF 10 +#define RVC_JAL_IMM_6_OFF 6 +#define RVC_JAL_IMM_7_OFF 7 +#define RVC_JAL_IMM_3_1_OFF 1 +#define RVC_JAL_IMM_5_OFF 5 +#define RVC_JAL_IMM_SIGN_MASK GENMASK(0, 0) +#define RVC_JAL_IMM_4_MASK GENMASK(0, 0) +#define RVC_JAL_IMM_9_8_MASK GENMASK(1, 0) +#define RVC_JAL_IMM_10_MASK GENMASK(0, 0) +#define RVC_JAL_IMM_6_MASK GENMASK(0, 0) +#define RVC_JAL_IMM_7_MASK GENMASK(0, 0) +#define RVC_JAL_IMM_3_1_MASK GENMASK(2, 0) +#define RVC_JAL_IMM_5_MASK GENMASK(0, 0) + +/* The bit field of immediate value in RVC ADDIW instruction */ +#define RVC_ADDIW_IMM_5_OPOFF 0 +#define RVC_ADDIW_IMM_4_0_OPOFF 0 +#define RVC_ADDIW_IMM_5_OFF 5 +#define RVC_ADDIW_IMM_4_0_OFF 0 +#define RVC_ADDIW_IMM_5_MASK GENMASK(0, 0) +#define RVC_ADDIW_IMM_4_0_MASK GENMASK(4, 0) + +/* The bit field of immediate value in RVC LI instruction */ +#define RVC_LI_IMM_5_OPOFF 0 +#define RVC_LI_IMM_4_0_OPOFF 0 +#define RVC_LI_IMM_5_OFF 5 +#define RVC_LI_IMM_4_0_OFF 0 +#define RVC_LI_IMM_5_MASK GENMASK(0, 0) +#define RVC_LI_IMM_4_0_MASK GENMASK(4, 0) + +/* The bit field of immediate value in RVC ADDI16SP instruction */ +#define RVC_ADDI16SP_IMM_9_OPOFF 0 +#define RVC_ADDI16SP_IMM_4_OPOFF 4 +#define RVC_ADDI16SP_IMM_6_OPOFF 3 +#define RVC_ADDI16SP_IMM_8_7_OPOFF 1 +#define RVC_ADDI16SP_IMM_5_OPOFF 0 +#define RVC_ADDI16SP_IMM_9_OFF 9 +#define RVC_ADDI16SP_IMM_4_OFF 4 +#define RVC_ADDI16SP_IMM_6_OFF 6 +#define RVC_ADDI16SP_IMM_8_7_OFF 7 +#define RVC_ADDI16SP_IMM_5_OFF 5 +#define RVC_ADDI16SP_IMM_9_MASK GENMASK(0, 0) +#define RVC_ADDI16SP_IMM_4_MASK GENMASK(0, 0) +#define RVC_ADDI16SP_IMM_6_MASK GENMASK(0, 0) +#define RVC_ADDI16SP_IMM_8_7_MASK GENMASK(1, 0) +#define RVC_ADDI16SP_IMM_5_MASK GENMASK(0, 0) + +/* The bit field of immediate value in RVC LUI instruction */ +#define RVC_LUI_IMM_17_OPOFF 0 +#define RVC_LUI_IMM_16_12_OPOFF 0 +#define RVC_LUI_IMM_17_OFF 17 +#define 
RVC_LUI_IMM_16_12_OFF 12 +#define RVC_LUI_IMM_17_MASK GENMASK(0, 0) +#define RVC_LUI_IMM_16_12_MASK GENMASK(4, 0) + +/* The bit field of immediate value in RVC SRLI instruction */ +#define RVC_SRLI_IMM_5_OPOFF 3 +#define RVC_SRLI_IMM_FUNC2_OPOFF 0 +#define RVC_SRLI_IMM_4_0_OPOFF 0 +#define RVC_SRLI_IMM_5_OFF 5 +#define RVC_SRLI_IMM_4_0_OFF 0 +#define RVC_SRLI_IMM_5_MASK GENMASK(0, 0) +#define RVC_SRLI_IMM_4_0_MASK GENMASK(4, 0) + +/* The bit field of immediate value in RVC SRAI instruction */ +#define RVC_SRAI_IMM_5_OPOFF 3 +#define RVC_SRAI_IMM_FUNC2_OPOFF 0 +#define RVC_SRAI_IMM_4_0_OPOFF 0 +#define RVC_SRAI_IMM_5_OFF 5 +#define RVC_SRAI_IMM_4_0_OFF 0 +#define RVC_SRAI_IMM_5_MASK GENMASK(0, 0) +#define RVC_SRAI_IMM_4_0_MASK GENMASK(4, 0) + +/* The bit field of immediate value in RVC ANDI instruction */ +#define RVC_ANDI_IMM_5_OPOFF 3 +#define RVC_ANDI_IMM_FUNC2_OPOFF 0 +#define RVC_ANDI_IMM_4_0_OPOFF 0 +#define RVC_ANDI_IMM_5_OFF 5 +#define RVC_ANDI_IMM_4_0_OFF 0 +#define RVC_ANDI_IMM_5_MASK GENMASK(0, 0) +#define RVC_ANDI_IMM_4_0_MASK GENMASK(4, 0) + +/* The bit field of immediate value in RVC J instruction */ +#define RVC_J_IMM_SIGN_OPOFF 12 +#define RVC_J_IMM_4_OPOFF 11 +#define RVC_J_IMM_9_8_OPOFF 9 +#define RVC_J_IMM_10_OPOFF 8 +#define RVC_J_IMM_6_OPOFF 7 +#define RVC_J_IMM_7_OPOFF 6 +#define RVC_J_IMM_3_1_OPOFF 3 +#define RVC_J_IMM_5_OPOFF 2 +#define RVC_J_IMM_SIGN_OFF 11 +#define RVC_J_IMM_4_OFF 4 +#define RVC_J_IMM_9_8_OFF 8 +#define RVC_J_IMM_10_OFF 10 +#define RVC_J_IMM_6_OFF 6 +#define RVC_J_IMM_7_OFF 7 +#define RVC_J_IMM_3_1_OFF 1 +#define RVC_J_IMM_5_OFF 5 +#define RVC_J_IMM_SIGN_MASK GENMASK(0, 0) +#define RVC_J_IMM_4_MASK GENMASK(0, 0) +#define RVC_J_IMM_9_8_MASK GENMASK(1, 0) +#define RVC_J_IMM_10_MASK GENMASK(0, 0) +#define RVC_J_IMM_6_MASK GENMASK(0, 0) +#define RVC_J_IMM_7_MASK GENMASK(0, 0) +#define RVC_J_IMM_3_1_MASK GENMASK(2, 0) +#define RVC_J_IMM_5_MASK GENMASK(0, 0) + +/* The bit field of immediate value in RVC BEQZ/BNEZ instruction */ +#define RVC_BZ_IMM_SIGN_OPOFF 12 +#define RVC_BZ_IMM_4_3_OPOFF 10 +#define RVC_BZ_IMM_7_6_OPOFF 5 +#define RVC_BZ_IMM_2_1_OPOFF 3 +#define RVC_BZ_IMM_5_OPOFF 2 +#define RVC_BZ_IMM_SIGN_OFF 8 +#define RVC_BZ_IMM_4_3_OFF 3 +#define RVC_BZ_IMM_7_6_OFF 6 +#define RVC_BZ_IMM_2_1_OFF 1 +#define RVC_BZ_IMM_5_OFF 5 +#define RVC_BZ_IMM_SIGN_MASK GENMASK(0, 0) +#define RVC_BZ_IMM_4_3_MASK GENMASK(1, 0) +#define RVC_BZ_IMM_7_6_MASK GENMASK(1, 0) +#define RVC_BZ_IMM_2_1_MASK GENMASK(1, 0) +#define RVC_BZ_IMM_5_MASK GENMASK(0, 0) + +/* The bit field of immediate value in RVC SLLI instruction */ +#define RVC_SLLI_IMM_5_OPOFF 0 +#define RVC_SLLI_IMM_4_0_OPOFF 0 +#define RVC_SLLI_IMM_5_OFF 5 +#define RVC_SLLI_IMM_4_0_OFF 0 +#define RVC_SLLI_IMM_5_MASK GENMASK(0, 0) +#define RVC_SLLI_IMM_4_0_MASK GENMASK(4, 0) + +/* The bit field of immediate value in RVC FLDSP instruction */ +#define RVC_FLDSP_IMM_5_OPOFF 0 +#define RVC_FLDSP_IMM_4_3_OPOFF 3 +#define RVC_FLDSP_IMM_8_6_OPOFF 0 +#define RVC_FLDSP_IMM_5_OFF 5 +#define RVC_FLDSP_IMM_4_3_OFF 3 +#define RVC_FLDSP_IMM_8_6_OFF 6 +#define RVC_FLDSP_IMM_5_MASK GENMASK(0, 0) +#define RVC_FLDSP_IMM_4_3_MASK GENMASK(1, 0) +#define RVC_FLDSP_IMM_8_6_MASK GENMASK(2, 0) + +/* The bit field of immediate value in RVC LWSP instruction */ +#define RVC_LWSP_IMM_5_OPOFF 0 +#define RVC_LWSP_IMM_4_2_OPOFF 2 +#define RVC_LWSP_IMM_7_6_OPOFF 0 +#define RVC_LWSP_IMM_5_OFF 5 +#define RVC_LWSP_IMM_4_2_OFF 2 +#define RVC_LWSP_IMM_7_6_OFF 6 +#define RVC_LWSP_IMM_5_MASK GENMASK(0, 0) +#define RVC_LWSP_IMM_4_2_MASK GENMASK(2, 0) 
+#define RVC_LWSP_IMM_7_6_MASK GENMASK(1, 0) + +/* The bit field of immediate value in RVC FLWSP instruction */ +#define RVC_FLWSP_IMM_5_OPOFF 0 +#define RVC_FLWSP_IMM_4_2_OPOFF 2 +#define RVC_FLWSP_IMM_7_6_OPOFF 0 +#define RVC_FLWSP_IMM_5_OFF 5 +#define RVC_FLWSP_IMM_4_2_OFF 2 +#define RVC_FLWSP_IMM_7_6_OFF 6 +#define RVC_FLWSP_IMM_5_MASK GENMASK(0, 0) +#define RVC_FLWSP_IMM_4_2_MASK GENMASK(2, 0) +#define RVC_FLWSP_IMM_7_6_MASK GENMASK(1, 0) + +/* The bit field of immediate value in RVC LDSP instruction */ +#define RVC_LDSP_IMM_5_OPOFF 0 +#define RVC_LDSP_IMM_4_3_OPOFF 3 +#define RVC_LDSP_IMM_8_6_OPOFF 0 +#define RVC_LDSP_IMM_5_OFF 5 +#define RVC_LDSP_IMM_4_3_OFF 3 +#define RVC_LDSP_IMM_8_6_OFF 6 +#define RVC_LDSP_IMM_5_MASK GENMASK(0, 0) +#define RVC_LDSP_IMM_4_3_MASK GENMASK(1, 0) +#define RVC_LDSP_IMM_8_6_MASK GENMASK(2, 0) + +/* The bit field of immediate value in RVC FSDSP instruction */ +#define RVC_FSDSP_IMM_5_3_OPOFF 3 +#define RVC_FSDSP_IMM_8_6_OPOFF 0 +#define RVC_FSDSP_IMM_5_3_OFF 3 +#define RVC_FSDSP_IMM_8_6_OFF 6 +#define RVC_FSDSP_IMM_5_3_MASK GENMASK(2, 0) +#define RVC_FSDSP_IMM_8_6_MASK GENMASK(2, 0) + +/* The bit field of immediate value in RVC SWSP instruction */ +#define RVC_SWSP_IMM_5_2_OPOFF 3 +#define RVC_SWSP_IMM_7_6_OPOFF 0 +#define RVC_SWSP_IMM_5_2_OFF 2 +#define RVC_SWSP_IMM_7_6_OFF 6 +#define RVC_SWSP_IMM_5_2_MASK GENMASK(3, 0) +#define RVC_SWSP_IMM_7_6_MASK GENMASK(1, 0) + +/* The bit field of immediate value in RVC FSWSP instruction */ +#define RVC_FSWSP_IMM_5_2_OPOFF 3 +#define RVC_FSWSP_IMM_7_6_OPOFF 0 +#define RVC_FSWSP_IMM_5_2_OFF 2 +#define RVC_FSWSP_IMM_7_6_OFF 6 +#define RVC_FSWSP_IMM_5_2_MASK GENMASK(3, 0) +#define RVC_FSWSP_IMM_7_6_MASK GENMASK(1, 0) + +/* The bit field of immediate value in RVC SDSP instruction */ +#define RVC_SDSP_IMM_5_3_OPOFF 3 +#define RVC_SDSP_IMM_8_6_OPOFF 0 +#define RVC_SDSP_IMM_5_3_OFF 3 +#define RVC_SDSP_IMM_8_6_OFF 6 +#define RVC_SDSP_IMM_5_3_MASK GENMASK(2, 0) +#define RVC_SDSP_IMM_8_6_MASK GENMASK(2, 0) + +/* Bit fields for RVC parts */ +#define RVC_INSN_FUNCT6_MASK GENMASK(5, 0) +#define RVC_INSN_FUNCT6_OPOFF 10 +#define RVC_INSN_FUNCT4_MASK GENMASK(3, 0) +#define RVC_INSN_FUNCT4_OPOFF 12 +#define RVC_INSN_FUNCT3_MASK GENMASK(2, 0) +#define RVC_INSN_FUNCT3_OPOFF 13 +#define RVC_INSN_FUNCT2_MASK GENMASK(1, 0) +#define RVC_INSN_FUNCT2_CB_OPOFF 10 +#define RVC_INSN_FUNCT2_CA_OPOFF 5 +#define RVC_INSN_OPCODE_MASK GENMASK(1, 0) + +/* Compositions of RVC Immediates */ +#define RVC_ADDI4SPN_IMM(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_ADDI4SPN_IMM_5_4_OFF, RVC_ADDI4SPN_IMM_5_4_MASK) \ + << RVC_ADDI4SPN_IMM_5_4_OPOFF) | \ + (RV_X(_imm, RVC_ADDI4SPN_IMM_9_6_OFF, RVC_ADDI4SPN_IMM_9_6_MASK) \ + << RVC_ADDI4SPN_IMM_9_6_OPOFF) | \ + (RV_X(_imm, RVC_ADDI4SPN_IMM_2_OFF, RVC_ADDI4SPN_IMM_2_MASK) \ + << RVC_ADDI4SPN_IMM_2_OPOFF) | \ + (RV_X(_imm, RVC_ADDI4SPN_IMM_3_OFF, RVC_ADDI4SPN_IMM_3_MASK) \ + << RVC_ADDI4SPN_IMM_3_OPOFF)); }) + +#define RVC_FLD_IMM_HI(imm) \ + (RV_X(imm, RVC_FLD_IMM_5_3_OPOFF, RVC_FLD_IMM_5_3_OFF) \ + << RVC_FLD_IMM_5_3_MASK) +#define RVC_FLD_IMM_LO(imm) \ + (RV_X(imm, RVC_FLD_IMM_7_6_OPOFF, RVC_FLD_IMM_7_6_OFF) \ + << RVC_FLD_IMM_7_6_MASK) + +#define RVC_LW_IMM_HI(imm) \ + ((RV_X(imm, RVC_LW_IMM_5_3_OFF, RVC_LW_IMM_5_3_MASK) \ + << RVC_LW_IMM_5_3_OPOFF)) +#define RVC_LW_IMM_LO(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_LW_IMM_2_OFF, RVC_LW_IMM_2_MASK) \ + << RVC_LW_IMM_2_OPOFF) | \ + (RV_X(_imm, RVC_LW_IMM_6_OFF, RVC_LW_IMM_6_MASK) \ + << RVC_LW_IMM_6_OPOFF)); }) + +#define 
RVC_FLW_IMM_HI(imm) \ + ((RV_X(imm, RVC_FLW_IMM_5_3_OFF, RVC_FLW_IMM_5_3_MASK) \ + << RVC_FLW_IMM_5_3_OPOFF)) +#define RVC_FLW_IMM_LO(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_FLW_IMM_2_OFF, RVC_FLW_IMM_2_MASK) \ + << RVC_FLW_IMM_2_OPOFF) | \ + (RV_X(_imm, RVC_FLW_IMM_6_OFF, RVC_FLW_IMM_6_MASK) \ + << RVC_FLW_IMM_6_OPOFF)); }) + +#define RVC_LD_IMM_HI(imm) \ + (RV_X(imm, RVC_LD_IMM_5_3_OPOFF, RVC_LD_IMM_5_3_OFF) \ + << RVC_LD_IMM_5_3_MASK) +#define RVC_LD_IMM_LO(imm) \ + (RV_X(imm, RVC_LD_IMM_7_6_OPOFF, RVC_LD_IMM_7_6_OFF) \ + << RVC_LD_IMM_7_6_MASK) + +#define RVC_FSD_IMM_HI(imm) \ + (RV_X(imm, RVC_FSD_IMM_5_3_OPOFF, RVC_FSD_IMM_5_3_OFF) \ + << RVC_FSD_IMM_5_3_MASK) +#define RVC_FSD_IMM_LO(imm) \ + (RV_X(imm, RVC_FSD_IMM_7_6_OPOFF, RVC_FSD_IMM_7_6_OFF) \ + << RVC_FSD_IMM_7_6_MASK) + +#define RVC_SW_IMM_HI(imm) \ + (RV_X(imm, RVC_SW_IMM_5_3_OFF, RVC_SW_IMM_5_3_MASK) \ + << RVC_SW_IMM_5_3_OPOFF) +#define RVC_SW_IMM_LO(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_SW_IMM_2_OFF, RVC_SW_IMM_2_MASK) \ + << RVC_SW_IMM_2_OPOFF) | \ + (RV_X(_imm, RVC_SW_IMM_6_OFF, RVC_SW_IMM_6_MASK) \ + << RVC_SW_IMM_6_OPOFF)); }) + +#define RVC_FSW_IMM_HI(imm) \ + (RV_X(imm, RVC_FSW_IMM_5_3_OFF, RVC_FSW_IMM_5_3_MASK) \ + << RVC_FSW_IMM_5_3_OPOFF) +#define RVC_FSW_IMM_LO(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_FSW_IMM_2_OFF, RVC_FSW_IMM_2_MASK) \ + << RVC_FSW_IMM_2_OPOFF) | \ + (RV_X(_imm, RVC_FSW_IMM_6_OFF, RVC_FSW_IMM_6_MASK) \ + << RVC_FSW_IMM_6_OPOFF)); }) + +#define RVC_SD_IMM_HI(imm) \ + (RV_X(imm, RVC_SD_IMM_5_3_OPOFF, RVC_SD_IMM_5_3_OFF) \ + << RVC_SD_IMM_5_3_MASK) +#define RVC_SD_IMM_LO(imm) \ + (RV_X(imm, RVC_SD_IMM_7_6_OPOFF, RVC_SD_IMM_7_6_OFF) \ + << RVC_SD_IMM_7_6_MASK) + +#define RVC_ADDI_IMM_HI(imm) \ + (RV_X(imm, RVC_ADDI_IMM_5_OPOFF, RVC_ADDI_IMM_5_OFF) \ + << RVC_ADDI_IMM_5_MASK) +#define RVC_ADDI_IMM_LO(imm) \ + (RV_X(imm, RVC_ADDI_IMM_4_0_OPOFF, RVC_ADDI_IMM_4_0_OFF) \ + << RVC_ADDI_IMM_4_0_MASK) + +#define RVC_JAL_IMM(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_JAL_IMM_SIGN_OPOFF, RVC_JAL_IMM_SIGN_OFF) \ + << RVC_JAL_IMM_SIGN_MASK) | \ + (RV_X(_imm, RVC_JAL_IMM_4_OPOFF, RVC_JAL_IMM_4_OFF) \ + << RVC_JAL_IMM_4_MASK) | \ + (RV_X(_imm, RVC_JAL_IMM_9_8_OPOFF, RVC_JAL_IMM_9_8_OFF) \ + << RVC_JAL_IMM_9_8_MASK) | \ + (RV_X(_imm, RVC_JAL_IMM_10_OPOFF, RVC_JAL_IMM_10_OFF) \ + << RVC_JAL_IMM_10_MASK) | \ + (RV_X(_imm, RVC_JAL_IMM_6_OPOFF, RVC_JAL_IMM_6_OFF) \ + << RVC_JAL_IMM_6_MASK) | \ + (RV_X(_imm, RVC_JAL_IMM_7_OPOFF, RVC_JAL_IMM_7_OFF) \ + << RVC_JAL_IMM_7_MASK) | \ + (RV_X(_imm, RVC_JAL_IMM_3_1_OPOFF, RVC_JAL_IMM_3_1_OFF) \ + << RVC_JAL_IMM_3_1_MASK) | \ + (RV_X(_imm, RVC_JAL_IMM_5_OPOFF, RVC_JAL_IMM_5_OFF) \ + << RVC_JAL_IMM_5_MASK)); }) + +#define RVC_ADDIW_IMM_HI(imm) \ + (RV_X(imm, RVC_ADDIW_IMM_5_OPOFF, RVC_ADDIW_IMM_5_OFF) \ + << RVC_ADDIW_IMM_5_MASK) +#define RVC_ADDIW_IMM_LO(imm) \ + (RV_X(imm, RVC_ADDIW_IMM_4_0_OPOFF, RVC_ADDIW_IMM_4_0_OFF) \ + << RVC_ADDIW_IMM_4_0_MASK) + +#define RVC_LI_IMM_HI(imm) \ + (RV_X(imm, RVC_LI_IMM_5_OPOFF, RVC_LI_IMM_5_OFF) \ + << RVC_LI_IMM_5_MASK) +#define RVC_LI_IMM_LO(imm) \ + (RV_X(imm, RVC_LI_IMM_4_0_OPOFF, RVC_LI_IMM_4_0_OFF) \ + << RVC_LI_IMM_4_0_MASK) + +#define RVC_ADDI16SP_IMM_HI(imm) \ + (RV_X(imm, RVC_ADDI16SP_IMM_9_OFF, RVC_ADDI16SP_IMM_9_MASK) \ + << RVC_ADDI16SP_IMM_9_OPOFF) +#define RVC_ADDI16SP_IMM_LO(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_ADDI16SP_IMM_4_OFF, RVC_ADDI16SP_IMM_4_MASK) \ + << RVC_ADDI16SP_IMM_4_OPOFF) | \ + (RV_X(_imm, RVC_ADDI16SP_IMM_6_OFF, 
RVC_ADDI16SP_IMM_6_MASK) \ + << RVC_ADDI16SP_IMM_4_OPOFF) | \ + (RV_X(_imm, RVC_ADDI16SP_IMM_5_OFF, RVC_ADDI16SP_IMM_5_MASK) \ + << RVC_ADDI16SP_IMM_4_OPOFF) | \ + (RV_X(_imm, RVC_ADDI16SP_IMM_8_7_OFF, RVC_ADDI16SP_IMM_8_7_MASK) \ + << RVC_ADDI16SP_IMM_4_OPOFF)); }) + +#define RVC_LUI_IMM_HI(imm) \ + (RV_X(imm, RVC_LUI_IMM_17_OPOFF, RVC_LUI_IMM_17_OFF) \ + << RVC_LUI_IMM_17_MASK) +#define RVC_LUI_IMM_LO(imm) \ + (RV_X(imm, RVC_LUI_IMM_16_12_OPOFF, RVC_LUI_IMM_16_12_OFF) \ + << RVC_LUI_IMM_16_12_MASK) + +#define RVC_SRLI_IMM_HI(imm) \ + (RV_X(imm, RVC_SRLI_IMM_5_OPOFF, RVC_SRLI_IMM_5_OFF) \ + << RVC_SRLI_IMM_5_MASK) +#define RVC_SRLI_IMM_LO(imm) \ + (RV_X(imm, RVC_SRLI_IMM_4_0_OPOFF, RVC_SRLI_IMM_4_0_OFF) \ + << RVC_SRLI_IMM_4_0_MASK) + +#define RVC_SRAI_IMM_HI(imm) \ + (RV_X(imm, RVC_SRAI_IMM_5_OPOFF, RVC_SRAI_IMM_5_OFF) \ + << RVC_SRAI_IMM_5_MASK) +#define RVC_SRAI_IMM_LO(imm) \ + (RV_X(imm, RVC_SRAI_IMM_4_0_OPOFF, RVC_SRAI_IMM_4_0_OFF) \ + << RVC_SRAI_IMM_4_0_MASK) + +#define RVC_ANDI_IMM_HI(imm) \ + (RV_X(imm, RVC_ANDI_IMM_5_OPOFF, RVC_ANDI_IMM_5_OFF) \ + << RVC_ANDI_IMM_5_MASK) +#define RVC_ANDI_IMM_LO(imm) \ + (RV_X(imm, RVC_ANDI_IMM_4_0_OPOFF, RVC_ANDI_IMM_4_0_OFF) \ + << RVC_ANDI_IMM_4_0_MASK) + +#define RVC_J_IMM(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_J_IMM_SIGN_OPOFF, RVC_J_IMM_SIGN_OFF) \ + << RVC_J_IMM_SIGN_MASK) | \ + (RV_X(_imm, RVC_J_IMM_4_OPOFF, RVC_J_IMM_4_OFF) \ + << RVC_J_IMM_4_MASK) | \ + (RV_X(_imm, RVC_J_IMM_9_8_OPOFF, RVC_J_IMM_9_8_OFF) \ + << RVC_J_IMM_9_8_MASK) | \ + (RV_X(_imm, RVC_J_IMM_10_OPOFF, RVC_J_IMM_10_OFF) \ + << RVC_J_IMM_10_MASK) | \ + (RV_X(_imm, RVC_J_IMM_6_OPOFF, RVC_J_IMM_6_OFF) \ + << RVC_J_IMM_6_MASK) | \ + (RV_X(_imm, RVC_J_IMM_7_OPOFF, RVC_J_IMM_7_OFF) \ + << RVC_J_IMM_7_MASK) | \ + (RV_X(_imm, RVC_J_IMM_3_1_OPOFF, RVC_J_IMM_3_1_OFF) \ + << RVC_J_IMM_3_1_MASK) | \ + (RV_X(_imm, RVC_J_IMM_5_OPOFF, RVC_J_IMM_5_OFF) \ + << RVC_J_IMM_5_MASK)); }) + +#define RVC_BEQZ_IMM_HI(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_BZ_IMM_SIGN_OPOFF, RVC_BZ_IMM_SIGN_OFF) \ + << RVC_BZ_IMM_SIGN_MASK) | \ + (RV_X(_imm, RVC_BZ_IMM_4_3_OPOFF, RVC_BZ_IMM_4_3_OFF) \ + << RVC_BZ_IMM_4_3_MASK)); }) +#define RVC_BEQZ_IMM_LO(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_BZ_IMM_7_6_OPOFF, RVC_BZ_IMM_7_6_OFF) \ + << RVC_BZ_IMM_7_6_MASK) | \ + (RV_X(_imm, RVC_BZ_IMM_2_1_OPOFF, RVC_BZ_IMM_2_1_OFF) \ + << RVC_BZ_IMM_2_1_MASK) | \ + (RV_X(_imm, RVC_BZ_IMM_5_OPOFF, RVC_BZ_IMM_5_OFF) \ + << RVC_BZ_IMM_5_MASK)); }) + +#define RVC_BNEZ_IMM_HI(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_BZ_IMM_SIGN_OPOFF, RVC_BZ_IMM_SIGN_OFF) \ + << RVC_BZ_IMM_SIGN_MASK) | \ + (RV_X(_imm, RVC_BZ_IMM_4_3_OPOFF, RVC_BZ_IMM_4_3_OFF) \ + << RVC_BZ_IMM_4_3_MASK)); }) +#define RVC_BNEZ_IMM_LO(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_BZ_IMM_7_6_OPOFF, RVC_BZ_IMM_7_6_OFF) \ + << RVC_BZ_IMM_7_6_MASK) | \ + (RV_X(_imm, RVC_BZ_IMM_2_1_OPOFF, RVC_BZ_IMM_2_1_OFF) \ + << RVC_BZ_IMM_2_1_MASK) | \ + (RV_X(_imm, RVC_BZ_IMM_5_OPOFF, RVC_BZ_IMM_5_OFF) \ + << RVC_BZ_IMM_5_MASK)); }) + +#define RVC_SLLI_IMM_HI(imm) \ + (RV_X(imm, RVC_SLLI_IMM_5_OFF, RVC_SLLI_IMM_5_MASK) \ + << RVC_SLLI_IMM_5_OPOFF) +#define RVC_SLLI_IMM_LO(imm) \ + (RV_X(imm, RVC_SLLI_IMM_4_0_OFF, RVC_SLLI_IMM_4_0_MASK) \ + << RVC_SLLI_IMM_4_0_OPOFF) + +#define RVC_FLDSP_IMM_HI(imm) \ + (RV_X(imm, RVC_FLDSP_IMM_5_OFF, RVC_FLDSP_IMM_5_MASK) \ + << RVC_FLDSP_IMM_5_OPOFF) +#define RVC_FLDSP_IMM_LO(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_FLDSP_IMM_4_3_OFF, 
RVC_FLDSP_IMM_4_3_MASK) \ + << RVC_FLDSP_IMM_4_3_OPOFF) | \ + (RV_X(_imm, RVC_FLDSP_IMM_8_6_OFF, RVC_FLDSP_IMM_8_6_MASK) \ + << RVC_FLDSP_IMM_8_6_OPOFF)); }) + +#define RVC_LWSP_IMM_HI(imm) \ + (RV_X(imm, RVC_LWSP_IMM_5_OFF, RVC_LWSP_IMM_5_MASK) \ + << RVC_LWSP_IMM_5_OPOFF) +#define RVC_LWSP_IMM_LO(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_LWSP_IMM_4_2_OFF, RVC_LWSP_IMM_4_2_MASK) \ + << RVC_LWSP_IMM_4_2_OPOFF) | \ + (RV_X(_imm, RVC_LWSP_IMM_7_6_OFF, RVC_LWSP_IMM_7_6_MASK) \ + << RVC_LWSP_IMM_7_6_OPOFF)); }) + +#define RVC_FLWSP_IMM_HI(imm) \ + (RV_X(imm, RVC_FLWSP_IMM_5_OFF, RVC_FLWSP_IMM_5_MASK) \ + << RVC_FLWSP_IMM_5_OPOFF) +#define RVC_FLWSP_IMM_LO(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_FLWSP_IMM_4_2_OFF, RVC_FLWSP_IMM_4_2_MASK) \ + << RVC_FLWSP_IMM_4_2_OPOFF) | \ + (RV_X(_imm, RVC_FLWSP_IMM_7_6_OFF, RVC_FLWSP_IMM_7_6_MASK) \ + << RVC_FLWSP_IMM_7_6_OPOFF)); }) + +#define RVC_LDSP_IMM_HI(imm) \ + (RV_X(imm, RVC_LDSP_IMM_5_OPOFF, RVC_LDSP_IMM_5_OFF) \ + << RVC_LDSP_IMM_5_MASK) +#define RVC_LDSP_IMM_LO(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_LDSP_IMM_4_3_OPOFF, RVC_LDSP_IMM_4_3_OFF) \ + << RVC_LDSP_IMM_4_3_MASK) | \ + (RV_X(_imm, RVC_LDSP_IMM_8_6_OPOFF, RVC_LDSP_IMM_8_6_OFF) \ + << RVC_LDSP_IMM_8_6_MASK)); }) + +#define RVC_FSDSP_IMM(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_FSDSP_IMM_5_3_OPOFF, RVC_FSDSP_IMM_5_3_OFF) \ + << RVC_FSDSP_IMM_5_3_MASK) | \ + (RV_X(_imm, RVC_FSDSP_IMM_8_6_OPOFF, RVC_FSDSP_IMM_8_6_OFF) \ + << RVC_FSDSP_IMM_8_6_MASK)); }) + +#define RVC_SWSP_IMM(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_SWSP_IMM_5_2_OPOFF, RVC_SWSP_IMM_5_2_MASK) \ + << RVC_SWSP_IMM_5_2_OPOFF) | \ + (RV_X(_imm, RVC_SWSP_IMM_7_6_OPOFF, RVC_SWSP_IMM_7_6_MASK) \ + << RVC_SWSP_IMM_7_6_OPOFF)); }) + +#define RVC_FSWSP_IMM(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_FSWSP_IMM_5_2_OPOFF, RVC_FSWSP_IMM_5_2_MASK) \ + << RVC_FSWSP_IMM_5_2_OPOFF) | \ + (RV_X(_imm, RVC_FSWSP_IMM_7_6_OPOFF, RVC_FSWSP_IMM_7_6_MASK) \ + << RVC_FSWSP_IMM_7_6_OPOFF)); }) + +#define RVC_SDSP_IMM(imm) \ + ({ typeof(imm) _imm = imm; \ + ((RV_X(_imm, RVC_SDSP_IMM_5_3_OPOFF, RVC_SDSP_IMM_5_3_OFF) \ + << RVC_SDSP_IMM_5_3_MASK) | \ + (RV_X(_imm, RVC_SDSP_IMM_8_6_OPOFF, RVC_SDSP_IMM_8_6_OFF) \ + << RVC_SDSP_IMM_8_6_MASK)); }) /* The register offset in RVC op=C0 instruction */ #define RVC_C0_RS1_OPOFF 7 @@ -130,136 +804,1099 @@ #define RVC_C2_RS2_OPOFF 2 #define RVC_C2_RD_OPOFF 7 -/* parts of opcode for RVG*/ -#define RVG_OPCODE_FENCE 0x0f -#define RVG_OPCODE_AUIPC 0x17 -#define RVG_OPCODE_BRANCH 0x63 -#define RVG_OPCODE_JALR 0x67 -#define RVG_OPCODE_JAL 0x6f -#define RVG_OPCODE_SYSTEM 0x73 -#define RVG_SYSTEM_CSR_OFF 20 -#define RVG_SYSTEM_CSR_MASK GENMASK(12, 0) +/* RVC RD definitions */ +#define RVC_RD_CR(insn) (((insn) >> RVC_C2_RD_OPOFF) & RV_STANDARD_REG_MASK) +#define RVC_RD_CI(insn) (((insn) >> RVC_C2_RD_OPOFF) & RV_STANDARD_REG_MASK) +#define RVC_RD_CIW(insn)(((insn) >> RVC_C0_RD_OPOFF) & RV_COMPRESSED_REG_MASK) +#define RVC_RD_CL(insn) (((insn) >> RVC_C0_RD_OPOFF) & RV_COMPRESSED_REG_MASK) +#define RVC_RD_CA(insn) (((insn) >> RVC_C2_RD_OPOFF) & RV_COMPRESSED_REG_MASK) +#define RVC_RD_CB(insn) (((insn) >> RVC_C2_RD_OPOFF) & RV_COMPRESSED_REG_MASK) -/* parts of opcode for RVF, RVD and RVQ */ -#define RVFDQ_FL_FS_WIDTH_OFF 12 -#define RVFDQ_FL_FS_WIDTH_MASK GENMASK(3, 0) -#define RVFDQ_FL_FS_WIDTH_W 2 -#define RVFDQ_FL_FS_WIDTH_D 3 -#define RVFDQ_LS_FS_WIDTH_Q 4 -#define RVFDQ_OPCODE_FL 0x07 -#define RVFDQ_OPCODE_FS 0x27 - -/* parts of 
opcode for RVV */ -#define RVV_OPCODE_VECTOR 0x57 -#define RVV_VL_VS_WIDTH_8 0 -#define RVV_VL_VS_WIDTH_16 5 -#define RVV_VL_VS_WIDTH_32 6 -#define RVV_VL_VS_WIDTH_64 7 -#define RVV_OPCODE_VL RVFDQ_OPCODE_FL -#define RVV_OPCODE_VS RVFDQ_OPCODE_FS +/* Special opcodes */ +#define RVG_OPCODE_SYSTEM 0b1110011 +#define RVG_OPCODE_NOP 0b0010011 +#define RVG_OPCODE_BRANCH 0b1100011 +/* RVG opcodes */ +#define RVG_OPCODE_LUI 0b0110111 +#define RVG_OPCODE_AUIPC 0b0010111 +#define RVG_OPCODE_JAL 0b1101111 +#define RVG_OPCODE_JALR 0b1100111 +#define RVG_OPCODE_BEQ 0b1100011 +#define RVG_OPCODE_BNE 0b1100011 +#define RVG_OPCODE_BLT 0b1100011 +#define RVG_OPCODE_BGE 0b1100011 +#define RVG_OPCODE_BLTU 0b1100011 +#define RVG_OPCODE_BGEU 0b1100011 +#define RVG_OPCODE_LB 0b0000011 +#define RVG_OPCODE_LH 0b0000011 +#define RVG_OPCODE_LW 0b0000011 +#define RVG_OPCODE_LBU 0b0000011 +#define RVG_OPCODE_LHU 0b0000011 +#define RVG_OPCODE_SB 0b0100011 +#define RVG_OPCODE_SH 0b0100011 +#define RVG_OPCODE_SW 0b0100011 +#define RVG_OPCODE_ADDI 0b0010011 +#define RVG_OPCODE_SLTI 0b0010011 +#define RVG_OPCODE_SLTIU 0b0010011 +#define RVG_OPCODE_XORI 0b0010011 +#define RVG_OPCODE_ORI 0b0010011 +#define RVG_OPCODE_ANDI 0b0010011 +#define RVG_OPCODE_SLLI 0b0010011 +#define RVG_OPCODE_SRLI 0b0010011 +#define RVG_OPCODE_SRAI 0b0010011 +#define RVG_OPCODE_ADD 0b0110011 +#define RVG_OPCODE_SUB 0b0110011 +#define RVG_OPCODE_SLL 0b0110011 +#define RVG_OPCODE_SLT 0b0110011 +#define RVG_OPCODE_SLTU 0b0110011 +#define RVG_OPCODE_XOR 0b0110011 +#define RVG_OPCODE_SRL 0b0110011 +#define RVG_OPCODE_SRA 0b0110011 +#define RVG_OPCODE_OR 0b0110011 +#define RVG_OPCODE_AND 0b0110011 +#define RVG_OPCODE_FENCE 0b0001111 +#define RVG_OPCODE_FENCETSO 0b0001111 +#define RVG_OPCODE_PAUSE 0b0001111 +#define RVG_OPCODE_ECALL 0b1110011 +#define RVG_OPCODE_EBREAK 0b1110011 +/* F Standard Extension */ +#define RVG_OPCODE_FLW 0b0000111 +#define RVG_OPCODE_FSW 0b0100111 +/* D Standard Extension */ +#define RVG_OPCODE_FLD 0b0000111 +#define RVG_OPCODE_FSD 0b0100111 +/* Q Standard Extension */ +#define RVG_OPCODE_FLQ 0b0000111 +#define RVG_OPCODE_FSQ 0b0100111 +/* Zicsr Standard Extension */ +#define RVG_OPCODE_CSRRW 0b1110011 +#define RVG_OPCODE_CSRRS 0b1110011 +#define RVG_OPCODE_CSRRC 0b1110011 +#define RVG_OPCODE_CSRRWI 0b1110011 +#define RVG_OPCODE_CSRRSI 0b1110011 +#define RVG_OPCODE_CSRRCI 0b1110011 +/* M Standard Extension */ +#define RVG_OPCODE_MUL 0b0110011 +#define RVG_OPCODE_MULH 0b0110011 +#define RVG_OPCODE_MULHSU 0b0110011 +#define RVG_OPCODE_MULHU 0b0110011 +#define RVG_OPCODE_DIV 0b0110011 +#define RVG_OPCODE_DIVU 0b0110011 +#define RVG_OPCODE_REM 0b0110011 +#define RVG_OPCODE_REMU 0b0110011 +/* A Standard Extension */ +#define RVG_OPCODE_LR_W 0b0101111 +#define RVG_OPCODE_SC_W 0b0101111 +#define RVG_OPCODE_AMOSWAP_W 0b0101111 +#define RVG_OPCODE_AMOADD_W 0b0101111 +#define RVG_OPCODE_AMOXOR_W 0b0101111 +#define RVG_OPCODE_AMOAND_W 0b0101111 +#define RVG_OPCODE_AMOOR_W 0b0101111 +#define RVG_OPCODE_AMOMIN_W 0b0101111 +#define RVG_OPCODE_AMOMAX_W 0b0101111 +#define RVG_OPCODE_AMOMINU_W 0b0101111 +#define RVG_OPCODE_AMOMAXU_W 0b0101111 +/* Vector Extension */ +#define RVV_OPCODE_VECTOR 0b1010111 +#define RVV_OPCODE_VL RVG_OPCODE_FLW +#define RVV_OPCODE_VS RVG_OPCODE_FSW + +/* RVG 64-bit only opcodes */ +#define RVG_OPCODE_LWU 0b0000011 +#define RVG_OPCODE_LD 0b0000011 +#define RVG_OPCODE_SD 0b0100011 +#define RVG_OPCODE_ADDIW 0b0011011 +#define RVG_OPCODE_SLLIW 0b0011011 +#define RVG_OPCODE_SRLIW 0b0011011 +#define RVG_OPCODE_SRAIW 
0b0011011 +#define RVG_OPCODE_ADDW 0b0111011 +#define RVG_OPCODE_SUBW 0b0111011 +#define RVG_OPCODE_SLLW 0b0111011 +#define RVG_OPCODE_SRLW 0b0111011 +#define RVG_OPCODE_SRAW 0b0111011 +/* M Standard Extension */ +#define RVG_OPCODE_MULW 0b0111011 +#define RVG_OPCODE_DIVW 0b0111011 +#define RVG_OPCODE_DIVUW 0b0111011 +#define RVG_OPCODE_REMW 0b0111011 +#define RVG_OPCODE_REMUW 0b0111011 +/* A Standard Extension */ +#define RVG_OPCODE_LR_D 0b0101111 +#define RVG_OPCODE_SC_D 0b0101111 +#define RVG_OPCODE_AMOSWAP_D 0b0101111 +#define RVG_OPCODE_AMOADD_D 0b0101111 +#define RVG_OPCODE_AMOXOR_D 0b0101111 +#define RVG_OPCODE_AMOAND_D 0b0101111 +#define RVG_OPCODE_AMOOR_D 0b0101111 +#define RVG_OPCODE_AMOMIN_D 0b0101111 +#define RVG_OPCODE_AMOMAX_D 0b0101111 +#define RVG_OPCODE_AMOMINU_D 0b0101111 +#define RVG_OPCODE_AMOMAXU_D 0b0101111 + +/* RVG func3 codes */ +#define RVG_FUNCT3_JALR 0b000 +#define RVG_FUNCT3_BEQ 0b000 +#define RVG_FUNCT3_BNE 0b001 +#define RVG_FUNCT3_BLT 0b100 +#define RVG_FUNCT3_BGE 0b101 +#define RVG_FUNCT3_BLTU 0b110 +#define RVG_FUNCT3_BGEU 0b111 +#define RVG_FUNCT3_LB 0b000 +#define RVG_FUNCT3_LH 0b001 +#define RVG_FUNCT3_LW 0b010 +#define RVG_FUNCT3_LBU 0b100 +#define RVG_FUNCT3_LHU 0b101 +#define RVG_FUNCT3_SB 0b000 +#define RVG_FUNCT3_SH 0b001 +#define RVG_FUNCT3_SW 0b010 +#define RVG_FUNCT3_ADDI 0b000 +#define RVG_FUNCT3_SLTI 0b010 +#define RVG_FUNCT3_SLTIU 0b011 +#define RVG_FUNCT3_XORI 0b100 +#define RVG_FUNCT3_ORI 0b110 +#define RVG_FUNCT3_ANDI 0b111 +#define RVG_FUNCT3_SLLI 0b001 +#define RVG_FUNCT3_SRLI 0b101 +#define RVG_FUNCT3_SRAI 0b101 +#define RVG_FUNCT3_ADD 0b000 +#define RVG_FUNCT3_SUB 0b000 +#define RVG_FUNCT3_SLL 0b001 +#define RVG_FUNCT3_SLT 0b010 +#define RVG_FUNCT3_SLTU 0b011 +#define RVG_FUNCT3_XOR 0b100 +#define RVG_FUNCT3_SRL 0b101 +#define RVG_FUNCT3_SRA 0b101 +#define RVG_FUNCT3_OR 0b110 +#define RVG_FUNCT3_AND 0b111 +#define RVG_FUNCT3_NOP RVG_FUNCT3_ADDI +#define RVG_FUNCT3_FENCE 0b000 +#define RVG_FUNCT3_FENCETSO 0b000 +#define RVG_FUNCT3_PAUSE 0b000 +#define RVG_FUNCT3_ECALL 0b000 +#define RVG_FUNCT3_EBREAK 0b000 +/* F Standard Extension */ +#define RVG_FUNCT3_FLW 0b010 +#define RVG_FUNCT3_FSW 0b010 +/* D Standard Extension */ +#define RVG_FUNCT3_FLD 0b011 +#define RVG_FUNCT3_FSD 0b011 +/* Q Standard Extension */ +#define RVG_FUNCT3_FLQ 0b100 +#define RVG_FUNCT3_FSQ 0b100 +/* Zicsr Standard Extension */ +#define RVG_FUNCT3_CSRRW 0b001 +#define RVG_FUNCT3_CSRRS 0b010 +#define RVG_FUNCT3_CSRRC 0b011 +#define RVG_FUNCT3_CSRRWI 0b101 +#define RVG_FUNCT3_CSRRSI 0b110 +#define RVG_FUNCT3_CSRRCI 0b111 +/* M Standard Extension */ +#define RVG_FUNCT3_MUL 0b000 +#define RVG_FUNCT3_MULH 0b001 +#define RVG_FUNCT3_MULHSU 0b010 +#define RVG_FUNCT3_MULHU 0b011 +#define RVG_FUNCT3_DIV 0b100 +#define RVG_FUNCT3_DIVU 0b101 +#define RVG_FUNCT3_REM 0b110 +#define RVG_FUNCT3_REMU 0b111 +/* A Standard Extension */ +#define RVG_FUNCT3_LR_W 0b010 +#define RVG_FUNCT3_SC_W 0b010 +#define RVG_FUNCT3_AMOSWAP_W 0b010 +#define RVG_FUNCT3_AMOADD_W 0b010 +#define RVG_FUNCT3_AMOXOR_W 0b010 +#define RVG_FUNCT3_AMOAND_W 0b010 +#define RVG_FUNCT3_AMOOR_W 0b010 +#define RVG_FUNCT3_AMOMIN_W 0b010 +#define RVG_FUNCT3_AMOMAX_W 0b010 +#define RVG_FUNCT3_AMOMINU_W 0b010 +#define RVG_FUNCT3_AMOMAXU_W 0b010 + +/* RVG 64-bit only func3 codes */ +#define RVG_FUNCT3_LWU 0b110 +#define RVG_FUNCT3_LD 0b011 +#define RVG_FUNCT3_SD 0b011 +#define RVG_FUNCT3_ADDIW 0b000 +#define RVG_FUNCT3_SLLIW 0b001 +#define RVG_FUNCT3_SRLIW 0b101 +#define RVG_FUNCT3_SRAIW 0b101 +#define RVG_FUNCT3_ADDW 0b000 
+#define RVG_FUNCT3_SUBW 0b000 +#define RVG_FUNCT3_SLLW 0b001 +#define RVG_FUNCT3_SRLW 0b101 +#define RVG_FUNCT3_SRAW 0b101 +/* M Standard Extension */ +#define RVG_FUNCT3_MULW 0b000 +#define RVG_FUNCT3_DIVW 0b100 +#define RVG_FUNCT3_DIVUW 0b101 +#define RVG_FUNCT3_REMW 0b110 +#define RVG_FUNCT3_REMUW 0b111 +/* A Standard Extension */ +#define RVG_FUNCT3_LR_D 0b011 +#define RVG_FUNCT3_SC_D 0b011 +#define RVG_FUNCT3_AMOSWAP_D 0b011 +#define RVG_FUNCT3_AMOADD_D 0b011 +#define RVG_FUNCT3_AMOXOR_D 0b011 +#define RVG_FUNCT3_AMOAND_D 0b011 +#define RVG_FUNCT3_AMOOR_D 0b011 +#define RVG_FUNCT3_AMOMIN_D 0b011 +#define RVG_FUNCT3_AMOMAX_D 0b011 +#define RVG_FUNCT3_AMOMINU_D 0b011 +#define RVG_FUNCT3_AMOMAXU_D 0b011 + +#if __riscv_xlen == 32 +/* RV-32 Shift Instruction Upper Bits */ +#define RVG_SLLI_UPPER 0b0000000 +#define RVG_SRLI_UPPER 0b0000000 +#define RVG_SRAI_UPPER 0b0100000 +#elif __riscv_xlen == 64 +/* RV-64 Shift Instruction Upper Bits */ +#define RVG_SLLI_UPPER 0b000000 +#define RVG_SRLI_UPPER 0b000000 +#define RVG_SRAI_UPPER 0b010000 +#endif /* __riscv_xlen */ + +/* RVG funct5 codes */ +/* A Standard Extension */ +#define RVG_FUNCT5_LR_W 0b00010 +#define RVG_FUNCT5_SC_W 0b00011 +#define RVG_FUNCT5_AMOSWAP_W 0b00001 +#define RVG_FUNCT5_AMOADD_W 0b00000 +#define RVG_FUNCT5_AMOXOR_W 0b00100 +#define RVG_FUNCT5_AMOAND_W 0b01100 +#define RVG_FUNCT5_AMOOR_W 0b01000 +#define RVG_FUNCT5_AMOMIN_W 0b10000 +#define RVG_FUNCT5_AMOMAX_W 0b10100 +#define RVG_FUNCT5_AMOMINU_W 0b11000 +#define RVG_FUNCT5_AMOMAXU_W 0b11100 -/* parts of opcode for RVC*/ +/* RVG 64-bit only funct5 codes */ +/* A Standard Extension */ +#define RVG_FUNCT5_LR_D 0b00010 +#define RVG_FUNCT5_SC_D 0b00011 +#define RVG_FUNCT5_AMOSWAP_D 0b00001 +#define RVG_FUNCT5_AMOADD_D 0b00000 +#define RVG_FUNCT5_AMOXOR_D 0b00100 +#define RVG_FUNCT5_AMOAND_D 0b01100 +#define RVG_FUNCT5_AMOOR_D 0b01000 +#define RVG_FUNCT5_AMOMIN_D 0b10000 +#define RVG_FUNCT5_AMOMAX_D 0b10100 +#define RVG_FUNCT5_AMOMINU_D 0b11000 +#define RVG_FUNCT5_AMOMAXU_D 0b11100 + +/* RVG funct7 codes */ +#define RVG_FUNCT7_SLLI 0b0000000 +#define RVG_FUNCT7_SRLI 0b0000000 +#define RVG_FUNCT7_SRAI 0b0100000 +#define RVG_FUNCT7_ADD 0b0000000 +#define RVG_FUNCT7_SUB 0b0100000 +#define RVG_FUNCT7_SLL 0b0000000 +#define RVG_FUNCT7_SLT 0b0000000 +#define RVG_FUNCT7_SLTU 0b0000000 +#define RVG_FUNCT7_XOR 0b0000000 +#define RVG_FUNCT7_SRL 0b0000000 +#define RVG_FUNCT7_SRA 0b0100000 +#define RVG_FUNCT7_OR 0b0000000 +#define RVG_FUNCT7_AND 0b0000000 +/* M Standard Extension */ +#define RVG_FUNCT7_MUL 0b0000001 +#define RVG_FUNCT7_MULH 0b0000001 +#define RVG_FUNCT7_MULHSU 0b0000001 +#define RVG_FUNCT7_MULHU 0b0000001 +#define RVG_FUNCT7_DIV 0b0000001 +#define RVG_FUNCT7_DIVU 0b0000001 +#define RVG_FUNCT7_REM 0b0000001 +#define RVG_FUNCT7_REMU 0b0000001 + +/* RVG 64-bit only funct7 codes */ +#define RVG_FUNCT7_SLLIW 0b0000000 +#define RVG_FUNCT7_SRLIW 0b0000000 +#define RVG_FUNCT7_SRAIW 0b0100000 +#define RVG_FUNCT7_ADDW 0b0000000 +#define RVG_FUNCT7_SUBW 0b0100000 +#define RVG_FUNCT7_SLLW 0b0000000 +#define RVG_FUNCT7_SRLW 0b0000000 +#define RVG_FUNCT7_SRAW 0b0100000 +/* M Standard Extension */ +#define RVG_FUNCT7_MULW 0b0000001 +#define RVG_FUNCT7_DIVW 0b0000001 +#define RVG_FUNCT7_DIVUW 0b0000001 +#define RVG_FUNCT7_REMW 0b0000001 +#define RVG_FUNCT7_REMUW 0b0000001 + +/* RVG funct12 codes */ +#define RVG_FUNCT12_ECALL 0b000000000000 +#define RVG_FUNCT12_EBREAK 0b000000000001 + +/* RVG instruction match types */ +#define RVG_MATCH_R(f_) \ + (RVG_FUNCT7_##f_ << RV_INSN_FUNCT7_OPOFF 
| \ + RVG_FUNCT3_##f_ << RV_INSN_FUNCT3_OPOFF | RVG_OPCODE_##f_) +#define RVG_MATCH_I(f_) \ + (RVG_FUNCT3_##f_ << RV_INSN_FUNCT3_OPOFF | RVG_OPCODE_##f_) +#define RVG_MATCH_S(f_) \ + (RVG_FUNCT3_##f_ << RV_INSN_FUNCT3_OPOFF | RVG_OPCODE_##f_) +#define RVG_MATCH_B(f_) \ + (RVG_FUNCT3_##f_ << RV_INSN_FUNCT3_OPOFF | RVG_OPCODE_##f_) +#define RVG_MATCH_U(f_) (RVG_OPCODE_##f_) +#define RVG_MATCH_J(f_) (RVG_OPCODE_##f_) +#define RVG_MATCH_AMO(f_) \ + (RVG_FUNCT5_##f_ << RV_INSN_FUNCT7_OPOFF | \ + RVG_FUNCT3_##f_ << RV_INSN_FUNCT3_OPOFF | RVG_OPCODE_##f_) + +/* RVG instruction matches */ +#define RVG_MATCH_LUI (RVG_MATCH_U(LUI)) +#define RVG_MATCH_AUIPC (RVG_MATCH_U(AUIPC)) +#define RVG_MATCH_JAL (RVG_MATCH_J(JAL)) +#define RVG_MATCH_JALR (RVG_MATCH_I(JALR)) +#define RVG_MATCH_BEQ (RVG_MATCH_B(BEQ)) +#define RVG_MATCH_BNE (RVG_MATCH_B(BNE)) +#define RVG_MATCH_BLT (RVG_MATCH_B(BLT)) +#define RVG_MATCH_BGE (RVG_MATCH_B(BGE)) +#define RVG_MATCH_BLTU (RVG_MATCH_B(BLTU)) +#define RVG_MATCH_BGEU (RVG_MATCH_B(BGEU)) +#define RVG_MATCH_LB (RVG_MATCH_I(LB)) +#define RVG_MATCH_LH (RVG_MATCH_I(LH)) +#define RVG_MATCH_LW (RVG_MATCH_I(LW)) +#define RVG_MATCH_LBU (RVG_MATCH_I(LBU)) +#define RVG_MATCH_LHU (RVG_MATCH_I(LHU)) +#define RVG_MATCH_SB (RVG_MATCH_S(SB)) +#define RVG_MATCH_SH (RVG_MATCH_S(SH)) +#define RVG_MATCH_SW (RVG_MATCH_S(SW)) +#define RVG_MATCH_ADDI (RVG_MATCH_I(ADDI)) +#define RVG_MATCH_SLTI (RVG_MATCH_I(SLTI)) +#define RVG_MATCH_SLTIU (RVG_MATCH_I(SLTIU)) +#define RVG_MATCH_XORI (RVG_MATCH_I(XORI)) +#define RVG_MATCH_ORI (RVG_MATCH_I(ORI)) +#define RVG_MATCH_ANDI (RVG_MATCH_I(ANDI)) +#define RVG_MATCH_SLLI (RVG_SLLI_UPPER | RVG_MATCH_I(SLLI)) +#define RVG_MATCH_SRLI (RVG_SRLI_UPPER | RVG_MATCH_I(SRLI)) +#define RVG_MATCH_SRAI (RVG_SRAI_UPPER | RVG_MATCH_I(SRAI)) +#define RVG_MATCH_ADD (RVG_MATCH_R(ADD)) +#define RVG_MATCH_SUB (RVG_MATCH_R(SUB)) +#define RVG_MATCH_SLL (RVG_MATCH_R(SLL)) +#define RVG_MATCH_SLT (RVG_MATCH_R(SLT)) +#define RVG_MATCH_SLTU (RVG_MATCH_R(SLTU)) +#define RVG_MATCH_XOR (RVG_MATCH_R(XOR)) +#define RVG_MATCH_SRL (RVG_MATCH_R(SRL)) +#define RVG_MATCH_SRA (RVG_MATCH_R(SRA)) +#define RVG_MATCH_OR (RVG_MATCH_R(OR)) +#define RVG_MATCH_AND (RVG_MATCH_R(AND)) +#define RVG_MATCH_NOP (RVG_MATCH_I(NOP)) +#define RVG_MATCH_FENCE (RVG_FUNCT3_FENCE | RVG_OPCODE_FENCE) +#define RVG_MATCH_FENCETSO 0b1000001100110000000000000 +#define RVG_MATCH_PAUSE 0b0000000100000000000000000 +#define RVG_MATCH_ECALL 0b0000000000000000000000000 +#define RVG_MATCH_EBREAK 0b0000000000010000000000000 +/* F Standard Extension */ +#define RVG_MATCH_FLW (RVG_MATCH_I(FLW)) +#define RVG_MATCH_FSW (RVG_MATCH_S(FSW)) +/* D Standard Extension */ +#define RVG_MATCH_FLD (RVG_MATCH_I(FLD)) +#define RVG_MATCH_FSD (RVG_MATCH_S(FSD)) +/* Q Standard Extension */ +#define RVG_MATCH_FLQ (RVG_MATCH_I(FLQ)) +#define RVG_MATCH_FSQ (RVG_MATCH_S(FSQ)) +/* Zicsr Standard Extension */ +#define RVG_MATCH_CSRRW (RVG_MATCH_I(CSRRW)) +#define RVG_MATCH_CSRRS (RVG_MATCH_I(CSRRS)) +#define RVG_MATCH_CSRRC (RVG_MATCH_I(CSRRC)) +#define RVG_MATCH_CSRRWI (RVG_MATCH_I(CSRRWI)) +#define RVG_MATCH_CSRRSI (RVG_MATCH_I(CSRRSI)) +#define RVG_MATCH_CSRRCI (RVG_MATCH_I(CSRRCI)) +/* M Standard Extension */ +#define RVG_MATCH_MUL (RVG_MATCH_R(MUL)) +#define RVG_MATCH_MULH (RVG_MATCH_R(MULH)) +#define RVG_MATCH_MULHSU (RVG_MATCH_R(MULHSU)) +#define RVG_MATCH_MULHU (RVG_MATCH_R(MULHU)) +#define RVG_MATCH_DIV (RVG_MATCH_R(DIV)) +#define RVG_MATCH_DIVU (RVG_MATCH_R(DIVU)) +#define RVG_MATCH_REM (RVG_MATCH_R(REM)) +#define RVG_MATCH_REMU 
(RVG_MATCH_R(REMU)) +/* A Standard Extension */ +#define RVG_MATCH_LR_W (RVG_MATCH_AMO(LR_W)) +#define RVG_MATCH_SC_W (RVG_MATCH_AMO(SC_W)) +#define RVG_MATCH_AMOSWAP_W (RVG_MATCH_AMO(AMOSWAP_W)) +#define RVG_MATCH_AMOADD_W (RVG_MATCH_AMO(AMOADD_W)) +#define RVG_MATCH_AMOXOR_W (RVG_MATCH_AMO(AMOXOR_W)) +#define RVG_MATCH_AMOAND_W (RVG_MATCH_AMO(AMOAND_W)) +#define RVG_MATCH_AMOOR_W (RVG_MATCH_AMO(AMOOR_W)) +#define RVG_MATCH_AMOMIN_W (RVG_MATCH_AMO(AMOMIN_W)) +#define RVG_MATCH_AMOMAX_W (RVG_MATCH_AMO(AMOMAX_W)) +#define RVG_MATCH_AMOMINU_W (RVG_MATCH_AMO(AMOMINU_W)) +#define RVG_MATCH_AMOMAXU_W (RVG_MATCH_AMO(AMOMAXU_W)) + +/* RVG 64-bit only matches */ +#define RVG_MATCH_LWU (RVG_MATCH_I(LWU)) +#define RVG_MATCH_LD (RVG_MATCH_I(LD)) +#define RVG_MATCH_SD (RVG_MATCH_S(SD)) +#define RVG_MATCH_ADDIW (RVG_MATCH_I(ADDIW)) +#define RVG_MATCH_SLLIW (RVG_MATCH_R(SLLIW)) +#define RVG_MATCH_SRLIW (RVG_MATCH_R(SRLIW)) +#define RVG_MATCH_SRAIW (RVG_MATCH_R(SRAIW)) +#define RVG_MATCH_ADDW (RVG_MATCH_R(ADDW)) +#define RVG_MATCH_SUBW (RVG_MATCH_R(SUBW)) +#define RVG_MATCH_SLLW (RVG_MATCH_R(SLLW)) +#define RVG_MATCH_SRLW (RVG_MATCH_R(SRLW)) +#define RVG_MATCH_SRAW (RVG_MATCH_R(SRAW)) +/* M Standard Extension */ +#define RVG_MATCH_MULW (RVG_MATCH_R(MULW)) +#define RVG_MATCH_DIVW (RVG_MATCH_R(DIVW)) +#define RVG_MATCH_DIVUW (RVG_MATCH_R(DIVUW)) +#define RVG_MATCH_REMW (RVG_MATCH_R(REMW)) +#define RVG_MATCH_REMUW (RVG_MATCH_R(REMUW)) +/* A Standard Extension */ +#define RVG_MATCH_LR_D (RVG_MATCH_AMO(LR_W)) +#define RVG_MATCH_SC_D (RVG_MATCH_AMO(SC_W)) +#define RVG_MATCH_AMOSWAP_D (RVG_MATCH_AMO(AMOSWAP_W)) +#define RVG_MATCH_AMOADD_D (RVG_MATCH_AMO(AMOADD_W)) +#define RVG_MATCH_AMOXOR_D (RVG_MATCH_AMO(AMOXOR_W)) +#define RVG_MATCH_AMOAND_D (RVG_MATCH_AMO(AMOAND_W)) +#define RVG_MATCH_AMOOR_D (RVG_MATCH_AMO(AMOOR_W)) +#define RVG_MATCH_AMOMIN_D (RVG_MATCH_AMO(AMOMIN_W)) +#define RVG_MATCH_AMOMAX_D (RVG_MATCH_AMO(AMOMAX_W)) +#define RVG_MATCH_AMOMINU_D (RVG_MATCH_AMO(AMOMINU_W)) +#define RVG_MATCH_AMOMAXU_D (RVG_MATCH_AMO(AMOMAXU_W)) + +/* Privileged instruction match */ +#define RV_MATCH_SRET 0b00010000001000000000000001110011 +#define RV_MATCH_WFI 0b00010000010100000000000001110011 + +/* Bit masks for each type of RVG instruction */ +#define RVG_MASK_R \ + ((RV_INSN_FUNCT7_MASK << RV_INSN_FUNCT7_OPOFF) | \ + (RV_INSN_FUNCT3_MASK << RV_INSN_FUNCT3_OPOFF) | RV_INSN_OPCODE_MASK) +#define RVG_MASK_I \ + ((RV_INSN_FUNCT3_MASK << RV_INSN_FUNCT3_OPOFF) | RV_INSN_OPCODE_MASK) +#define RVG_MASK_S \ + ((RV_INSN_FUNCT3_MASK << RV_INSN_FUNCT3_OPOFF) | RV_INSN_OPCODE_MASK) +#define RVG_MASK_B \ + ((RV_INSN_FUNCT3_MASK << RV_INSN_FUNCT3_OPOFF) | RV_INSN_OPCODE_MASK) +#define RVG_MASK_U (RV_INSN_OPCODE_MASK) +#define RVG_MASK_J (RV_INSN_OPCODE_MASK) +#define RVG_MASK_AMO \ + ((RV_INSN_FUNCT5_MASK << RV_INSN_FUNCT5_OPOFF) | \ + (RV_INSN_FUNCT3_MASK << RV_INSN_FUNCT3_OPOFF) | RV_INSN_OPCODE_MASK) + +#if __riscv_xlen == 32 +#define RVG_MASK_SHIFT (GENMASK(6, 0) << 25) +#elif __riscv_xlen == 64 +#define RVG_MASK_SHIFT (GENMASK(5, 0) << 26) +#endif /* __riscv_xlen */ + +/* RVG instruction masks */ +#define RVG_MASK_LUI (RVG_MASK_U) +#define RVG_MASK_AUIPC (RVG_MASK_U) +#define RVG_MASK_JAL (RVG_MASK_J) +#define RVG_MASK_JALR (RVG_MASK_I) +#define RVG_MASK_BEQ (RVG_MASK_B) +#define RVG_MASK_BNE (RVG_MASK_B) +#define RVG_MASK_BLT (RVG_MASK_B) +#define RVG_MASK_BGE (RVG_MASK_B) +#define RVG_MASK_BLTU (RVG_MASK_B) +#define RVG_MASK_BGEU (RVG_MASK_B) +#define RVG_MASK_LB (RVG_MASK_I) +#define RVG_MASK_LH (RVG_MASK_I) +#define 
RVG_MASK_LW (RVG_MASK_I) +#define RVG_MASK_LBU (RVG_MASK_I) +#define RVG_MASK_LHU (RVG_MASK_I) +#define RVG_MASK_SB (RVG_MASK_S) +#define RVG_MASK_SH (RVG_MASK_S) +#define RVG_MASK_SW (RVG_MASK_S) +#define RVG_MASK_ADDI (RVG_MASK_I) +#define RVG_MASK_SLTI (RVG_MASK_I) +#define RVG_MASK_SLTIU (RVG_MASK_I) +#define RVG_MASK_XORI (RVG_MASK_I) +#define RVG_MASK_ORI (RVG_MASK_I) +#define RVG_MASK_ANDI (RVG_MASK_I) +#define RVG_MASK_SLLI (RVG_MASK_SHIFT | RVG_MASK_I) +#define RVG_MASK_SRLI (RVG_MASK_SHIFT | RVG_MASK_I) +#define RVG_MASK_SRAI (RVG_MASK_SHIFT | RVG_MASK_I) +#define RVG_MASK_ADD (RVG_MASK_R) +#define RVG_MASK_SUB (RVG_MASK_R) +#define RVG_MASK_SLL (RVG_MASK_R) +#define RVG_MASK_SLT (RVG_MASK_R) +#define RVG_MASK_SLTU (RVG_MASK_R) +#define RVG_MASK_XOR (RVG_MASK_R) +#define RVG_MASK_SRL (RVG_MASK_R) +#define RVG_MASK_SRA (RVG_MASK_R) +#define RVG_MASK_OR (RVG_MASK_R) +#define RVG_MASK_AND (RVG_MASK_R) +#define RVG_MASK_NOP (RVG_MASK_I) +#define RVG_MASK_FENCE (RVG_MASK_I) +#define RVG_MASK_FENCETSO 0xffffffff +#define RVG_MASK_PAUSE 0xffffffff +#define RVG_MASK_ECALL 0xffffffff +#define RVG_MASK_EBREAK 0xffffffff +/* F Standard Extension */ +#define RVG_MASK_FLW (RVG_MASK_I) +#define RVG_MASK_FSW (RVG_MASK_S) +/* D Standard Extension */ +#define RVG_MASK_FLD (RVG_MASK_I) +#define RVG_MASK_FSD (RVG_MASK_S) +/* Q Standard Extension */ +#define RVG_MASK_FLQ (RVG_MASK_I) +#define RVG_MASK_FSQ (RVG_MASK_S) +/* Zicsr Standard Extension */ +#define RVG_MASK_CSRRW (RVG_MASK_I) +#define RVG_MASK_CSRRS (RVG_MASK_I) +#define RVG_MASK_CSRRC (RVG_MASK_I) +#define RVG_MASK_CSRRWI (RVG_MASK_I) +#define RVG_MASK_CSRRSI (RVG_MASK_I) +#define RVG_MASK_CSRRCI (RVG_MASK_I) +/* M Standard Extension */ +#define RVG_MASK_MUL (RVG_MASK_R) +#define RVG_MASK_MULH (RVG_MASK_R) +#define RVG_MASK_MULHSU (RVG_MASK_R) +#define RVG_MASK_MULHU (RVG_MASK_R) +#define RVG_MASK_DIV (RVG_MASK_R) +#define RVG_MASK_DIVU (RVG_MASK_R) +#define RVG_MASK_REM (RVG_MASK_R) +#define RVG_MASK_REMU (RVG_MASK_R) +/* A Standard Extension */ +#define RVG_MASK_LR_W (RVG_MASK_AMO) +#define RVG_MASK_SC_W (RVG_MASK_AMO) +#define RVG_MASK_AMOSWAP_W (RVG_MASK_AMO) +#define RVG_MASK_AMOADD_W (RVG_MASK_AMO) +#define RVG_MASK_AMOXOR_W (RVG_MASK_AMO) +#define RVG_MASK_AMOAND_W (RVG_MASK_AMO) +#define RVG_MASK_AMOOR_W (RVG_MASK_AMO) +#define RVG_MASK_AMOMIN_W (RVG_MASK_AMO) +#define RVG_MASK_AMOMAX_W (RVG_MASK_AMO) +#define RVG_MASK_AMOMINU_W (RVG_MASK_AMO) +#define RVG_MASK_AMOMAXU_W (RVG_MASK_AMO) + +/* RVG 64-bit only masks */ +#define RVG_MASK_LWU (RVG_MASK_I) +#define RVG_MASK_LD (RVG_MASK_I) +#define RVG_MASK_SD (RVG_MASK_S) +#define RVG_MASK_ADDIW (RVG_MASK_I) +#define RVG_MASK_SLLIW (RVG_MASK_R) +#define RVG_MASK_SRLIW (RVG_MASK_R) +#define RVG_MASK_SRAIW (RVG_MASK_R) +#define RVG_MASK_ADDW (RVG_MASK_R) +#define RVG_MASK_SUBW (RVG_MASK_R) +#define RVG_MASK_SLLW (RVG_MASK_R) +#define RVG_MASK_SRLW (RVG_MASK_R) +#define RVG_MASK_SRAW (RVG_MASK_R) +/* M Standard Extension */ +#define RVG_MASK_MULW (RVG_MASK_R) +#define RVG_MASK_DIVW (RVG_MASK_R) +#define RVG_MASK_DIVUW (RVG_MASK_R) +#define RVG_MASK_REMW (RVG_MASK_R) +#define RVG_MASK_REMUW (RVG_MASK_R) +/* A Standard Extension */ +#define RVG_MASK_LR_D (RVG_MASK_AMO) +#define RVG_MASK_SC_D (RVG_MASK_AMO) +#define RVG_MASK_AMOSWAP_D (RVG_MASK_AMO) +#define RVG_MASK_AMOADD_D (RVG_MASK_AMO) +#define RVG_MASK_AMOXOR_D (RVG_MASK_AMO) +#define RVG_MASK_AMOAND_D (RVG_MASK_AMO) +#define RVG_MASK_AMOOR_D (RVG_MASK_AMO) +#define RVG_MASK_AMOMIN_D (RVG_MASK_AMO) +#define RVG_MASK_AMOMAX_D 
(RVG_MASK_AMO) +#define RVG_MASK_AMOMINU_D (RVG_MASK_AMO) +#define RVG_MASK_AMOMAXU_D (RVG_MASK_AMO) + +/* Privileged instruction masks */ +#define RV_MASK_SRET 0xffffffff +#define RV_MASK_WFI 0xffffffff + +/* RVC opcodes */ #define RVC_OPCODE_C0 0x0 #define RVC_OPCODE_C1 0x1 #define RVC_OPCODE_C2 0x2 -/* parts of funct3 code for I, M, A extension*/ -#define RVG_FUNCT3_JALR 0x0 -#define RVG_FUNCT3_BEQ 0x0 -#define RVG_FUNCT3_BNE 0x1 -#define RVG_FUNCT3_BLT 0x4 -#define RVG_FUNCT3_BGE 0x5 -#define RVG_FUNCT3_BLTU 0x6 -#define RVG_FUNCT3_BGEU 0x7 - -/* parts of funct3 code for C extension*/ -#define RVC_FUNCT3_C_BEQZ 0x6 -#define RVC_FUNCT3_C_BNEZ 0x7 -#define RVC_FUNCT3_C_J 0x5 -#define RVC_FUNCT3_C_JAL 0x1 -#define RVC_FUNCT4_C_JR 0x8 -#define RVC_FUNCT4_C_JALR 0x9 -#define RVC_FUNCT4_C_EBREAK 0x9 - -#define RVG_FUNCT12_EBREAK 0x1 -#define RVG_FUNCT12_SRET 0x102 - -#define RVG_MATCH_AUIPC (RVG_OPCODE_AUIPC) -#define RVG_MATCH_JALR (RV_ENCODE_FUNCT3(JALR) | RVG_OPCODE_JALR) -#define RVG_MATCH_JAL (RVG_OPCODE_JAL) -#define RVG_MATCH_FENCE (RVG_OPCODE_FENCE) -#define RVG_MATCH_BEQ (RV_ENCODE_FUNCT3(BEQ) | RVG_OPCODE_BRANCH) -#define RVG_MATCH_BNE (RV_ENCODE_FUNCT3(BNE) | RVG_OPCODE_BRANCH) -#define RVG_MATCH_BLT (RV_ENCODE_FUNCT3(BLT) | RVG_OPCODE_BRANCH) -#define RVG_MATCH_BGE (RV_ENCODE_FUNCT3(BGE) | RVG_OPCODE_BRANCH) -#define RVG_MATCH_BLTU (RV_ENCODE_FUNCT3(BLTU) | RVG_OPCODE_BRANCH) -#define RVG_MATCH_BGEU (RV_ENCODE_FUNCT3(BGEU) | RVG_OPCODE_BRANCH) -#define RVG_MATCH_EBREAK (RV_ENCODE_FUNCT12(EBREAK) | RVG_OPCODE_SYSTEM) -#define RVG_MATCH_SRET (RV_ENCODE_FUNCT12(SRET) | RVG_OPCODE_SYSTEM) -#define RVC_MATCH_C_BEQZ (RVC_ENCODE_FUNCT3(C_BEQZ) | RVC_OPCODE_C1) -#define RVC_MATCH_C_BNEZ (RVC_ENCODE_FUNCT3(C_BNEZ) | RVC_OPCODE_C1) -#define RVC_MATCH_C_J (RVC_ENCODE_FUNCT3(C_J) | RVC_OPCODE_C1) -#define RVC_MATCH_C_JAL (RVC_ENCODE_FUNCT3(C_JAL) | RVC_OPCODE_C1) -#define RVC_MATCH_C_JR (RVC_ENCODE_FUNCT4(C_JR) | RVC_OPCODE_C2) -#define RVC_MATCH_C_JALR (RVC_ENCODE_FUNCT4(C_JALR) | RVC_OPCODE_C2) -#define RVC_MATCH_C_EBREAK (RVC_ENCODE_FUNCT4(C_EBREAK) | RVC_OPCODE_C2) - -#define RVG_MASK_AUIPC (RV_INSN_OPCODE_MASK) -#define RVG_MASK_JALR (RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK) -#define RVG_MASK_JAL (RV_INSN_OPCODE_MASK) -#define RVG_MASK_FENCE (RV_INSN_OPCODE_MASK) -#define RVC_MASK_C_JALR (RVC_INSN_FUNCT4_MASK | RVC_INSN_J_RS2_MASK | RVC_INSN_OPCODE_MASK) -#define RVC_MASK_C_JR (RVC_INSN_FUNCT4_MASK | RVC_INSN_J_RS2_MASK | RVC_INSN_OPCODE_MASK) -#define RVC_MASK_C_JAL (RVC_INSN_FUNCT3_MASK | RVC_INSN_OPCODE_MASK) -#define RVC_MASK_C_J (RVC_INSN_FUNCT3_MASK | RVC_INSN_OPCODE_MASK) -#define RVG_MASK_BEQ (RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK) -#define RVG_MASK_BNE (RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK) -#define RVG_MASK_BLT (RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK) -#define RVG_MASK_BGE (RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK) -#define RVG_MASK_BLTU (RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK) -#define RVG_MASK_BGEU (RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK) -#define RVC_MASK_C_BEQZ (RVC_INSN_FUNCT3_MASK | RVC_INSN_OPCODE_MASK) -#define RVC_MASK_C_BNEZ (RVC_INSN_FUNCT3_MASK | RVC_INSN_OPCODE_MASK) -#define RVC_MASK_C_EBREAK 0xffff -#define RVG_MASK_EBREAK 0xffffffff -#define RVG_MASK_SRET 0xffffffff +/* RVC Segments */ +#define RVC_6_2 (GENMASK(4, 0) << 2) +#define RVC_11_7 (GENMASK(4, 0) << 7) +#define RVC_TWO_11_7 (BIT(8)) -#define __INSN_LENGTH_MASK _UL(0x3) -#define __INSN_LENGTH_GE_32 _UL(0x3) -#define __INSN_OPCODE_MASK _UL(0x7F) -#define __INSN_BRANCH_OPCODE 
_UL(RVG_OPCODE_BRANCH) +/* RVC Quadrant 1 FUNCT2 */ +#define RVC_FUNCT2_C_SRLI 0b00 +#define RVC_FUNCT2_C_SRAI 0b01 +#define RVC_FUNCT2_C_ANDI 0b10 +#define RVC_FUNCT2_C_SUB 0b00 +#define RVC_FUNCT2_C_XOR 0b01 +#define RVC_FUNCT2_C_OR 0b10 +#define RVC_FUNCT2_C_AND 0b11 +#define RVC_FUNCT2_C_SUBW 0b00 +#define RVC_FUNCT2_C_ADDW 0b01 + +/* RVC Quadrant 0 FUNCT3 */ +#define RVC_FUNCT3_C_ADDI4SPN 0b000 +#define RVC_FUNCT3_C_FLD 0b001 +#define RVC_FUNCT3_C_LW 0b010 +#define RVC_FUNCT3_C_FLW 0b011 +#define RVC_FUNCT3_C_LD 0b011 +#define RVC_FUNCT3_C_FSD 0b101 +#define RVC_FUNCT3_C_SW 0b110 +#define RVC_FUNCT3_C_FSW 0b111 +#define RVC_FUNCT3_C_SD 0b111 +/* RVC Quadrant 1 FUNCT3 */ +#define RVC_FUNCT3_C_NOP 0b000 +#define RVC_FUNCT3_C_ADDI 0b000 +#define RVC_FUNCT3_C_JAL 0b001 +#define RVC_FUNCT3_C_ADDIW 0b001 +#define RVC_FUNCT3_C_LI 0b010 +#define RVC_FUNCT3_C_ADDI16SP 0b011 +#define RVC_FUNCT3_C_LUI 0b011 +#define RVC_FUNCT3_C_SRLI 0b100 +#define RVC_FUNCT3_C_SRAI 0b100 +#define RVC_FUNCT3_C_ANDI 0b100 +#define RVC_FUNCT3_C_J 0b101 +#define RVC_FUNCT3_C_BEQZ 0b110 +#define RVC_FUNCT3_C_BNEZ 0b111 +/* RVC Quadrant 2 FUNCT3 */ +#define RVC_FUNCT3_C_SLLI 0b000 +#define RVC_FUNCT3_C_FLDSP 0b001 +#define RVC_FUNCT3_C_LWSP 0b010 +#define RVC_FUNCT3_C_FLWSP 0b011 +#define RVC_FUNCT3_C_LDSP 0b011 +#define RVC_FUNCT3_C_FSDSP 0b101 +#define RVC_FUNCT3_C_SWSP 0b110 +#define RVC_FUNCT3_C_FSWSP 0b111 +#define RVC_FUNCT3_C_SDSP 0b111 + +/* RVC Quadrant 2 FUNCT4 */ +#define RVC_FUNCT4_C_JR 0b1000 +#define RVC_FUNCT4_C_MV 0b1000 +#define RVC_FUNCT4_C_EBREAK 0b1001 +#define RVC_FUNCT4_C_JALR 0b1001 +#define RVC_FUNCT4_C_ADD 0b1001 + +/* RVC Quadrant 1 FUNCT6 */ +#define RVC_FUNCT6_C_SUB 0b100011 +#define RVC_FUNCT6_C_XOR 0b100011 +#define RVC_FUNCT6_C_OR 0b100011 +#define RVC_FUNCT6_C_AND 0b100011 +#define RVC_FUNCT6_C_SUBW 0b100111 +#define RVC_FUNCT6_C_ADDW 0b100111 + +/* RVC instruction match types */ +#define RVC_MATCH_CR(f_) (RVC_FUNCT4_C_##f_ << RVC_INSN_FUNCT4_OPOFF) +#define RVC_MATCH_CI(f_) (RVC_FUNCT3_C_##f_ << RVC_INSN_FUNCT3_OPOFF) +#define RVC_MATCH_CSS(f_) (RVC_FUNCT3_C_##f_ << RVC_INSN_FUNCT3_OPOFF) +#define RVC_MATCH_CIW(f_) (RVC_FUNCT3_C_##f_ << RVC_INSN_FUNCT3_OPOFF) +#define RVC_MATCH_CL(f_) (RVC_FUNCT3_C_##f_ << RVC_INSN_FUNCT3_OPOFF) +#define RVC_MATCH_CS(f_) (RVC_FUNCT3_C_##f_ << RVC_INSN_FUNCT3_OPOFF) +#define RVC_MATCH_CA(f_) (RVC_FUNCT6_C_##f_ << RVC_INSN_FUNCT6_OPOFF | \ + RVC_FUNCT2_C_##f_ << RVC_INSN_FUNCT2_CA_OPOFF) +#define RVC_MATCH_CB(f_) (RVC_FUNCT3_C_##f_ << RVC_INSN_FUNCT3_OPOFF) +#define RVC_MATCH_CJ(f_) (RVC_FUNCT3_C_##f_ << RVC_INSN_FUNCT3_OPOFF) + +/* RVC Quadrant 0 matches */ +#define RVC_MATCH_C_ADDI4SPN (RVC_MATCH_CIW(ADDI4SPN) | RVC_OPCODE_C0) +#define RVC_MATCH_C_FLD (RVC_MATCH_CL(FLD) | RVC_OPCODE_C0) +#define RVC_MATCH_C_LW (RVC_MATCH_CL(LW) | RVC_OPCODE_C0) +#define RVC_MATCH_C_FLW (RVC_MATCH_CL(FLW) | RVC_OPCODE_C0) +#define RVC_MATCH_C_LD (RVC_MATCH_CL(LD) | RVC_OPCODE_C0) +#define RVC_MATCH_C_FSD (RVC_MATCH_CS(FSD) | RVC_OPCODE_C0) +#define RVC_MATCH_C_SW (RVC_MATCH_CS(SW) | RVC_OPCODE_C0) +#define RVC_MATCH_C_FSW (RVC_MATCH_CS(FSW) | RVC_OPCODE_C0) +#define RVC_MATCH_C_SD (RVC_MATCH_CS(SD) | RVC_OPCODE_C0) +/* RVC Quadrant 1 matches */ +#define RVC_MATCH_C_NOP (RVC_MATCH_CI(NOP) | RVC_OPCODE_C1) +#define RVC_MATCH_C_ADDI (RVC_MATCH_CI(ADDI) | RVC_OPCODE_C1) +#define RVC_MATCH_C_JAL (RVC_MATCH_CJ(JAL) | RVC_OPCODE_C1) +#define RVC_MATCH_C_ADDIW (RVC_MATCH_CI(ADDIW) | RVC_OPCODE_C1) +#define RVC_MATCH_C_LI (RVC_MATCH_CI(LI) | RVC_OPCODE_C1) +#define 
RVC_MATCH_C_ADDI16SP \ + (RVC_MATCH_CI(ADDI16SP) | RVC_TWO_11_7 | RVC_OPCODE_C1) +#define RVC_MATCH_C_LUI (RVC_MATCH_CI(LUI) | RVC_OPCODE_C1) +#define RVC_MATCH_C_SRLI \ + (RVC_MATCH_CB(SRLI) | RVC_FUNCT2_C_SRLI << RVC_INSN_FUNCT2_CB_OPOFF | \ + RVC_OPCODE_C1) +#define RVC_MATCH_C_SRAI \ + (RVC_MATCH_CB(SRAI) | RVC_FUNCT2_C_SRAI << RVC_INSN_FUNCT2_CB_OPOFF | \ + RVC_OPCODE_C1) +#define RVC_MATCH_C_ANDI \ + (RVC_MATCH_CB(ANDI) | RVC_FUNCT2_C_ANDI << RVC_INSN_FUNCT2_CB_OPOFF | \ + RVC_OPCODE_C1) +#define RVC_MATCH_C_SUB (RVC_MATCH_CA(SUB) | RVC_OPCODE_C1) +#define RVC_MATCH_C_XOR (RVC_MATCH_CA(XOR) | RVC_OPCODE_C1) +#define RVC_MATCH_C_OR (RVC_MATCH_CA(OR) | RVC_OPCODE_C1) +#define RVC_MATCH_C_AND (RVC_MATCH_CA(AND) | RVC_OPCODE_C1) +#define RVC_MATCH_C_SUBW (RVC_MATCH_CA(SUBW) | RVC_OPCODE_C1) +#define RVC_MATCH_C_ADDW (RVC_MATCH_CA(ADDW) | RVC_OPCODE_C1) +#define RVC_MATCH_C_J (RVC_MATCH_CJ(J) | RVC_OPCODE_C1) +#define RVC_MATCH_C_BEQZ (RVC_MATCH_CB(BEQZ) | RVC_OPCODE_C1) +#define RVC_MATCH_C_BNEZ (RVC_MATCH_CB(BNEZ) | RVC_OPCODE_C1) +/* RVC Quadrant 2 matches */ +#define RVC_MATCH_C_SLLI (RVC_MATCH_CI(SLLI) | RVC_OPCODE_C2) +#define RVC_MATCH_C_FLDSP (RVC_MATCH_CI(FLDSP) | RVC_OPCODE_C2) +#define RVC_MATCH_C_LWSP (RVC_MATCH_CI(LWSP) | RVC_OPCODE_C2) +#define RVC_MATCH_C_FLWSP (RVC_MATCH_CI(FLWSP) | RVC_OPCODE_C2) +#define RVC_MATCH_C_LDSP (RVC_MATCH_CI(LDSP) | RVC_OPCODE_C2) +#define RVC_MATCH_C_JR (RVC_MATCH_CR(JR) | RVC_OPCODE_C2) +#define RVC_MATCH_C_MV (RVC_MATCH_CR(MV) | RVC_OPCODE_C2) +#define RVC_MATCH_C_EBREAK (RVC_MATCH_CR(EBREAK) | RVC_OPCODE_C2) +#define RVC_MATCH_C_JALR (RVC_MATCH_CR(JALR) | RVC_OPCODE_C2) +#define RVC_MATCH_C_ADD (RVC_MATCH_CR(ADD) | RVC_OPCODE_C2) +#define RVC_MATCH_C_FSDSP (RVC_MATCH_CSS(FSDSP) | RVC_OPCODE_C2) +#define RVC_MATCH_C_SWSP (RVC_MATCH_CSS(SWSP) | RVC_OPCODE_C2) +#define RVC_MATCH_C_FSWSP (RVC_MATCH_CSS(FSWSP) | RVC_OPCODE_C2) +#define RVC_MATCH_C_SDSP (RVC_MATCH_CSS(SDSP) | RVC_OPCODE_C2) + +/* Bit masks for each type of RVC instruction */ +#define RVC_MASK_CR \ + (RVC_INSN_FUNCT4_MASK << RVC_INSN_FUNCT4_OPOFF | RVC_INSN_OPCODE_MASK) +#define RVC_MASK_CI \ + (RVC_INSN_FUNCT3_MASK << RVC_INSN_FUNCT3_OPOFF | RVC_INSN_OPCODE_MASK) +#define RVC_MASK_CSS \ + (RVC_INSN_FUNCT3_MASK << RVC_INSN_FUNCT3_OPOFF | RVC_INSN_OPCODE_MASK) +#define RVC_MASK_CIW \ + (RVC_INSN_FUNCT3_MASK << RVC_INSN_FUNCT3_OPOFF | RVC_INSN_OPCODE_MASK) +#define RVC_MASK_CL \ + (RVC_INSN_FUNCT3_MASK << RVC_INSN_FUNCT3_OPOFF | RVC_INSN_OPCODE_MASK) +#define RVC_MASK_CS \ + (RVC_INSN_FUNCT3_MASK << RVC_INSN_FUNCT3_OPOFF | RVC_INSN_OPCODE_MASK) +#define RVC_MASK_CA \ + (RVC_INSN_FUNCT6_MASK << RVC_INSN_FUNCT6_OPOFF | \ + RVC_INSN_FUNCT2_MASK << RVC_INSN_FUNCT2_CA_OPOFF | \ + RVC_INSN_OPCODE_MASK) +#define RVC_MASK_CB \ + (RVC_INSN_FUNCT3_MASK << RVC_INSN_FUNCT3_OPOFF | RVC_INSN_OPCODE_MASK) +#define RVC_MASK_CJ \ + (RVC_INSN_FUNCT3_MASK << RVC_INSN_FUNCT3_OPOFF | RVC_INSN_OPCODE_MASK) + +/* RVC Quadrant 0 masks */ +#define RVC_MASK_C_ADDI4SPN (RVC_MASK_CIW) +#define RVC_MASK_C_FLD (RVC_MASK_CL) +#define RVC_MASK_C_LW (RVC_MASK_CL) +#define RVC_MASK_C_FLW (RVC_MASK_CL) +#define RVC_MASK_C_LD (RVC_MASK_CL) +#define RVC_MASK_C_FSD (RVC_MASK_CS) +#define RVC_MASK_C_SW (RVC_MASK_CS) +#define RVC_MASK_C_FSW (RVC_MASK_CS) +#define RVC_MASK_C_SD (RVC_MASK_CS) +/* RVC Quadrant 1 masks */ +#define RVC_MASK_C_NOP (RVC_MASK_CI) +#define RVC_MASK_C_ADDI (RVC_MASK_CI) +#define RVC_MASK_C_JAL (RVC_MASK_CJ) +#define RVC_MASK_C_ADDIW (RVC_MASK_CI) +#define RVC_MASK_C_LI (RVC_MASK_CI) +#define 
RVC_MASK_C_ADDI16SP (RVC_MASK_CI | RVC_TWO_11_7) +#define RVC_MASK_C_LUI (RVC_MASK_CI) +#define RVC_MASK_C_SRLI \ + (RVC_MASK_CB | RVC_INSN_FUNCT2_MASK << RVC_INSN_FUNCT2_CB_OPOFF) +#define RVC_MASK_C_SRAI \ + (RVC_MASK_CB | RVC_INSN_FUNCT2_MASK << RVC_INSN_FUNCT2_CB_OPOFF) +#define RVC_MASK_C_ANDI \ + (RVC_MASK_CB | RVC_INSN_FUNCT2_MASK << RVC_INSN_FUNCT2_CB_OPOFF) +#define RVC_MASK_C_SUB (RVC_MASK_CA) +#define RVC_MASK_C_XOR (RVC_MASK_CA) +#define RVC_MASK_C_OR (RVC_MASK_CA) +#define RVC_MASK_C_AND (RVC_MASK_CA) +#define RVC_MASK_C_SUBW (RVC_MASK_CA) +#define RVC_MASK_C_ADDW (RVC_MASK_CA) +#define RVC_MASK_C_J (RVC_MASK_CJ) +#define RVC_MASK_C_BEQZ (RVC_MASK_CB) +#define RVC_MASK_C_BNEZ (RVC_MASK_CB) +/* RVC Quadrant 2 masks */ +#define RVC_MASK_C_SLLI (RVC_MASK_CI) +#define RVC_MASK_C_FLDSP (RVC_MASK_CI) +#define RVC_MASK_C_LWSP (RVC_MASK_CI) +#define RVC_MASK_C_FLWSP (RVC_MASK_CI) +#define RVC_MASK_C_LDSP (RVC_MASK_CI) +#define RVC_MASK_C_JR (RVC_MASK_CR | RVC_6_2) +#define RVC_MASK_C_MV (RVC_MASK_CR) +#define RVC_MASK_C_EBREAK (RVC_MASK_CR | RVC_11_7 | RVC_6_2) +#define RVC_MASK_C_JALR (RVC_MASK_CR | RVC_6_2) +#define RVC_MASK_C_ADD (RVC_MASK_CR) +#define RVC_MASK_C_FSDSP (RVC_MASK_CSS) +#define RVC_MASK_C_SWSP (RVC_MASK_CSS) +#define RVC_MASK_C_FSWSP (RVC_MASK_CSS) +#define RVC_MASK_C_SDSP (RVC_MASK_CSS) + +#define INSN_C_MASK 0x3 +#define INSN_IS_C(insn) (((insn) & INSN_C_MASK) != INSN_C_MASK) +#define INSN_LEN(insn) (INSN_IS_C(insn) ? 2 : 4) #define __RISCV_INSN_FUNCS(name, mask, val) \ static __always_inline bool riscv_insn_is_##name(u32 code) \ { \ BUILD_BUG_ON(~(mask) & (val)); \ return (code & (mask)) == (val); \ +} + +/* R-Type Instructions */ +#define __RISCV_RTYPE_FUNCS(name, upper_name) \ +static __always_inline bool rv_##name(u8 rd, u8 rs1, u8 rs2) \ +{ \ + return rv_r_insn(RVG_FUNCT7_##upper_name, rs2, rs1, \ + RVG_FUNCT3_##upper_name, rd, \ + RVG_OPCODE_##upper_name); \ +} + +/* I-Type Instructions */ +#define __RISCV_ITYPE_FUNCS(name, upper_name) \ +static __always_inline bool rv_##name(u8 rd, u8 rs1, u16 imm11_0) \ +{ \ + return rv_i_insn(imm11_0, rs1, RVG_FUNCT3_##upper_name, \ + rd, RVG_OPCODE_##upper_name); \ +} + +/* S-Type Instructions */ +#define __RISCV_STYPE_FUNCS(name, upper_name) \ +static __always_inline bool rv_##name(u8 rs1, u16 imm11_0, u8 rs2) \ +{ \ + return rv_s_insn(imm11_0, rs2, rs1, RVG_FUNCT3_##upper_name, \ + RVG_OPCODE_##upper_name); \ +} + +/* B-Type Instructions */ +#define __RISCV_BTYPE_FUNCS(name, upper_name) \ +static __always_inline bool rv_##name(u8 rs1, u8 rs2, u16 imm12_1) \ +{ \ + return rv_b_insn(imm12_1, rs2, rs1, RVG_FUNCT3_##upper_name, \ + RVG_OPCODE_##upper_name); \ +} + +/* Reversed B-Type Instructions */ +#define __RISCV_REV_BTYPE_FUNCS(name, upper_name) \ +static __always_inline bool rv_##name(u8 rs1, u8 rs2, u16 imm12_1) \ +{ \ + return rv_b_insn(imm12_1, rs1, rs2, RVG_FUNCT3_##upper_name, \ + RVG_OPCODE_##upper_name); \ +} + +/* U-Type Instructions */ +#define __RISCV_UTYPE_FUNCS(name, upper_name) \ +static __always_inline bool rv_##name(u8 rd, u32 imm31_12) \ +{ \ + return rv_u_insn(imm31_12, rd, RVG_OPCODE_##upper_name); \ +} + +/* J-Type Instructions */ +#define __RISCV_JTYPE_FUNCS(name, upper_name) \ +static __always_inline bool rv_##name(u8 rd, u32 imm20_1) \ +{ \ + return rv_j_insn(imm20_1, rd, RVG_OPCODE_##upper_name); \ +} + +/* AMO-Type Instructions */ +#define __RISCV_AMOTYPE_FUNCS(name, upper_name) \ +static __always_inline bool rv_##name(u8 rd, u8 rs2, u8 rs1, u8 aq, \ + u8 rl) \ +{ \ + return 
rv_amo_insn(RVG_FUNCT5_##upper_name, aq, rl, rs2, rs1, \ + RVG_FUNCT3_##upper_name, rd, RVG_OPCODE_##upper_name); \ +} + +/* FENCE Instruction */ +#define __RISCV_NOPTYPE_FUNCS(name, upper_name) \ +static __always_inline bool rv_nop(void) \ +{ \ + return RVG_MATCH_NOP; \ +} + +/* FENCE Instruction */ +#define __RISCV_FENCETYPE_FUNCS(name, upper_name) \ +static __always_inline bool rv_fence(u8 pred, u8 succ) \ +{ \ + u16 imm11_0 = pred << 4 | succ; \ + return rv_i_insn(imm11_0, 0, 0, 0, RVG_OPCODE_FENCE); \ +} + +/* FENCETSO Instruction */ +#define __RISCV_FENCETSOTYPE_FUNCS(name, upper_name) \ +static __always_inline bool rv_fencetso(void) \ +{ \ + return RVG_MATCH_FENCETSO; \ +} + +/* PAUSE Instruction */ +#define __RISCV_PAUSETYPE_FUNCS(name, upper_name) \ +static __always_inline bool rv_pause(void) \ +{ \ + return RVG_MATCH_PAUSE; \ +} + +/* ECALL Instruction */ +#define __RISCV_ECALLTYPE_FUNCS(name, upper_name) \ +static __always_inline bool rv_ecall(void) \ +{ \ + return RVG_MATCH_ECALL; \ +} + +/* EBREAK Instruction */ +#define __RISCV_EBREAKTYPE_FUNCS(name, upper_name) \ +static __always_inline bool rv_ebreak(void) \ +{ \ + return RVG_MATCH_EBREAK; \ +} + +#define __RVG_INSN_FUNCS(name, upper_name, type) \ +static __always_inline bool riscv_insn_is_##name(u32 code) \ +{ \ + BUILD_BUG_ON(~(RVG_MASK_##upper_name) & (RVG_MATCH_##upper_name)); \ + return (code & (RVG_MASK_##upper_name)) == (RVG_MATCH_##upper_name); \ } \ +__RISCV_##type##TYPE_FUNCS(name, upper_name) -#if __riscv_xlen == 32 -/* C.JAL is an RV32C-only instruction */ -__RISCV_INSN_FUNCS(c_jal, RVC_MASK_C_JAL, RVC_MATCH_C_JAL) -#else -#define riscv_insn_is_c_jal(opcode) 0 -#endif -__RISCV_INSN_FUNCS(auipc, RVG_MASK_AUIPC, RVG_MATCH_AUIPC) -__RISCV_INSN_FUNCS(jalr, RVG_MASK_JALR, RVG_MATCH_JALR) -__RISCV_INSN_FUNCS(jal, RVG_MASK_JAL, RVG_MATCH_JAL) -__RISCV_INSN_FUNCS(c_jr, RVC_MASK_C_JR, RVC_MATCH_C_JR) -__RISCV_INSN_FUNCS(c_jalr, RVC_MASK_C_JALR, RVC_MATCH_C_JALR) -__RISCV_INSN_FUNCS(c_j, RVC_MASK_C_J, RVC_MATCH_C_J) -__RISCV_INSN_FUNCS(beq, RVG_MASK_BEQ, RVG_MATCH_BEQ) -__RISCV_INSN_FUNCS(bne, RVG_MASK_BNE, RVG_MATCH_BNE) -__RISCV_INSN_FUNCS(blt, RVG_MASK_BLT, RVG_MATCH_BLT) -__RISCV_INSN_FUNCS(bge, RVG_MASK_BGE, RVG_MATCH_BGE) -__RISCV_INSN_FUNCS(bltu, RVG_MASK_BLTU, RVG_MATCH_BLTU) -__RISCV_INSN_FUNCS(bgeu, RVG_MASK_BGEU, RVG_MATCH_BGEU) -__RISCV_INSN_FUNCS(c_beqz, RVC_MASK_C_BEQZ, RVC_MATCH_C_BEQZ) -__RISCV_INSN_FUNCS(c_bnez, RVC_MASK_C_BNEZ, RVC_MATCH_C_BNEZ) -__RISCV_INSN_FUNCS(c_ebreak, RVC_MASK_C_EBREAK, RVC_MATCH_C_EBREAK) -__RISCV_INSN_FUNCS(ebreak, RVG_MASK_EBREAK, RVG_MATCH_EBREAK) -__RISCV_INSN_FUNCS(sret, RVG_MASK_SRET, RVG_MATCH_SRET) -__RISCV_INSN_FUNCS(fence, RVG_MASK_FENCE, RVG_MATCH_FENCE); +/* Compressed instruction types */ + +/* CR-Type Instructions */ +#define __RISCV_CRTYPE_FUNCS(name, upper_name, opcode) \ +static __always_inline bool rv##name(u8 rd, u8 rs) \ +{ \ + return rv_cr_insn(RVC_FUNCT4_##upper_name, rd, rs, \ + RVC_OPCODE_##opcode); \ +} + +#define __RISCV_CR_ZERO_RSTYPE_FUNCS(name, upper_name, opcode) \ +static __always_inline bool rv##name(u8 rs1) \ +{ \ + return rv_cr_insn(RVC_FUNCT4_##upper_name, rs1, RV_REG_ZERO, \ + RVC_OPCODE_##opcode); \ +} + +/* CI-Type Instructions */ +#define __RISCV_CITYPE_FUNCS(name, upper_name, opcode) \ +static __always_inline bool rv##name(u8 rd, u32 imm) \ +{ \ + u32 imm_hi = RV##upper_name##_IMM_HI(imm); \ + u32 imm_lo = RV##upper_name##_IMM_LO(imm); \ + return rv_ci_insn(RVC_FUNCT3_##upper_name, imm_hi, rd, \ + imm_lo, RVC_OPCODE_##opcode); \ +} + 
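A minimal usage sketch (illustrative only, not part of this hunk): each __RVG_INSN_FUNCS() invocation further down expands the *TYPE_FUNCS generators above into an rv_<name>() encoder plus a riscv_insn_is_<name>() decoder, so a caller would combine them with the extract helpers roughly like this, assuming the generated rv_addi()/riscv_insn_is_addi() helpers and the RV_REG_* names from the new asm/reg.h:

	/*
	 * Encode: addi t0, sp, 16.  Note the generators above are declared
	 * to return bool; they would need to return u32/u16 for the full
	 * encoding to survive the assignment below.
	 */
	u32 insn = rv_addi(RV_REG_T0, RV_REG_SP, 16);

	/* Decode: check the MASK/MATCH pair, then pull out the fields. */
	if (riscv_insn_is_addi(insn))
		pr_info("addi rd=%u rs1=%u imm=%d\n",
			riscv_insn_extract_rd(insn),
			riscv_insn_extract_rs1(insn),
			riscv_insn_extract_itype_imm(insn));
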
+#define __RISCV_CI_SPTYPE_FUNCS(name, upper_name, opcode) \ +static __always_inline bool rv##name(u32 imm) \ +{ \ + u32 imm_hi = RV##upper_name##_IMM_HI(imm); \ + u32 imm_lo = RV##upper_name##_IMM_LO(imm); \ + return rv_ci_insn(RVC_FUNCT3_##upper_name, imm_hi, 2, \ + imm_lo, RVC_OPCODE_##opcode); \ +} + +/* CSS-Type Instructions */ +#define __RISCV_CSSTYPE_FUNCS(name, upper_name, opcode) \ +static __always_inline bool rv##name(u32 imm, u8 rs2) \ +{ \ + imm = RV##upper_name##_IMM(imm); \ + return rv_css_insn(RVC_FUNCT3_##upper_name, imm, rs2, \ + RVC_OPCODE_##opcode); \ +} + +/* CIW-Type Instructions */ +#define __RISCV_CIWTYPE_FUNCS(name, upper_name, opcode) \ +static __always_inline bool rv##name(u8 rd, u32 imm) \ +{ \ + imm = RV##upper_name##_IMM(imm); \ + return rv_ciw_insn(RVC_FUNCT3_##upper_name, imm, rd, \ + RVC_OPCODE_##opcode); \ +} + +/* CL-Type Instructions */ +#define __RISCV_CLTYPE_FUNCS(name, upper_name, opcode) \ +static __always_inline bool rv##name(u8 rd, u32 imm, u8 rs1) \ +{ \ + u32 imm_hi = RV##upper_name##_IMM_HI(imm); \ + u32 imm_lo = RV##upper_name##_IMM_LO(imm); \ + return rv_cl_insn(RVC_FUNCT3_##upper_name, imm_hi, rs1, rd, \ + imm_lo, RVC_OPCODE_##opcode); \ +} + +/* CS-Type Instructions */ +#define __RISCV_CSTYPE_FUNCS(name, upper_name, opcode) \ +static __always_inline bool rv##name(u8 rs1, u32 imm, u8 rs2) \ +{ \ + u32 imm_hi = RV##upper_name##_IMM_HI(imm); \ + u32 imm_lo = RV##upper_name##_IMM_LO(imm); \ + return rv_cs_insn(RVC_FUNCT3_##upper_name, imm_hi, rs1, imm_lo, \ + rs2, RVC_OPCODE_##opcode); \ +} + +/* CA-Type Instructions */ +#define __RISCV_CATYPE_FUNCS(name, upper_name, opcode) \ +static __always_inline bool rv##name(u8 rd, u8 rs2) \ +{ \ + return rv_ca_insn(RVC_FUNCT6_##upper_name, rd, \ + RVC_FUNCT2_##upper_name, rs2, \ + RVC_OPCODE_##opcode); \ +} + +/* CB-Type Instructions */ +#define __RISCV_CBTYPE_FUNCS(name, upper_name, opcode) \ +static __always_inline bool rv##name(u8 rd, u32 imm) \ +{ \ + u32 imm_hi = RV##upper_name##_IMM_HI(imm); \ + u32 imm_lo = RV##upper_name##_IMM_LO(imm); \ + return rv_cb_insn(RVC_FUNCT3_##upper_name, imm_hi, rd, imm_lo, \ + RVC_OPCODE_##opcode); \ +} + +#define __RISCV_CB_FUNCT2TYPE_FUNCS(name, upper_name, opcode) \ +static __always_inline bool rv##name(u8 rd, u32 imm) \ +{ \ + u32 imm_hi = RV##upper_name##_IMM_HI(imm); \ + u32 imm_lo = RV##upper_name##_IMM_LO(imm); \ + imm_hi = (imm_hi << 2) | RVC_FUNCT2_##upper_name; \ + return rv_cb_insn(RVC_FUNCT3_##upper_name, imm_hi, rd, imm_lo, \ + RVC_OPCODE_##opcode); \ +} + +/* CJ-Type Instructions */ +#define __RISCV_CJTYPE_FUNCS(name, upper_name, opcode) \ +static __always_inline bool rv##name(u32 imm) \ +{ \ + imm = RV##upper_name##_IMM(imm); \ + return rv_cj_insn(RVC_FUNCT3_##upper_name, imm, \ + RVC_OPCODE_##opcode); \ +} + +/* CEBREAK instruction */ +#define __RISCV_CEBREAKTYPE_FUNCS(name, upper_name, opcode) \ +static __always_inline bool rv##name(u32 imm) \ +{ \ + return RVC_MATCH_C_EBREAK; \ +} + +/* CNOP instruction */ +#define __RISCV_CNOPTYPE_FUNCS(name, upper_name, opcode) \ +static __always_inline bool rv##name(u32 imm) \ +{ \ + return RVC_MATCH_C_NOP; \ +} + +#define __RVC_INSN_IS_DEFAULTTYPE(name, upper_name) \ +static __always_inline bool riscv_insn_is_##name(u32 code) \ +{ \ + BUILD_BUG_ON(~(RVC_MASK_##upper_name) & (RVC_MATCH_##upper_name)); \ + return (code & (RVC_MASK_##upper_name)) == (RVC_MATCH_##upper_name); \ +} + +#define __RVC_INSN_IS_NON_ZERO_RS1_RDTYPE(name, upper_name) \ +static __always_inline bool riscv_insn_is_##name(u32 code) \ +{ \ + 
BUILD_BUG_ON(~(RVC_MASK_##upper_name) & (RVC_MATCH_##upper_name)); \ + return ((code & (RVC_MASK_##upper_name)) == (RVC_MATCH_##upper_name)) \ + && (RVC_X(code, RVC_C0_RS1_OPOFF, RV_STANDARD_REG_MASK) != 0); \ +} + +#define __RVC_INSN_IS_NON_ZERO_TWO_RDTYPE(name, upper_name) \ +static __always_inline bool riscv_insn_is_##name(u32 code) \ +{ \ + BUILD_BUG_ON(~(RVC_MASK_##upper_name) & (RVC_MATCH_##upper_name)); \ + return ((code & (RVC_MASK_##upper_name)) == (RVC_MATCH_##upper_name)) \ + && (RVC_X(code, RVC_C0_RS1_OPOFF, RV_STANDARD_REG_MASK) != 0) \ + && (RVC_X(code, RVC_C0_RS1_OPOFF, RV_STANDARD_REG_MASK) != 2); \ +} + +#define __RVC_INSN_IS_NON_ZERO_RD_RS2TYPE(name, upper_name) \ +static __always_inline bool riscv_insn_is_##name(u32 code) \ +{ \ + BUILD_BUG_ON(~(RVC_MASK_##upper_name) & (RVC_MATCH_##upper_name)); \ + return ((code & (RVC_MASK_##upper_name)) == (RVC_MATCH_##upper_name)) \ + && (RVC_X(code, RVC_C0_RS1_OPOFF, RV_STANDARD_REG_MASK) != 0) \ + && (RVC_X(code, RVC_C0_RD_OPOFF, RV_STANDARD_REG_MASK) != 0); \ +} + +#define __RVC_INSN_FUNCS(name, upper_name, type, opcode, equality_type) \ +__RVC_INSN_IS_##equality_type##TYPE(name, upper_name) \ +__RISCV_##type##TYPE_FUNCS(name, upper_name, opcode) /* special case to catch _any_ system instruction */ static __always_inline bool riscv_insn_is_system(u32 code) @@ -278,61 +1915,196 @@ static __always_inline bool riscv_insn_is_branch(u32 code) #define RV_X(X, s, mask) (((X) >> (s)) & (mask)) #define RVC_X(X, s, mask) RV_X(X, s, mask) +#define RV_EXTRACT_RS1_REG(x) \ + ({typeof(x) x_ = (x); \ + (RV_X(x_, RVG_RS1_OPOFF, RVG_RS1_MASK)); }) + +#define RV_EXTRACT_RS2_REG(x) \ + ({typeof(x) x_ = (x); \ + (RV_X(x_, RVG_RS2_OPOFF, RVG_RS2_MASK)); }) + #define RV_EXTRACT_RD_REG(x) \ ({typeof(x) x_ = (x); \ (RV_X(x_, RVG_RD_OPOFF, RVG_RD_MASK)); }) +#define RVC_EXTRACT_R_RS2_REG(x) \ + ({typeof(x) x_ = (x); \ + (RV_X(x_, RVC_C0_RS2_OPOFF, RV_COMPRESSED_REG_MASK)); }) + +#define RVC_EXTRACT_SA_RS2_REG(x) \ + ({typeof(x) x_ = (x); \ + (RV_X(x_, RVC_C2_RS2_OPOFF, RV_STANDARD_REG_MASK)); }) + +#define RV_EXTRACT_FUNCT3(x) \ + ({typeof(x) x_ = (x); \ + (RV_X(x_, RV_INSN_FUNCT3_OPOFF, RV_INSN_FUNCT3_MASK)); }) + #define RV_EXTRACT_UTYPE_IMM(x) \ ({typeof(x) x_ = (x); \ - (RV_X(x_, RV_U_IMM_31_12_OPOFF, RV_U_IMM_31_12_MASK)); }) + (RV_X(x_, RV_U_IMM_31_12_OPOFF, RV_U_IMM_31_12_MASK) \ + << RV_U_IMM_31_12_OFF) | \ + (RV_IMM_SIGN(x_) << RV_U_IMM_SIGN_OFF); }) #define RV_EXTRACT_JTYPE_IMM(x) \ ({typeof(x) x_ = (x); \ - (RV_X(x_, RV_J_IMM_10_1_OPOFF, RV_J_IMM_10_1_MASK) << RV_J_IMM_10_1_OFF) | \ - (RV_X(x_, RV_J_IMM_11_OPOFF, RV_J_IMM_11_MASK) << RV_J_IMM_11_OFF) | \ - (RV_X(x_, RV_J_IMM_19_12_OPOFF, RV_J_IMM_19_12_MASK) << RV_J_IMM_19_12_OFF) | \ + (RV_X(x_, RV_J_IMM_10_1_OPOFF, RV_J_IMM_10_1_MASK) \ + << RV_J_IMM_10_1_OFF) | \ + (RV_X(x_, RV_J_IMM_11_OPOFF, RV_J_IMM_11_MASK) \ + << RV_J_IMM_11_OFF) | \ + (RV_X(x_, RV_J_IMM_19_12_OPOFF, RV_J_IMM_19_12_MASK) \ + << RV_J_IMM_19_12_OFF) | \ (RV_IMM_SIGN(x_) << RV_J_IMM_SIGN_OFF); }) #define RV_EXTRACT_ITYPE_IMM(x) \ ({typeof(x) x_ = (x); \ - (RV_X(x_, RV_I_IMM_11_0_OPOFF, RV_I_IMM_11_0_MASK)) | \ + (RV_X(x_, RV_I_IMM_11_0_OPOFF, RV_I_IMM_11_0_MASK) \ + << RV_I_IMM_11_0_OFF) | \ (RV_IMM_SIGN(x_) << RV_I_IMM_SIGN_OFF); }) #define RV_EXTRACT_BTYPE_IMM(x) \ ({typeof(x) x_ = (x); \ - (RV_X(x_, RV_B_IMM_4_1_OPOFF, RV_B_IMM_4_1_MASK) << RV_B_IMM_4_1_OFF) | \ - (RV_X(x_, RV_B_IMM_10_5_OPOFF, RV_B_IMM_10_5_MASK) << RV_B_IMM_10_5_OFF) | \ + (RV_X(x_, RV_B_IMM_4_1_OPOFF, RV_B_IMM_4_1_MASK) \ + << RV_B_IMM_4_1_OFF) | \ 
+ (RV_X(x_, RV_B_IMM_10_5_OPOFF, RV_B_IMM_10_5_MASK) \ + << RV_B_IMM_10_5_OFF) | \ (RV_X(x_, RV_B_IMM_11_OPOFF, RV_B_IMM_11_MASK) << RV_B_IMM_11_OFF) | \ (RV_IMM_SIGN(x_) << RV_B_IMM_SIGN_OFF); }) #define RVC_EXTRACT_JTYPE_IMM(x) \ ({typeof(x) x_ = (x); \ - (RVC_X(x_, RVC_J_IMM_3_1_OPOFF, RVC_J_IMM_3_1_MASK) << RVC_J_IMM_3_1_OFF) | \ + (RVC_X(x_, RVC_J_IMM_3_1_OPOFF, RVC_J_IMM_3_1_MASK) \ + << RVC_J_IMM_3_1_OFF) | \ (RVC_X(x_, RVC_J_IMM_4_OPOFF, RVC_J_IMM_4_MASK) << RVC_J_IMM_4_OFF) | \ (RVC_X(x_, RVC_J_IMM_5_OPOFF, RVC_J_IMM_5_MASK) << RVC_J_IMM_5_OFF) | \ (RVC_X(x_, RVC_J_IMM_6_OPOFF, RVC_J_IMM_6_MASK) << RVC_J_IMM_6_OFF) | \ (RVC_X(x_, RVC_J_IMM_7_OPOFF, RVC_J_IMM_7_MASK) << RVC_J_IMM_7_OFF) | \ - (RVC_X(x_, RVC_J_IMM_9_8_OPOFF, RVC_J_IMM_9_8_MASK) << RVC_J_IMM_9_8_OFF) | \ - (RVC_X(x_, RVC_J_IMM_10_OPOFF, RVC_J_IMM_10_MASK) << RVC_J_IMM_10_OFF) | \ + (RVC_X(x_, RVC_J_IMM_9_8_OPOFF, RVC_J_IMM_9_8_MASK) \ + << RVC_J_IMM_9_8_OFF) | \ + (RVC_X(x_, RVC_J_IMM_10_OPOFF, RVC_J_IMM_10_MASK) \ + << RVC_J_IMM_10_OFF) | \ (RVC_IMM_SIGN(x_) << RVC_J_IMM_SIGN_OFF); }) #define RVC_EXTRACT_BTYPE_IMM(x) \ ({typeof(x) x_ = (x); \ - (RVC_X(x_, RVC_B_IMM_2_1_OPOFF, RVC_B_IMM_2_1_MASK) << RVC_B_IMM_2_1_OFF) | \ - (RVC_X(x_, RVC_B_IMM_4_3_OPOFF, RVC_B_IMM_4_3_MASK) << RVC_B_IMM_4_3_OFF) | \ - (RVC_X(x_, RVC_B_IMM_5_OPOFF, RVC_B_IMM_5_MASK) << RVC_B_IMM_5_OFF) | \ - (RVC_X(x_, RVC_B_IMM_7_6_OPOFF, RVC_B_IMM_7_6_MASK) << RVC_B_IMM_7_6_OFF) | \ - (RVC_IMM_SIGN(x_) << RVC_B_IMM_SIGN_OFF); }) + (RVC_X(x_, RVC_BZ_IMM_2_1_OPOFF, RVC_BZ_IMM_2_1_MASK) \ + << RVC_BZ_IMM_2_1_OFF) | \ + (RVC_X(x_, RVC_BZ_IMM_4_3_OPOFF, RVC_BZ_IMM_4_3_MASK) \ + << RVC_BZ_IMM_4_3_OFF) | \ + (RVC_X(x_, RVC_BZ_IMM_5_OPOFF, RVC_BZ_IMM_5_MASK) \ + << RVC_BZ_IMM_5_OFF) | \ + (RVC_X(x_, RVC_BZ_IMM_7_6_OPOFF, RVC_BZ_IMM_7_6_MASK) \ + << RVC_BZ_IMM_7_6_OFF) | \ + (RVC_IMM_SIGN(x_) << RVC_BZ_IMM_SIGN_OFF); }) #define RVG_EXTRACT_SYSTEM_CSR(x) \ - ({typeof(x) x_ = (x); RV_X(x_, RVG_SYSTEM_CSR_OFF, RVG_SYSTEM_CSR_MASK); }) + ({typeof(x) x_ = (x); \ + RV_X(x_, RVG_SYSTEM_CSR_OPOFF, RVG_SYSTEM_CSR_MASK); }) #define RVFDQ_EXTRACT_FL_FS_WIDTH(x) \ - ({typeof(x) x_ = (x); RV_X(x_, RVFDQ_FL_FS_WIDTH_OFF, \ - RVFDQ_FL_FS_WIDTH_MASK); }) + ({typeof(x) x_ = (x); RV_X(x_, RVG_FL_FS_WIDTH_OFF, \ + RVG_FL_FS_WIDTH_MASK); }) #define RVV_EXRACT_VL_VS_WIDTH(x) RVFDQ_EXTRACT_FL_FS_WIDTH(x) +/* + * Get the rd from an RVG instruction. + * + * @insn: instruction to process + * Return: immediate + */ +static inline u32 riscv_insn_extract_rd(u32 insn) +{ + return RV_EXTRACT_RD_REG(insn); +} + +/* + * Get the rs1 from an RVG instruction. + * + * @insn: instruction to process + * Return: immediate + */ +static inline u32 riscv_insn_extract_rs1(u32 insn) +{ + return RV_EXTRACT_RS1_REG(insn); +} + +/* + * Get the rs2 from an RVG instruction. + * + * @insn: instruction to process + * Return: immediate + */ +static inline u32 riscv_insn_extract_rs2(u32 insn) +{ + return RV_EXTRACT_RS2_REG(insn); +} + +/* + * Get the rs2 from a CR instruction. + * + * @insn: instruction to process + * Return: immediate + */ +static inline u32 riscv_insn_extract_cr_rs2(u32 insn) +{ + return RVC_EXTRACT_R_RS2_REG(insn); +} + +/* + * Get the rs2 from a CS or a CA instruction. + * + * @insn: instruction to process + * Return: immediate + */ +static inline u32 riscv_insn_extract_csca_rs2(u32 insn) +{ + return RVC_EXTRACT_SA_RS2_REG(insn); +} + +/* + * Get the funct3 from an RVG instruction. 
+ * + * @insn: instruction to process + * Return: immediate + */ +static inline u32 riscv_insn_extract_funct3(u32 insn) +{ + return RV_EXTRACT_FUNCT3(insn); +} + +/* + * Get the immediate from an I-type instruction. + * + * @insn: instruction to process + * Return: immediate + */ +static inline s32 riscv_insn_extract_itype_imm(u32 insn) +{ + return RV_EXTRACT_ITYPE_IMM(insn); +} + +/* + * Get the immediate from a U-type instruction. + * + * @insn: instruction to process + * Return: immediate + */ +static inline s32 riscv_insn_extract_utype_imm(u32 insn) +{ + return RV_EXTRACT_UTYPE_IMM(insn); +} + +/* + * Get the immediate from a B-type instruction. + * + * @insn: instruction to process + * Return: immediate + */ +static inline s32 riscv_insn_extract_btype_imm(u32 insn) +{ + return RV_EXTRACT_BTYPE_IMM(insn); +} + /* * Get the immediate from a J-type instruction. * @@ -344,6 +2116,70 @@ static inline s32 riscv_insn_extract_jtype_imm(u32 insn) return RV_EXTRACT_JTYPE_IMM(insn); } +/* + * Update an I-type instruction with an immediate value. + * + * @insn: pointer to the itype instruction + * @imm: the immediate to insert into the instruction + */ +static inline void riscv_insn_insert_itype_imm(u32 *insn, s32 imm) +{ + /* drop the old IMMs, all I-type IMM bits sit at 31:20 */ + *insn &= ~GENMASK(31, 20); + *insn |= (RV_X(imm, RV_I_IMM_11_0_OFF, RV_I_IMM_11_0_MASK) + << RV_I_IMM_11_0_OPOFF); +} + +/* + * Update an S-type instruction with an immediate value. + * + * @insn: pointer to the stype instruction + * @imm: the immediate to insert into the instruction + */ +static inline void riscv_insn_insert_stype_imm(u32 *insn, s32 imm) +{ + /* drop the old IMMs, all S-type IMM bits sit at 31:25 and 11:7 */ + *insn &= ~(GENMASK(31, 25) | GENMASK(11, 7)); + *insn |= (RV_X(imm, RV_S_IMM_4_0_OFF, RV_S_IMM_4_0_MASK) + << RV_S_IMM_4_0_OPOFF) | + (RV_X(imm, RV_S_IMM_11_5_OFF, RV_S_IMM_11_5_MASK) + << RV_S_IMM_11_5_OPOFF); +} + +/* + * Update a B-type instruction with an immediate value. + * + * @insn: pointer to the btype instruction + * @imm: the immediate to insert into the instruction + */ +static inline void riscv_insn_insert_btype_imm(u32 *insn, s32 imm) +{ + /* drop the old IMMs, all B-type IMM bits sit at 31:25 and 11:7 */ + *insn &= ~(GENMASK(31, 25) | GENMASK(11, 7)); + *insn |= (RV_X(imm, RV_B_IMM_4_1_OFF, RV_B_IMM_4_1_MASK) + << RV_B_IMM_4_1_OPOFF) | + (RV_X(imm, RV_B_IMM_10_5_OFF, RV_B_IMM_10_5_MASK) + << RV_B_IMM_10_5_OPOFF) | + (RV_X(imm, RV_B_IMM_11_OFF, RV_B_IMM_11_MASK) + << RV_B_IMM_11_OPOFF) | + (RV_X(imm, RV_B_IMM_SIGN_OFF, RV_B_IMM_SIGN_MASK) + << RV_B_IMM_SIGN_OPOFF); +} + +/* + * Update a U-type instruction with an immediate value. + * + * @insn: pointer to the utype instruction + * @imm: the immediate to insert into the instruction + */ +static inline void riscv_insn_insert_utype_imm(u32 *insn, s32 imm) +{ + /* drop the old IMMs, all U-type IMM bits sit at 31:12 */ + *insn &= ~GENMASK(31, 12); + *insn |= (RV_X(imm, RV_U_IMM_31_12_OFF, RV_U_IMM_31_12_MASK) + << RV_U_IMM_31_12_OPOFF); +} + /* * Update a J-type instruction with an immediate value. 
* @@ -352,14 +2188,147 @@ static inline s32 riscv_insn_extract_jtype_imm(u32 insn) */ static inline void riscv_insn_insert_jtype_imm(u32 *insn, s32 imm) { - /* drop the old IMMs, all jal IMM bits sit at 31:12 */ + /* drop the old IMMs, all J-type IMM bits sit at 31:12 */ *insn &= ~GENMASK(31, 12); - *insn |= (RV_X(imm, RV_J_IMM_10_1_OFF, RV_J_IMM_10_1_MASK) << RV_J_IMM_10_1_OPOFF) | - (RV_X(imm, RV_J_IMM_11_OFF, RV_J_IMM_11_MASK) << RV_J_IMM_11_OPOFF) | - (RV_X(imm, RV_J_IMM_19_12_OFF, RV_J_IMM_19_12_MASK) << RV_J_IMM_19_12_OPOFF) | + *insn |= (RV_X(imm, RV_J_IMM_10_1_OFF, RV_J_IMM_10_1_MASK) + << RV_J_IMM_10_1_OPOFF) | + (RV_X(imm, RV_J_IMM_11_OFF, RV_J_IMM_11_MASK) + << RV_J_IMM_11_OPOFF) | + (RV_X(imm, RV_J_IMM_19_12_OFF, RV_J_IMM_19_12_MASK) + << RV_J_IMM_19_12_OPOFF) | (RV_X(imm, RV_J_IMM_SIGN_OFF, 1) << RV_J_IMM_SIGN_OPOFF); } +/* + * Update a CI-type instruction with an immediate value. + * + * @insn: pointer to the citype instruction + * @imm_hi: the high part of the immediate to insert into the instruction + * @imm_lo: the low part of the immediate to insert into the instruction + */ +static inline void riscv_insn_insert_citype_imm(u32 *insn, u32 imm_hi, + u32 imm_lo) +{ + /* drop the old IMMs, all CI-type IMM bits sit at 12 and 6:2 */ + *insn &= ~(BIT(12) | GENMASK(6, 2)); + *insn |= (RV_X(imm_lo, RVC_I_IMM_LO_OFF, RVC_I_IMM_LO_MASK) + << RVC_I_IMM_LO_OPOFF) | + (RV_X(imm_hi, RVC_I_IMM_HI_OFF, RVC_I_IMM_HI_MASK) + << RVC_I_IMM_HI_OPOFF); +} + +/* + * Update a CSS-type instruction with an immediate value. + * + * @insn: pointer to the csstype instruction + * @imm: the immediate to insert into the instruction + */ +static inline void riscv_insn_insert_csstype_imm(u32 *insn, s32 imm) +{ + /* drop the old IMMs, all CSS-type IMM bits sit at 11:6 */ + *insn &= ~GENMASK(11, 6); + *insn |= (RV_X(imm, RVC_SS_IMM_OFF, RVC_SS_IMM_MASK) + << RVC_SS_IMM_OPOFF); +} + +/* + * Update a CIW-type instruction with an immediate value. + * + * @insn: pointer to the ciwtype instruction + * @imm: the immediate to insert into the instruction + */ +static inline void riscv_insn_insert_ciwtype_imm(u32 *insn, s32 imm) +{ + /* drop the old IMMs, all CIW-type IMM bits sit at 11:6 */ + *insn &= ~GENMASK(11, 6); + *insn |= (RV_X(imm, RVC_IW_IMM_OFF, RVC_IW_IMM_MASK) + << RVC_IW_IMM_OPOFF); +} + +/* + * Update a CL-type instruction with an immediate value. + * + * @insn: pointer to the cltype instruction + * @imm: the immediate to insert into the instruction + */ +static inline void riscv_insn_insert_cltype_imm(u32 *insn, u32 imm_hi, + u32 imm_lo) +{ + /* drop the old IMMs, all CL-type IMM bits sit at 11:6 */ + *insn &= ~GENMASK(11, 6); + *insn |= (RV_X(imm_lo, RVC_L_IMM_LO_OFF, RVC_L_IMM_LO_MASK) + << RVC_L_IMM_LO_OPOFF) | + (RV_X(imm_hi, RVC_L_IMM_HI_OFF, RVC_L_IMM_HI_MASK) + << RVC_L_IMM_HI_OPOFF); +} + +/* + * Update a CS-type instruction with an immediate value. + * + * @insn: pointer to the cstype instruction + * @imm: the immediate to insert into the instruction + */ +static inline void riscv_insn_insert_cstype_imm(u32 *insn, u32 imm_hi, + u32 imm_lo) +{ + /* drop the old IMMs, all CS-type IMM bits sit at 11:6 */ + *insn &= ~GENMASK(11, 6); + *insn |= (RV_X(imm_lo, RVC_S_IMM_LO_OFF, RVC_S_IMM_LO_MASK) + << RVC_S_IMM_LO_OPOFF) | + (RV_X(imm_hi, RVC_S_IMM_HI_OFF, RVC_S_IMM_HI_MASK) + << RVC_S_IMM_HI_OPOFF); +} + +/* + * Update an RVC BEQZ/BNEZ instruction with an immediate value. 
+ * + * @insn: pointer to the cbtype instruction + * @imm: the immediate to insert into the instruction + */ +static inline void riscv_insn_insert_cbztype_imm(u32 *insn, s32 imm) +{ + /* drop the old IMMs, all CB-type IMM bits sit at 12:10 and 6:2 */ + *insn &= ~(GENMASK(12, 10) | GENMASK(6, 2)); + *insn |= (RV_X(imm, RVC_BZ_IMM_SIGN_OFF, RVC_BZ_IMM_SIGN_MASK) + << RVC_BZ_IMM_SIGN_OPOFF) | + (RV_X(imm, RVC_BZ_IMM_4_3_OFF, RVC_BZ_IMM_4_3_MASK) + << RVC_BZ_IMM_4_3_OPOFF) | + (RV_X(imm, RVC_BZ_IMM_7_6_OFF, RVC_BZ_IMM_7_6_MASK) + << RVC_BZ_IMM_7_6_OPOFF) | + (RV_X(imm, RVC_BZ_IMM_2_1_OFF, RVC_BZ_IMM_2_1_MASK) + << RVC_BZ_IMM_2_1_OPOFF) | + (RV_X(imm, RVC_BZ_IMM_5_OFF, RVC_BZ_IMM_5_MASK) + << RVC_BZ_IMM_5_OPOFF); +} + +/* + * Update a CJ-type instruction with an immediate value. + * + * @insn: pointer to the cjtype instruction + * @imm: the immediate to insert into the instruction + */ +static inline void riscv_insn_insert_cjtype_imm(u32 *insn, s32 imm) +{ + /* drop the old IMMs, all CJ-type IMM bits sit at 12:2 */ + *insn &= ~GENMASK(12, 2); + *insn |= (RV_X(imm, RVC_J_IMM_SIGN_OFF, RVC_J_IMM_SIGN_MASK) + << RVC_J_IMM_SIGN_OPOFF) | + (RV_X(imm, RVC_J_IMM_4_OFF, RVC_J_IMM_4_MASK) + << RVC_J_IMM_4_OPOFF) | + (RV_X(imm, RVC_J_IMM_9_8_OFF, RVC_J_IMM_9_8_MASK) + << RVC_J_IMM_9_8_OPOFF) | + (RV_X(imm, RVC_J_IMM_10_OFF, RVC_J_IMM_10_MASK) + << RVC_J_IMM_10_OPOFF) | + (RV_X(imm, RVC_J_IMM_6_OFF, RVC_J_IMM_6_MASK) + << RVC_J_IMM_6_OPOFF) | + (RV_X(imm, RVC_J_IMM_7_OFF, RVC_J_IMM_7_MASK) + << RVC_J_IMM_7_OPOFF) | + (RV_X(imm, RVC_J_IMM_3_1_OFF, RVC_J_IMM_3_1_MASK) + << RVC_J_IMM_3_1_OPOFF) | + (RV_X(imm, RVC_J_IMM_5_OFF, RVC_J_IMM_5_MASK) + << RVC_J_IMM_5_OPOFF); +} + /* * Put together one immediate from a U-type and I-type instruction pair. * @@ -372,7 +2341,8 @@ static inline void riscv_insn_insert_jtype_imm(u32 *insn, s32 imm) * @itype_insn: instruction * Return: combined immediate */ -static inline s32 riscv_insn_extract_utype_itype_imm(u32 utype_insn, u32 itype_insn) +static inline s32 riscv_insn_extract_utype_itype_imm(u32 utype_insn, + u32 itype_insn) { s32 imm; @@ -391,20 +2361,370 @@ static inline s32 riscv_insn_extract_utype_itype_imm(u32 utype_insn, u32 itype_i * * This also takes into account that both separate immediates are * considered as signed values, so if the I-type immediate becomes - * negative (BIT(11) set) the U-type part gets adjusted. + * negative (BIT(11) aka 0x800 set) the U-type part gets adjusted. * * @utype_insn: pointer to the utype instruction of the pair * @itype_insn: pointer to the itype instruction of the pair * @imm: the immediate to insert into the two instructions */ -static inline void riscv_insn_insert_utype_itype_imm(u32 *utype_insn, u32 *itype_insn, s32 imm) +static inline void riscv_insn_insert_utype_itype_imm(u32 *utype_insn, + u32 *itype_insn, s32 imm) { /* drop possible old IMM values */ - *utype_insn &= ~(RV_U_IMM_31_12_MASK); + *utype_insn &= ~(RV_U_IMM_31_12_MASK << RV_U_IMM_31_12_OPOFF); *itype_insn &= ~(RV_I_IMM_11_0_MASK << RV_I_IMM_11_0_OPOFF); /* add the adapted IMMs */ - *utype_insn |= (imm & RV_U_IMM_31_12_MASK) + ((imm & BIT(11)) << 1); + *utype_insn |= + ((imm + 0x800) & (RV_U_IMM_31_12_MASK << RV_U_IMM_31_12_OPOFF)); *itype_insn |= ((imm & RV_I_IMM_11_0_MASK) << RV_I_IMM_11_0_OPOFF); } + +static inline bool rvc_enabled(void) +{ + return IS_ENABLED(CONFIG_RISCV_ISA_C); +} + +/* RISC-V instruction formats. 
*/ + +static inline u32 rv_r_insn(u8 funct7, u8 rs2, u8 rs1, u8 funct3, u8 rd, + u8 opcode) +{ + return (funct7 << RV_INSN_FUNCT7_OPOFF) | (rs2 << RV_INSN_RS2_OPOFF) | + (rs1 << RV_INSN_RS1_OPOFF) | (funct3 << RV_INSN_FUNCT3_OPOFF) | + (rd << RV_INSN_RD_OPOFF) | opcode; +} + +static inline u32 rv_i_insn(u16 imm11_0, u8 rs1, u8 funct3, u8 rd, u8 opcode) +{ + u32 imm = 0; + + riscv_insn_insert_itype_imm(&imm, imm11_0); + return imm | (rs1 << RV_INSN_RS1_OPOFF) | + (funct3 << RV_INSN_FUNCT3_OPOFF) | (rd << RV_INSN_RD_OPOFF) | + opcode; +} + +static inline u32 rv_s_insn(u16 imm11_0, u8 rs2, u8 rs1, u8 funct3, u8 opcode) +{ + u32 imm = 0; + + riscv_insn_insert_stype_imm(&imm, imm11_0); + return imm | (rs2 << RV_INSN_RS2_OPOFF) | (rs1 << RV_INSN_RS1_OPOFF) | + (funct3 << RV_INSN_FUNCT3_OPOFF) | opcode; +} + +static inline u32 rv_b_insn(u16 imm12_1, u8 rs2, u8 rs1, u8 funct3, u8 opcode) +{ + u32 imm = 0; + + riscv_insn_insert_btype_imm(&imm, imm12_1); + return imm | (rs2 << RV_INSN_RS2_OPOFF) | (rs1 << RV_INSN_RS1_OPOFF) | + (funct3 << RV_INSN_FUNCT3_OPOFF) | opcode; +} + +static inline u32 rv_u_insn(u32 imm31_12, u8 rd, u8 opcode) +{ + u32 imm = 0; + + riscv_insn_insert_utype_imm(&imm, imm31_12); + return imm | (rd << RV_INSN_RD_OPOFF) | opcode; +} + +static inline u32 rv_j_insn(u32 imm20_1, u8 rd, u8 opcode) +{ + u32 imm = 0; + + riscv_insn_insert_jtype_imm(&imm, imm20_1); + return imm | (rd << RV_INSN_RD_OPOFF) | opcode; +} + +static inline u32 rv_amo_insn(u8 funct5, u8 aq, u8 rl, u8 rs2, u8 rs1, + u8 funct3, u8 rd, u8 opcode) +{ + u8 funct7 = (funct5 << RV_INSN_FUNCT5_IN_OPOFF) | + (aq << RV_INSN_AQ_IN_OPOFF) | (rl << RV_INSN_RL_IN_OPOFF); + + return rv_r_insn(funct7, rs2, rs1, funct3, rd, opcode); +} + +/* RISC-V compressed instruction formats. */ + +static inline u16 rv_cr_insn(u8 funct4, u8 rd, u8 rs2, u8 op) +{ + return (funct4 << RVC_INSN_FUNCT4_OPOFF) | (rd << RVC_C2_RD_OPOFF) | + (rs2 << RVC_C2_RS2_OPOFF) | op; +} + +static inline u16 rv_ci_insn(u8 funct3, u32 imm_hi, u8 rd, u32 imm_lo, u8 op) +{ + u32 imm; + + imm = (RV_X(imm_lo, RVC_I_IMM_LO_OFF, RVC_I_IMM_LO_MASK) + << RVC_I_IMM_LO_OPOFF) | + (RV_X(imm_hi, RVC_I_IMM_HI_OFF, RVC_I_IMM_HI_MASK) + << RVC_I_IMM_HI_OPOFF); + + return imm | (funct3 << RVC_INSN_FUNCT3_OPOFF) | + (rd << RVC_C1_RD_OPOFF) | op; +} + +static inline u16 rv_css_insn(u8 funct3, u32 uimm, u8 rs2, u8 op) +{ + u32 imm; + + imm = (RV_X(uimm, RVC_SS_IMM_OFF, RVC_SS_IMM_MASK) << RVC_SS_IMM_OPOFF); + return imm | (funct3 << RVC_INSN_FUNCT3_OPOFF) | + (rs2 << RVC_C2_RS2_OPOFF) | op; +} + +static inline u16 rv_ciw_insn(u8 funct3, u32 uimm, u8 rd, u8 op) +{ + u32 imm; + + imm = (RV_X(uimm, RVC_IW_IMM_OFF, RVC_IW_IMM_MASK) << RVC_IW_IMM_OPOFF); + return imm | (funct3 << RVC_INSN_FUNCT3_OPOFF) | + (rd << RVC_C0_RD_OPOFF) | op; +} + +static inline u16 rv_cl_insn(u8 funct3, u32 imm_hi, u8 rs1, u8 rd, u32 imm_lo, + u8 op) +{ + u32 imm; + + imm = (RV_X(imm_lo, RVC_L_IMM_LO_OFF, RVC_L_IMM_LO_MASK) + << RVC_L_IMM_LO_OPOFF) | + (RV_X(imm_hi, RVC_L_IMM_HI_OFF, RVC_L_IMM_HI_MASK) + << RVC_L_IMM_HI_OPOFF); + return imm | (funct3 << RVC_INSN_FUNCT3_OPOFF) | + (rs1 << RVC_C0_RS1_OPOFF) | (rd << RVC_C0_RD_OPOFF) | op; +} + +static inline u16 rv_cs_insn(u8 funct3, u32 imm_hi, u8 rs1, u32 imm_lo, u8 rs2, + u8 op) +{ + u32 imm; + + imm = (RV_X(imm_lo, RVC_S_IMM_LO_OFF, RVC_S_IMM_LO_MASK) + << RVC_S_IMM_LO_OPOFF) | + (RV_X(imm_hi, RVC_S_IMM_HI_OFF, RVC_S_IMM_HI_MASK) + << RVC_S_IMM_HI_OPOFF); + return imm | (funct3 << RVC_INSN_FUNCT3_OPOFF) | + (rs1 << RVC_C0_RS1_OPOFF) | (rs2 << 
RVC_C0_RS2_OPOFF) | op; +} + +static inline u16 rv_ca_insn(u8 funct6, u8 rd, u8 funct2, u8 rs2, u8 op) +{ + return (funct6 << RVC_INSN_FUNCT6_OPOFF) | (rd << RVC_C1_RD_OPOFF) | + (funct2 << RVC_INSN_FUNCT2_CA_OPOFF) | + (rs2 << RVC_C0_RS2_OPOFF) | op; +} + +static inline u16 rv_cb_insn(u8 funct3, u32 off_hi, u8 rd, u32 off_lo, u8 op) +{ + u32 imm; + + imm = (RV_X(off_lo, RVC_B_IMM_LO_OFF, RVC_B_IMM_LO_MASK) + << RVC_B_IMM_LO_OPOFF) | + (RV_X(off_hi, RVC_B_IMM_HI_OFF, RVC_B_IMM_HI_MASK) + << RVC_B_IMM_HI_OPOFF); + return imm | (funct3 << RVC_INSN_FUNCT3_OPOFF) | + (rd << RVC_C1_RD_OPOFF) | op; +} + +static inline u16 rv_cj_insn(u8 funct3, u32 uimm, u8 op) +{ + u32 imm; + + imm = (RV_X(uimm, RVC_J_IMM_OFF, RVC_J_IMM_MASK) << RVC_J_IMM_OPOFF); + return imm | (funct3 << RVC_INSN_FUNCT3_OPOFF) | op; +} + +/* RVG instructions */ +__RVG_INSN_FUNCS(lui, LUI, U) +__RVG_INSN_FUNCS(auipc, AUIPC, U) +__RVG_INSN_FUNCS(jal, JAL, J) +__RVG_INSN_FUNCS(jalr, JALR, I) +__RVG_INSN_FUNCS(beq, BEQ, B) +__RVG_INSN_FUNCS(bne, BNE, B) +__RVG_INSN_FUNCS(blt, BLT, B) +__RVG_INSN_FUNCS(bge, BGE, B) +__RVG_INSN_FUNCS(bltu, BLTU, B) +__RVG_INSN_FUNCS(bgeu, BGEU, B) +__RVG_INSN_FUNCS(lb, LB, I) +__RVG_INSN_FUNCS(lh, LH, I) +__RVG_INSN_FUNCS(lw, LW, I) +__RVG_INSN_FUNCS(lbu, LBU, I) +__RVG_INSN_FUNCS(lhu, LHU, I) +__RVG_INSN_FUNCS(sb, SB, S) +__RVG_INSN_FUNCS(sh, SH, S) +__RVG_INSN_FUNCS(sw, SW, S) +__RVG_INSN_FUNCS(addi, ADDI, I) +__RVG_INSN_FUNCS(slti, SLTI, I) +__RVG_INSN_FUNCS(sltiu, SLTIU, I) +__RVG_INSN_FUNCS(xori, XORI, I) +__RVG_INSN_FUNCS(ori, ORI, I) +__RVG_INSN_FUNCS(andi, ANDI, I) +__RVG_INSN_FUNCS(slli, SLLI, I) +__RVG_INSN_FUNCS(srli, SRLI, I) +__RVG_INSN_FUNCS(srai, SRAI, I) +__RVG_INSN_FUNCS(add, ADD, R) +__RVG_INSN_FUNCS(sub, SUB, R) +__RVG_INSN_FUNCS(sll, SLL, R) +__RVG_INSN_FUNCS(slt, SLT, R) +__RVG_INSN_FUNCS(sltu, SLTU, R) +__RVG_INSN_FUNCS(xor, XOR, R) +__RVG_INSN_FUNCS(srl, SRL, R) +__RVG_INSN_FUNCS(sra, SRA, R) +__RVG_INSN_FUNCS(or, OR, R) +__RVG_INSN_FUNCS(and, AND, R) +__RVG_INSN_FUNCS(nop, NOP, NOP) +__RVG_INSN_FUNCS(fence, FENCE, FENCE) +__RVG_INSN_FUNCS(fencetso, FENCETSO, FENCETSO) +__RVG_INSN_FUNCS(pause, PAUSE, PAUSE) +__RVG_INSN_FUNCS(ecall, ECALL, ECALL) +__RVG_INSN_FUNCS(ebreak, EBREAK, EBREAK) +/* Extra Instructions */ +__RVG_INSN_FUNCS(bgtu, BLTU, REV_B) +__RVG_INSN_FUNCS(bleu, BGEU, REV_B) +__RVG_INSN_FUNCS(bgt, BLT, REV_B) +__RVG_INSN_FUNCS(ble, BGE, REV_B) +/* F Standard Extension */ +__RVG_INSN_FUNCS(flw, FLW, I) +__RVG_INSN_FUNCS(fsw, FSW, S) +/* D Standard Extension */ +__RVG_INSN_FUNCS(fld, FLD, I) +__RVG_INSN_FUNCS(fsd, FSD, S) +/* Q Standard Extension */ +__RVG_INSN_FUNCS(flq, FLQ, I) +__RVG_INSN_FUNCS(fsq, FSQ, S) +/* Zicsr Standard Extension */ +__RVG_INSN_FUNCS(csrrw, CSRRW, I) +__RVG_INSN_FUNCS(csrrs, CSRRS, I) +__RVG_INSN_FUNCS(csrrc, CSRRC, I) +__RVG_INSN_FUNCS(csrrwi, CSRRWI, I) +__RVG_INSN_FUNCS(csrrsi, CSRRSI, I) +__RVG_INSN_FUNCS(csrrci, CSRRCI, I) +/* M Standard Extension */ +__RVG_INSN_FUNCS(mul, MUL, R) +__RVG_INSN_FUNCS(mulh, MULH, R) +__RVG_INSN_FUNCS(mulhsu, MULHSU, R) +__RVG_INSN_FUNCS(mulhu, MULHU, R) +__RVG_INSN_FUNCS(div, DIV, R) +__RVG_INSN_FUNCS(divu, DIVU, R) +__RVG_INSN_FUNCS(rem, REM, R) +__RVG_INSN_FUNCS(remu, REMU, R) +/* A Standard Extension */ +__RVG_INSN_FUNCS(lr_w, LR_W, AMO) +__RVG_INSN_FUNCS(sc_w, SC_W, AMO) +__RVG_INSN_FUNCS(amoswap_w, AMOSWAP_W, AMO) +__RVG_INSN_FUNCS(amoadd_w, AMOADD_W, AMO) +__RVG_INSN_FUNCS(amoxor_w, AMOXOR_W, AMO) +__RVG_INSN_FUNCS(amoand_w, AMOAND_W, AMO) +__RVG_INSN_FUNCS(amoor_w, AMOOR_W, AMO) 
+__RVG_INSN_FUNCS(amomin_w, AMOMIN_W, AMO) +__RVG_INSN_FUNCS(amomax_w, AMOMAX_W, AMO) +__RVG_INSN_FUNCS(amominu_w, AMOMINU_W, AMO) +__RVG_INSN_FUNCS(amomaxu_w, AMOMAXU_W, AMO) + +/* RVG 64-bit only instructions*/ +__RVG_INSN_FUNCS(lwu, LWU, I) +__RVG_INSN_FUNCS(ld, LD, I) +__RVG_INSN_FUNCS(sd, SD, S) +__RVG_INSN_FUNCS(addiw, ADDIW, I) +__RVG_INSN_FUNCS(slliw, SLLIW, I) +__RVG_INSN_FUNCS(srliw, SRLIW, I) +__RVG_INSN_FUNCS(sraiw, SRAIW, I) +__RVG_INSN_FUNCS(addw, ADDW, R) +__RVG_INSN_FUNCS(subw, SUBW, R) +__RVG_INSN_FUNCS(sllw, SLLW, R) +__RVG_INSN_FUNCS(srlw, SRLW, R) +__RVG_INSN_FUNCS(sraw, SRAW, R) +/* M Standard Extension */ +__RVG_INSN_FUNCS(divw, DIVW, R) +__RVG_INSN_FUNCS(mulw, MULW, R) +__RVG_INSN_FUNCS(divuw, DIVUW, R) +__RVG_INSN_FUNCS(remw, REMW, R) +__RVG_INSN_FUNCS(remuw, REMUW, R) +/* A Standard Extension */ +__RVG_INSN_FUNCS(lr_d, LR_D, AMO) +__RVG_INSN_FUNCS(sc_d, SC_D, AMO) +__RVG_INSN_FUNCS(amoswap_d, AMOSWAP_D, AMO) +__RVG_INSN_FUNCS(amoadd_d, AMOADD_D, AMO) +__RVG_INSN_FUNCS(amoxor_d, AMOXOR_D, AMO) +__RVG_INSN_FUNCS(amoand_d, AMOAND_D, AMO) +__RVG_INSN_FUNCS(amoor_d, AMOOR_D, AMO) +__RVG_INSN_FUNCS(amomin_d, AMOMIN_D, AMO) +__RVG_INSN_FUNCS(amomax_d, AMOMAX_D, AMO) +__RVG_INSN_FUNCS(amominu_d, AMOMINU_D, AMO) +__RVG_INSN_FUNCS(amomaxu_d, AMOMAXU_D, AMO) +/* Privileged instructions */ +__RISCV_INSN_FUNCS(sret, RV_MASK_SRET, RV_MATCH_SRET) + +/* RVC Quadrant 0 instructions */ +__RVC_INSN_FUNCS(c_addi4spn, C_ADDI4SPN, CIW, C0, DEFAULT) +__RVC_INSN_FUNCS(c_fld, C_FLD, CL, C0, DEFAULT) +__RVC_INSN_FUNCS(c_lw, C_LW, CL, C0, DEFAULT) +__RVC_INSN_FUNCS(c_fsd, C_FSD, CS, C0, DEFAULT) +__RVC_INSN_FUNCS(c_sw, C_SW, CS, C0, DEFAULT) +/* RVC Quadrant 1 instructions */ +__RVC_INSN_FUNCS(c_nop, C_NOP, CNOP, C1, DEFAULT) +__RVC_INSN_FUNCS(c_addi, C_ADDI, CI, C1, NON_ZERO_RS1_RD) +__RVC_INSN_FUNCS(c_li, C_LI, CI, C1, NON_ZERO_RS1_RD) +__RVC_INSN_FUNCS(c_addi16sp, C_ADDI16SP, CI_SP, C1, DEFAULT) +__RVC_INSN_FUNCS(c_lui, C_LUI, CI, C1, NON_ZERO_TWO_RD) +__RVC_INSN_FUNCS(c_srli, C_SRLI, CB, C1, DEFAULT) +__RVC_INSN_FUNCS(c_srai, C_SRAI, CB, C1, DEFAULT) +__RVC_INSN_FUNCS(c_andi, C_ANDI, CB, C1, DEFAULT) +__RVC_INSN_FUNCS(c_sub, C_SUB, CA, C1, DEFAULT) +__RVC_INSN_FUNCS(c_or, C_OR, CA, C1, DEFAULT) +__RVC_INSN_FUNCS(c_and, C_AND, CA, C1, DEFAULT) +__RVC_INSN_FUNCS(c_xor, C_XOR, CA, C1, DEFAULT) +__RVC_INSN_FUNCS(c_j, C_J, CJ, C1, DEFAULT) +__RVC_INSN_FUNCS(c_beqz, C_BEQZ, CB, C1, DEFAULT) +__RVC_INSN_FUNCS(c_bnez, C_BNEZ, CB, C1, DEFAULT) +/* RVC Quadrant 2 instructions */ +__RVC_INSN_FUNCS(c_slli, C_SLLI, CI, C2, NON_ZERO_RS1_RD) +__RVC_INSN_FUNCS(c_fldsp, C_FLDSP, CI, C2, DEFAULT) +__RVC_INSN_FUNCS(c_lwsp, C_LWSP, CI, C2, NON_ZERO_RS1_RD) +__RVC_INSN_FUNCS(c_jr, C_JR, CR_ZERO_RS, C2, NON_ZERO_RS1_RD) +__RVC_INSN_FUNCS(c_mv, C_MV, CR, C2, NON_ZERO_RS1_RD) +__RVC_INSN_FUNCS(c_ebreak, C_EBREAK, CEBREAK, C2, DEFAULT) +__RVC_INSN_FUNCS(c_jalr, C_JALR, CR_ZERO_RS, C2, NON_ZERO_RS1_RD) +__RVC_INSN_FUNCS(c_add, C_ADD, CR, C2, NON_ZERO_RS1_RD) +__RVC_INSN_FUNCS(c_fsdsp, C_FSDSP, CSS, C2, DEFAULT) +__RVC_INSN_FUNCS(c_swsp, C_SWSP, CSS, C2, DEFAULT) + +#if __riscv_xlen == 32 +/* RV32C-only instructions */ +__RVC_INSN_FUNCS(c_flw, C_FLW, CL, C0, DEFAULT) +__RVC_INSN_FUNCS(c_fsw, C_FSW, CS, C0, DEFAULT) +__RVC_INSN_FUNCS(c_jal, C_JAL, CJ, C1, DEFAULT) +__RVC_INSN_FUNCS(c_flwsp, C_FLWSP, CI, C2, DEFAULT) +__RVC_INSN_FUNCS(c_fswsp, C_FSWSP, CSS, C2, DEFAULT) +#else +#define riscv_insn_is_c_flw(opcode) 0 +#define riscv_insn_is_c_fsw(opcode) 0 +#define riscv_insn_is_c_jal(opcode) 0 +#define 
riscv_insn_is_c_flwsp(opcode) 0 +#define riscv_insn_is_c_fswsp(opcode) 0 +#endif + +#if __riscv_xlen == 64 +/* RV64C-only instructions */ +__RVC_INSN_FUNCS(c_ld, C_LD, CL, C0, DEFAULT) +__RVC_INSN_FUNCS(c_sd, C_SD, CS, C0, DEFAULT) +__RVC_INSN_FUNCS(c_addiw, C_ADDIW, CI, C1, NON_ZERO_RS1_RD) +__RVC_INSN_FUNCS(c_subw, C_SUBW, CA, C1, DEFAULT) +__RVC_INSN_FUNCS(c_ldsp, C_LDSP, CI, C2, NON_ZERO_RS1_RD) +__RVC_INSN_FUNCS(c_sdsp, C_SDSP, CSS, C2, DEFAULT) +#else +#define riscv_insn_is_c_ld(opcode) 0 +#define riscv_insn_is_c_sd(opcode) 0 +#define riscv_insn_is_c_addi(opcode) 0 +#define riscv_insn_is_c_subw(opcode) 0 +#define riscv_insn_is_c_ldsp(opcode) 0 +#define riscv_insn_is_c_sdsp(opcode) 0 +#endif + #endif /* _ASM_RISCV_INSN_H */ diff --git a/arch/riscv/include/asm/reg.h b/arch/riscv/include/asm/reg.h new file mode 100644 index 000000000000..b653d5e4906d --- /dev/null +++ b/arch/riscv/include/asm/reg.h @@ -0,0 +1,88 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Definitions of RISC-V registers + * + * Copyright (c) 2023 Rivos Inc + * + */ + +#ifndef _ASM_RISCV_REG_H +#define _ASM_RISCV_REG_H + +#include +#include + +enum { + RV_REG_ZERO = 0, /* The constant value 0 */ + RV_REG_RA = 1, /* Return address */ + RV_REG_SP = 2, /* Stack pointer */ + RV_REG_GP = 3, /* Global pointer */ + RV_REG_TP = 4, /* Thread pointer */ + RV_REG_T0 = 5, /* Temporaries */ + RV_REG_T1 = 6, + RV_REG_T2 = 7, + RV_REG_FP = 8, /* Saved register/frame pointer */ + RV_REG_S1 = 9, /* Saved register */ + RV_REG_A0 = 10, /* Function argument/return values */ + RV_REG_A1 = 11, /* Function arguments */ + RV_REG_A2 = 12, + RV_REG_A3 = 13, + RV_REG_A4 = 14, + RV_REG_A5 = 15, + RV_REG_A6 = 16, + RV_REG_A7 = 17, + RV_REG_S2 = 18, /* Saved registers */ + RV_REG_S3 = 19, + RV_REG_S4 = 20, + RV_REG_S5 = 21, + RV_REG_S6 = 22, + RV_REG_S7 = 23, + RV_REG_S8 = 24, + RV_REG_S9 = 25, + RV_REG_S10 = 26, + RV_REG_S11 = 27, + RV_REG_T3 = 28, /* Temporaries */ + RV_REG_T4 = 29, + RV_REG_T5 = 30, + RV_REG_T6 = 31, +}; + +static inline bool is_creg(u8 reg) +{ + return (1 << reg) & (BIT(RV_REG_FP) | + BIT(RV_REG_S1) | + BIT(RV_REG_A0) | + BIT(RV_REG_A1) | + BIT(RV_REG_A2) | + BIT(RV_REG_A3) | + BIT(RV_REG_A4) | + BIT(RV_REG_A5)); +} + +static inline bool rv_insn_reg_get_val(unsigned long *regs, u32 index, + unsigned long *ptr) +{ + if (index == 0) + *ptr = 0; + else if (index <= 31) + *ptr = *((unsigned long *)regs + index); + else + return false; + + return true; +} + +static inline bool rv_insn_reg_set_val(unsigned long *regs, u32 index, + unsigned long val) +{ + if (index == 0) + return false; + else if (index <= 31) + *((unsigned long *)regs + index) = val; + else + return false; + + return true; +} + +#endif /* _ASM_RISCV_REG_H */ diff --git a/arch/riscv/kernel/kgdb.c b/arch/riscv/kernel/kgdb.c index 2e0266ae6bd7..2393342ab362 100644 --- a/arch/riscv/kernel/kgdb.c +++ b/arch/riscv/kernel/kgdb.c @@ -43,7 +43,7 @@ static int get_step_address(struct pt_regs *regs, unsigned long *next_addr) if (get_kernel_nofault(op_code, (void *)pc)) return -EINVAL; - if ((op_code & __INSN_LENGTH_MASK) != __INSN_LENGTH_GE_32) { + if ((op_code & __INSN_LENGTH_MASK) != INSN_C_MASK) { if (riscv_insn_is_c_jalr(op_code) || riscv_insn_is_c_jr(op_code)) { rs1_num = decode_register_index(op_code, RVC_C2_RS1_OPOFF); @@ -69,7 +69,7 @@ static int get_step_address(struct pt_regs *regs, unsigned long *next_addr) *next_addr = pc + 2; } } else { - if ((op_code & __INSN_OPCODE_MASK) == __INSN_BRANCH_OPCODE) { + if (riscv_insn_is_branch(op_code)) { bool result = false; 
long imm = RV_EXTRACT_BTYPE_IMM(op_code); unsigned long rs1_val = 0, rs2_val = 0; diff --git a/arch/riscv/kernel/probes/simulate-insn.c b/arch/riscv/kernel/probes/simulate-insn.c index 7441ac8a6843..994edb4bd16a 100644 --- a/arch/riscv/kernel/probes/simulate-insn.c +++ b/arch/riscv/kernel/probes/simulate-insn.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0+ +#include #include #include #include @@ -7,32 +8,6 @@ #include "decode-insn.h" #include "simulate-insn.h" -static inline bool rv_insn_reg_get_val(struct pt_regs *regs, u32 index, - unsigned long *ptr) -{ - if (index == 0) - *ptr = 0; - else if (index <= 31) - *ptr = *((unsigned long *)regs + index); - else - return false; - - return true; -} - -static inline bool rv_insn_reg_set_val(struct pt_regs *regs, u32 index, - unsigned long val) -{ - if (index == 0) - return false; - else if (index <= 31) - *((unsigned long *)regs + index) = val; - else - return false; - - return true; -} - bool __kprobes simulate_jal(u32 opcode, unsigned long addr, struct pt_regs *regs) { /* @@ -44,7 +19,7 @@ bool __kprobes simulate_jal(u32 opcode, unsigned long addr, struct pt_regs *regs u32 imm; u32 index = (opcode >> 7) & 0x1f; - ret = rv_insn_reg_set_val(regs, index, addr + 4); + ret = rv_insn_reg_set_val((unsigned long *)regs, index, addr + 4); if (!ret) return ret; @@ -71,11 +46,11 @@ bool __kprobes simulate_jalr(u32 opcode, unsigned long addr, struct pt_regs *reg u32 rd_index = (opcode >> 7) & 0x1f; u32 rs1_index = (opcode >> 15) & 0x1f; - ret = rv_insn_reg_get_val(regs, rs1_index, &base_addr); + ret = rv_insn_reg_get_val((unsigned long *)regs, rs1_index, &base_addr); if (!ret) return ret; - ret = rv_insn_reg_set_val(regs, rd_index, addr + 4); + ret = rv_insn_reg_set_val((unsigned long *)regs, rd_index, addr + 4); if (!ret) return ret; @@ -110,7 +85,7 @@ bool __kprobes simulate_auipc(u32 opcode, unsigned long addr, struct pt_regs *re u32 rd_idx = auipc_rd_idx(opcode); unsigned long rd_val = addr + auipc_offset(opcode); - if (!rv_insn_reg_set_val(regs, rd_idx, rd_val)) + if (!rv_insn_reg_set_val((unsigned long *)regs, rd_idx, rd_val)) return false; instruction_pointer_set(regs, addr + 4); @@ -156,8 +131,8 @@ bool __kprobes simulate_branch(u32 opcode, unsigned long addr, struct pt_regs *r unsigned long rs1_val; unsigned long rs2_val; - if (!rv_insn_reg_get_val(regs, branch_rs1_idx(opcode), &rs1_val) || - !rv_insn_reg_get_val(regs, branch_rs2_idx(opcode), &rs2_val)) + if (!rv_insn_reg_get_val((unsigned long *)regs, riscv_insn_extract_rs1(opcode), &rs1_val) || + !rv_insn_reg_get_val((unsigned long *)regs, riscv_insn_extract_rs2(opcode), &rs2_val)) return false; offset_tmp = branch_offset(opcode); diff --git a/arch/riscv/kernel/vector.c b/arch/riscv/kernel/vector.c index 8d92fb6c522c..d67a60369e02 100644 --- a/arch/riscv/kernel/vector.c +++ b/arch/riscv/kernel/vector.c @@ -49,7 +49,7 @@ int riscv_v_setup_vsize(void) static bool insn_is_vector(u32 insn_buf) { - u32 opcode = insn_buf & __INSN_OPCODE_MASK; + u32 opcode = insn_buf & RV_INSN_OPCODE_MASK; u32 width, csr; /* From patchwork Fri Aug 4 02:10:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Charlie Jenkins X-Patchwork-Id: 13341120 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate 
From: Charlie Jenkins
Date: Thu, 03 Aug 2023 19:10:27 -0700
Subject: [PATCH 02/10] RISC-V: vector: Refactor instructions
Message-Id: <20230803-master-refactor-instructions-v4-v1-2-2128e61fa4ff@rivosinc.com>
References: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
In-Reply-To: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, bpf@vger.kernel.org

Use instructions in insn.h

Signed-off-by: Charlie Jenkins
---
arch/riscv/kernel/vector.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/arch/riscv/kernel/vector.c b/arch/riscv/kernel/vector.c index d67a60369e02..1433d70abdd7 100644 --- a/arch/riscv/kernel/vector.c +++ b/arch/riscv/kernel/vector.c @@ -18,7 +18,6 @@ #include #include #include -#include static bool riscv_v_implicit_uacc = IS_ENABLED(CONFIG_RISCV_ISA_V_DEFAULT_ENABLE); @@ -56,7 +55,7 @@ static bool insn_is_vector(u32 insn_buf) * All V-related instructions, including CSR operations are 4-Byte. So, * do not handle if the instruction length is not 4-Byte.
*/ - if (unlikely(GET_INSN_LENGTH(insn_buf) != 4)) + if (unlikely(INSN_LEN(insn_buf) != 4)) return false; switch (opcode) {

From patchwork Fri Aug 4 02:10:28 2023
From: Charlie Jenkins
Date: Thu, 03 Aug 2023 19:10:28 -0700
Subject: [PATCH 03/10] RISC-V: Refactor jump label instructions
Message-Id: <20230803-master-refactor-instructions-v4-v1-3-2128e61fa4ff@rivosinc.com>
References: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
In-Reply-To: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, bpf@vger.kernel.org

Use shared instruction definitions in insn.h instead of manually constructing them.
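As an illustration of the encoding this helper replaces (compare the GENMASK arithmetic removed in the diff below), a stand-alone sketch of J-type immediate insertion in plain C could look like the following; the function name here is hypothetical, and only the kernel's riscv_insn_insert_jtype_imm() is authoritative.

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative sketch, not kernel code: pack a PC-relative offset into
 * the J-type immediate fields of a 32-bit RISC-V instruction word.
 * Field layout: inst[31]=imm[20], inst[30:21]=imm[10:1],
 *               inst[20]=imm[11], inst[19:12]=imm[19:12].
 */
static uint32_t insert_jtype_imm(uint32_t insn, int32_t imm)
{
        insn &= 0x00000fffu;                            /* keep opcode/rd, clear imm fields */
        insn |= ((uint32_t)imm & 0x000ff000u);          /* imm[19:12] -> inst[19:12] */
        insn |= ((uint32_t)imm & 0x00000800u) << 9;     /* imm[11]    -> inst[20]    */
        insn |= ((uint32_t)imm & 0x000007feu) << 20;    /* imm[10:1]  -> inst[30:21] */
        insn |= ((uint32_t)imm & 0x00100000u) << 11;    /* imm[20]    -> inst[31]    */
        return insn;
}

int main(void)
{
        uint32_t jal_x0 = 0x6f;         /* JAL opcode with rd = x0 */

        /* e.g. a forward jump of 0x1234 bytes */
        printf("jal x0, 0x1234 -> 0x%08x\n", (unsigned)insert_jtype_imm(jal_x0, 0x1234));
        return 0;
}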
Signed-off-by: Charlie Jenkins --- arch/riscv/include/asm/insn.h | 2 +- arch/riscv/kernel/jump_label.c | 13 ++++--------- 2 files changed, 5 insertions(+), 10 deletions(-) diff --git a/arch/riscv/include/asm/insn.h b/arch/riscv/include/asm/insn.h index 04f7649e1add..124ab02973a7 100644 --- a/arch/riscv/include/asm/insn.h +++ b/arch/riscv/include/asm/insn.h @@ -1984,7 +1984,7 @@ static __always_inline bool riscv_insn_is_branch(u32 code) << RVC_J_IMM_10_OFF) | \ (RVC_IMM_SIGN(x_) << RVC_J_IMM_SIGN_OFF); }) -#define RVC_EXTRACT_BTYPE_IMM(x) \ +#define RVC_EXTRACT_BZ_IMM(x) \ ({typeof(x) x_ = (x); \ (RVC_X(x_, RVC_BZ_IMM_2_1_OPOFF, RVC_BZ_IMM_2_1_MASK) \ << RVC_BZ_IMM_2_1_OFF) | \ diff --git a/arch/riscv/kernel/jump_label.c b/arch/riscv/kernel/jump_label.c index e6694759dbd0..fdaac2a13eac 100644 --- a/arch/riscv/kernel/jump_label.c +++ b/arch/riscv/kernel/jump_label.c @@ -9,11 +9,9 @@ #include #include #include +#include #include -#define RISCV_INSN_NOP 0x00000013U -#define RISCV_INSN_JAL 0x0000006fU - void arch_jump_label_transform(struct jump_entry *entry, enum jump_label_type type) { @@ -26,13 +24,10 @@ void arch_jump_label_transform(struct jump_entry *entry, if (WARN_ON(offset & 1 || offset < -524288 || offset >= 524288)) return; - insn = RISCV_INSN_JAL | - (((u32)offset & GENMASK(19, 12)) << (12 - 12)) | - (((u32)offset & GENMASK(11, 11)) << (20 - 11)) | - (((u32)offset & GENMASK(10, 1)) << (21 - 1)) | - (((u32)offset & GENMASK(20, 20)) << (31 - 20)); + insn = RVG_OPCODE_JAL; + riscv_insn_insert_jtype_imm(&insn, (s32)offset); } else { - insn = RISCV_INSN_NOP; + insn = RVG_OPCODE_NOP; } mutex_lock(&text_mutex); From patchwork Fri Aug 4 02:10:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Charlie Jenkins X-Patchwork-Id: 13341122 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 9E6E5EB64DD for ; Fri, 4 Aug 2023 02:11:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id :MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=xJrc+TzGzUKlEMmmf3xWBQOb+r/3yoG58eKGy8f8ZgY=; b=HzUA4r6ZQq+JZD kjjzizjie7ohyGZct245UfQSAW6LJx7/Uxud8RUf7sUJUilx265yuQQNHTS+ngl6b18aBXhNkRz8q kPkTk12n3wSIudH8S/aBW1SviRX6xMHcx+WmVe8wtf64/eUx99VIUP/aTSwmiHbDkWqDTDBCW7Njm 3qeZtpjyW1qIQdOgjwiDFY8DMqiuyiHoeKuZ6iGOXXtIlZChAGPeiMWFD4YMT3zPP9NbxcOmgN7Cb aPx8e7Jhu7Uts0Hjxe4lcCepSvMnTBgGJTSY6KS+oN9x/0+sRPXBVL3nwX4bRP1DGoVtjhg3KIKoY VzBUlYxFuB7FzXg/zAkg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qRkHR-00BL1Q-0v; Fri, 04 Aug 2023 02:11:05 +0000 Received: from mail-pl1-x633.google.com ([2607:f8b0:4864:20::633]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qRkHM-00BKw4-2M for linux-riscv@lists.infradead.org; Fri, 04 Aug 2023 02:11:03 +0000 Received: by mail-pl1-x633.google.com with SMTP id 
From: Charlie Jenkins
Date: Thu, 03 Aug 2023 19:10:29 -0700
Subject: [PATCH 04/10] RISC-V: KGDB: Refactor instructions
Message-Id: <20230803-master-refactor-instructions-v4-v1-4-2128e61fa4ff@rivosinc.com>
References: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
In-Reply-To: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, bpf@vger.kernel.org

Use shared instruction definitions in insn.h.
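The two register-field decoders touched in the diff below are easy to show in isolation: standard instructions carry 5-bit register numbers (x0-x31), while compressed instructions use 3-bit fields that map onto x8-x15. A minimal stand-alone sketch with hypothetical names; RV_STANDARD_REG_MASK and RV_COMPRESSED_REG_MASK in insn.h are the definitions the patch actually relies on.

#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch, not kernel code: decode register fields from an opcode. */

/* Standard (RVG) register fields are 5 bits wide: x0..x31. */
static int reg_index(uint32_t opcode, int offset)
{
        return (opcode >> offset) & 0x1f;
}

/* Compressed (RVC) register fields are 3 bits wide and map to x8..x15. */
static int reg_index_short(uint32_t opcode, int offset)
{
        return ((opcode >> offset) & 0x7) + 8;
}

int main(void)
{
        uint32_t jalr_ra = 0x000080e7;  /* jalr ra, 0(ra): rd at bit 7, rs1 at bit 15 */

        printf("rd  = x%d\n", reg_index(jalr_ra, 7));   /* expect x1 (ra) */
        printf("rs1 = x%d\n", reg_index(jalr_ra, 15));  /* expect x1 (ra) */
        return 0;
}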
Signed-off-by: Charlie Jenkins --- arch/riscv/kernel/kgdb.c | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/arch/riscv/kernel/kgdb.c b/arch/riscv/kernel/kgdb.c index 2393342ab362..e1305706120e 100644 --- a/arch/riscv/kernel/kgdb.c +++ b/arch/riscv/kernel/kgdb.c @@ -5,7 +5,6 @@ #include #include -#include #include #include #include @@ -25,12 +24,12 @@ static unsigned int stepped_opcode; static int decode_register_index(unsigned long opcode, int offset) { - return (opcode >> offset) & 0x1F; + return (opcode >> offset) & RV_STANDARD_REG_MASK; } static int decode_register_index_short(unsigned long opcode, int offset) { - return ((opcode >> offset) & 0x7) + 8; + return ((opcode >> offset) & RV_COMPRESSED_REG_MASK) + 8; } /* Calculate the new address for after a step */ @@ -43,7 +42,7 @@ static int get_step_address(struct pt_regs *regs, unsigned long *next_addr) if (get_kernel_nofault(op_code, (void *)pc)) return -EINVAL; - if ((op_code & __INSN_LENGTH_MASK) != INSN_C_MASK) { + if (INSN_IS_C(op_code)) { if (riscv_insn_is_c_jalr(op_code) || riscv_insn_is_c_jr(op_code)) { rs1_num = decode_register_index(op_code, RVC_C2_RS1_OPOFF); @@ -55,14 +54,14 @@ static int get_step_address(struct pt_regs *regs, unsigned long *next_addr) rs1_num = decode_register_index_short(op_code, RVC_C1_RS1_OPOFF); if (!rs1_num || regs_ptr[rs1_num] == 0) - *next_addr = RVC_EXTRACT_BTYPE_IMM(op_code) + pc; + *next_addr = RVC_EXTRACT_BZ_IMM(op_code) + pc; else *next_addr = pc + 2; } else if (riscv_insn_is_c_bnez(op_code)) { rs1_num = decode_register_index_short(op_code, RVC_C1_RS1_OPOFF); if (rs1_num && regs_ptr[rs1_num] != 0) - *next_addr = RVC_EXTRACT_BTYPE_IMM(op_code) + pc; + *next_addr = RVC_EXTRACT_BZ_IMM(op_code) + pc; else *next_addr = pc + 2; } else { From patchwork Fri Aug 4 02:10:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Charlie Jenkins X-Patchwork-Id: 13341124 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1C320C0015E for ; Fri, 4 Aug 2023 02:11:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id :MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=wYJkxfO3PdKWNmjhGXS9OPcgpx1z67LB2FOZ+Kt6izg=; b=zn8pGbbAbb8xB9 8YTG9asaETWSD0Or9oTxtJ9EBR5G/E7Yba1eTmo7OFE8dqi3zdeuZmVQ+u94hw6Lj3zeiRREpOKZt r87nB8+ldUbAaaHqUkxGHQrzIYqNi45saL7loUXOXh5wcFCVoNxv3MR1tfrRBYCTOs4XuAFkexwEQ qlv84J/qp8TJIwE2Zh8fQqahY6FBMIAptorv1hP9TDeSYar2Xw6UjRnEzJQD7xswUVuQAd2nY2q6H crMNBZX0ZDwFfz7AFw69/JEsCAoS9pRC+8qXT6085edxwM+fZh64kaF4i7rocKOHIfsXUKn/6/oJe qWXx7sqUg2+azZkQ8w/w==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qRkHV-00BL5h-1J; Fri, 04 Aug 2023 02:11:09 +0000 Received: from mail-pf1-x42e.google.com ([2607:f8b0:4864:20::42e]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 
From: Charlie Jenkins
Date: Thu, 03 Aug 2023 19:10:30 -0700
Subject: [PATCH 05/10] RISC-V: module: Refactor instructions
Message-Id: <20230803-master-refactor-instructions-v4-v1-5-2128e61fa4ff@rivosinc.com>
References: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
In-Reply-To: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, bpf@vger.kernel.org
Sender:
"linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Use shared instruction definitions in insn.h instead of manually constructing them. Additionally, extra work was being done in apply_r_riscv_lo12_s_rela to ensure that the bits were set up properly for the lo12, but because -(a-b)=b-a it wasn't actually doing anything. Signed-off-by: Charlie Jenkins --- arch/riscv/kernel/module.c | 80 +++++++++++----------------------------------- 1 file changed, 18 insertions(+), 62 deletions(-) diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c index 7c651d55fcbd..950783e5b5ae 100644 --- a/arch/riscv/kernel/module.c +++ b/arch/riscv/kernel/module.c @@ -12,8 +12,11 @@ #include #include #include +#include #include +#define HI20_OFFSET 0x800 + /* * The auipc+jalr instruction pair can reach any PC-relative offset * in the range [-2^31 - 2^11, 2^31 - 2^11) @@ -48,12 +51,8 @@ static int apply_r_riscv_branch_rela(struct module *me, u32 *location, Elf_Addr v) { ptrdiff_t offset = (void *)v - (void *)location; - u32 imm12 = (offset & 0x1000) << (31 - 12); - u32 imm11 = (offset & 0x800) >> (11 - 7); - u32 imm10_5 = (offset & 0x7e0) << (30 - 10); - u32 imm4_1 = (offset & 0x1e) << (11 - 4); - *location = (*location & 0x1fff07f) | imm12 | imm11 | imm10_5 | imm4_1; + riscv_insn_insert_btype_imm(location, ((s32)offset)); return 0; } @@ -61,12 +60,8 @@ static int apply_r_riscv_jal_rela(struct module *me, u32 *location, Elf_Addr v) { ptrdiff_t offset = (void *)v - (void *)location; - u32 imm20 = (offset & 0x100000) << (31 - 20); - u32 imm19_12 = (offset & 0xff000); - u32 imm11 = (offset & 0x800) << (20 - 11); - u32 imm10_1 = (offset & 0x7fe) << (30 - 10); - *location = (*location & 0xfff) | imm20 | imm19_12 | imm11 | imm10_1; + riscv_insn_insert_jtype_imm(location, ((s32)offset)); return 0; } @@ -74,14 +69,8 @@ static int apply_r_riscv_rvc_branch_rela(struct module *me, u32 *location, Elf_Addr v) { ptrdiff_t offset = (void *)v - (void *)location; - u16 imm8 = (offset & 0x100) << (12 - 8); - u16 imm7_6 = (offset & 0xc0) >> (6 - 5); - u16 imm5 = (offset & 0x20) >> (5 - 2); - u16 imm4_3 = (offset & 0x18) << (12 - 5); - u16 imm2_1 = (offset & 0x6) << (12 - 10); - - *(u16 *)location = (*(u16 *)location & 0xe383) | - imm8 | imm7_6 | imm5 | imm4_3 | imm2_1; + + riscv_insn_insert_cbztype_imm(location, (s32)offset); return 0; } @@ -89,17 +78,8 @@ static int apply_r_riscv_rvc_jump_rela(struct module *me, u32 *location, Elf_Addr v) { ptrdiff_t offset = (void *)v - (void *)location; - u16 imm11 = (offset & 0x800) << (12 - 11); - u16 imm10 = (offset & 0x400) >> (10 - 8); - u16 imm9_8 = (offset & 0x300) << (12 - 11); - u16 imm7 = (offset & 0x80) >> (7 - 6); - u16 imm6 = (offset & 0x40) << (12 - 11); - u16 imm5 = (offset & 0x20) >> (5 - 2); - u16 imm4 = (offset & 0x10) << (12 - 5); - u16 imm3_1 = (offset & 0xe) << (12 - 10); - - *(u16 *)location = (*(u16 *)location & 0xe003) | - imm11 | imm10 | imm9_8 | imm7 | imm6 | imm5 | imm4 | imm3_1; + + riscv_insn_insert_cjtype_imm(location, (s32)offset); return 0; } @@ -107,7 +87,6 @@ static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 *location, Elf_Addr v) { ptrdiff_t offset = (void *)v - (void *)location; - s32 hi20; if (!riscv_insn_valid_32bit_offset(offset)) { pr_err( @@ -116,8 +95,7 @@ static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 *location, return -EINVAL; } - hi20 = (offset + 0x800) & 0xfffff000; - *location = (*location & 0xfff) | hi20; + riscv_insn_insert_utype_imm(location, (offset + 
HI20_OFFSET)); return 0; } @@ -128,7 +106,7 @@ static int apply_r_riscv_pcrel_lo12_i_rela(struct module *me, u32 *location, * v is the lo12 value to fill. It is calculated before calling this * handler. */ - *location = (*location & 0xfffff) | ((v & 0xfff) << 20); + riscv_insn_insert_itype_imm(location, ((s32)v)); return 0; } @@ -139,18 +117,13 @@ static int apply_r_riscv_pcrel_lo12_s_rela(struct module *me, u32 *location, * v is the lo12 value to fill. It is calculated before calling this * handler. */ - u32 imm11_5 = (v & 0xfe0) << (31 - 11); - u32 imm4_0 = (v & 0x1f) << (11 - 4); - - *location = (*location & 0x1fff07f) | imm11_5 | imm4_0; + riscv_insn_insert_stype_imm(location, ((s32)v)); return 0; } static int apply_r_riscv_hi20_rela(struct module *me, u32 *location, Elf_Addr v) { - s32 hi20; - if (IS_ENABLED(CONFIG_CMODEL_MEDLOW)) { pr_err( "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n", @@ -158,8 +131,7 @@ static int apply_r_riscv_hi20_rela(struct module *me, u32 *location, return -EINVAL; } - hi20 = ((s32)v + 0x800) & 0xfffff000; - *location = (*location & 0xfff) | hi20; + riscv_insn_insert_utype_imm(location, ((s32)v + HI20_OFFSET)); return 0; } @@ -167,9 +139,7 @@ static int apply_r_riscv_lo12_i_rela(struct module *me, u32 *location, Elf_Addr v) { /* Skip medlow checking because of filtering by HI20 already */ - s32 hi20 = ((s32)v + 0x800) & 0xfffff000; - s32 lo12 = ((s32)v - hi20); - *location = (*location & 0xfffff) | ((lo12 & 0xfff) << 20); + riscv_insn_insert_itype_imm(location, (s32)v); return 0; } @@ -177,11 +147,7 @@ static int apply_r_riscv_lo12_s_rela(struct module *me, u32 *location, Elf_Addr v) { /* Skip medlow checking because of filtering by HI20 already */ - s32 hi20 = ((s32)v + 0x800) & 0xfffff000; - s32 lo12 = ((s32)v - hi20); - u32 imm11_5 = (lo12 & 0xfe0) << (31 - 11); - u32 imm4_0 = (lo12 & 0x1f) << (11 - 4); - *location = (*location & 0x1fff07f) | imm11_5 | imm4_0; + riscv_insn_insert_stype_imm(location, (s32)v); return 0; } @@ -189,7 +155,6 @@ static int apply_r_riscv_got_hi20_rela(struct module *me, u32 *location, Elf_Addr v) { ptrdiff_t offset = (void *)v - (void *)location; - s32 hi20; /* Always emit the got entry */ if (IS_ENABLED(CONFIG_MODULE_SECTIONS)) { @@ -202,8 +167,7 @@ static int apply_r_riscv_got_hi20_rela(struct module *me, u32 *location, return -EINVAL; } - hi20 = (offset + 0x800) & 0xfffff000; - *location = (*location & 0xfff) | hi20; + riscv_insn_insert_utype_imm(location, (s32)(offset + HI20_OFFSET)); return 0; } @@ -211,7 +175,6 @@ static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location, Elf_Addr v) { ptrdiff_t offset = (void *)v - (void *)location; - u32 hi20, lo12; if (!riscv_insn_valid_32bit_offset(offset)) { /* Only emit the plt entry if offset over 32-bit range */ @@ -226,10 +189,7 @@ static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location, } } - hi20 = (offset + 0x800) & 0xfffff000; - lo12 = (offset - hi20) & 0xfff; - *location = (*location & 0xfff) | hi20; - *(location + 1) = (*(location + 1) & 0xfffff) | (lo12 << 20); + riscv_insn_insert_utype_itype_imm(location, location + 1, (s32)offset); return 0; } @@ -237,7 +197,6 @@ static int apply_r_riscv_call_rela(struct module *me, u32 *location, Elf_Addr v) { ptrdiff_t offset = (void *)v - (void *)location; - u32 hi20, lo12; if (!riscv_insn_valid_32bit_offset(offset)) { pr_err( @@ -246,10 +205,7 @@ static int apply_r_riscv_call_rela(struct module *me, u32 *location, return -EINVAL; } - hi20 = (offset + 0x800) & 0xfffff000; - 
lo12 = (offset - hi20) & 0xfff; - *location = (*location & 0xfff) | hi20; - *(location + 1) = (*(location + 1) & 0xfffff) | (lo12 << 20); + riscv_insn_insert_utype_itype_imm(location, location + 1, (s32)offset); return 0; }
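The lo12 simplification mentioned in the patch description above can be checked independently: with hi20 = (v + 0x800) & 0xfffff000, the leftover v - hi20 differs from v by a multiple of 4096, so its low 12 bits are exactly v & 0xfff and the extra recomputation adds nothing. A small self-contained check, assuming nothing beyond standard C:

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative check, not kernel code: hi20 is v rounded to the nearest
 * 4 KiB multiple (the +0x800 does the rounding), lo12 is the remainder.
 * Because hi20 is a multiple of 0x1000, lo12 and v share their low 12 bits.
 */
int main(void)
{
        const uint32_t samples[] = {
                0x00000000u, 0x000007ffu, 0x00000800u, 0x00000fffu,
                0x7ffff7ffu, 0x80000000u, 0x12345678u, 0xdeadbeefu,
        };

        for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
                uint32_t v = samples[i];
                uint32_t hi20 = (v + 0x800u) & 0xfffff000u;     /* goes into the U-type insn */
                uint32_t lo12 = v - hi20;                       /* goes into the I/S-type insn */

                printf("v=0x%08x hi20=0x%08x lo12&0xfff=0x%03x v&0xfff=0x%03x\n",
                       (unsigned)v, (unsigned)hi20,
                       (unsigned)(lo12 & 0xfffu), (unsigned)(v & 0xfffu));
        }
        return 0;
}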
From patchwork Fri Aug 4 02:10:31 2023
From: Charlie Jenkins
Date: Thu, 03 Aug 2023 19:10:31 -0700
Subject: [PATCH 06/10] RISC-V: Refactor patch instructions
Message-Id: <20230803-master-refactor-instructions-v4-v1-6-2128e61fa4ff@rivosinc.com>
References: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
In-Reply-To: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, bpf@vger.kernel.org

Use shared instruction definitions in insn.h.
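INSN_LEN() and INSN_IS_C(), which this patch switches to throughout, rely on the RISC-V rule that a 32-bit instruction has its two least-significant bits set to 0b11 while a 16-bit compressed instruction does not; this mirrors the open-coded INSN_LEN definition removed later in the series. A stand-alone sketch with hypothetical function names:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative sketch, not kernel code: parcels whose low two bits are not
 * 0b11 are 16-bit compressed instructions; 0b11 marks a 32-bit instruction
 * (longer encodings exist in the spec but are not handled here).
 */
static bool insn_is_compressed(uint32_t insn)
{
        return (insn & 0x3) != 0x3;
}

static unsigned int insn_len(uint32_t insn)
{
        return insn_is_compressed(insn) ? 2 : 4;
}

int main(void)
{
        uint32_t nop32 = 0x00000013;    /* addi x0, x0, 0 */
        uint16_t nop16 = 0x0001;        /* c.nop */

        printf("0x%08x -> %u bytes\n", (unsigned)nop32, insn_len(nop32));
        printf("0x%04x     -> %u bytes\n", (unsigned)nop16, insn_len(nop16));
        return 0;
}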
Signed-off-by: Charlie Jenkins --- arch/riscv/kernel/patch.c | 3 +- arch/riscv/kernel/probes/kprobes.c | 13 +++---- arch/riscv/kernel/probes/simulate-insn.c | 61 +++++++------------------------- arch/riscv/kernel/probes/uprobes.c | 5 +-- 4 files changed, 25 insertions(+), 57 deletions(-) diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c index 575e71d6c8ae..df51f5155673 100644 --- a/arch/riscv/kernel/patch.c +++ b/arch/riscv/kernel/patch.c @@ -12,6 +12,7 @@ #include #include #include +#include #include struct patch_insn { @@ -118,7 +119,7 @@ static int patch_text_cb(void *data) if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) { for (i = 0; ret == 0 && i < patch->ninsns; i++) { - len = GET_INSN_LENGTH(patch->insns[i]); + len = INSN_LEN(patch->insns[i]); ret = patch_text_nosync(patch->addr + i * len, &patch->insns[i], len); } diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c index 2f08c14a933d..501c6ae4d803 100644 --- a/arch/riscv/kernel/probes/kprobes.c +++ b/arch/riscv/kernel/probes/kprobes.c @@ -12,6 +12,7 @@ #include #include #include +#include #include "decode-insn.h" @@ -24,7 +25,7 @@ post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs *); static void __kprobes arch_prepare_ss_slot(struct kprobe *p) { u32 insn = __BUG_INSN_32; - unsigned long offset = GET_INSN_LENGTH(p->opcode); + unsigned long offset = INSN_LEN(p->opcode); p->ainsn.api.restore = (unsigned long)p->addr + offset; @@ -58,7 +59,7 @@ static bool __kprobes arch_check_kprobe(struct kprobe *p) if (tmp == addr) return true; - tmp += GET_INSN_LENGTH(*(u16 *)tmp); + tmp += INSN_LEN(*(u16 *)tmp); } return false; @@ -76,7 +77,7 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p) /* copy instruction */ p->opcode = (kprobe_opcode_t)(*insn++); - if (GET_INSN_LENGTH(p->opcode) == 4) + if (INSN_LEN(p->opcode) == 4) p->opcode |= (kprobe_opcode_t)(*insn) << 16; /* decode instruction */ @@ -117,8 +118,8 @@ void *alloc_insn_page(void) /* install breakpoint in text */ void __kprobes arch_arm_kprobe(struct kprobe *p) { - u32 insn = (p->opcode & __INSN_LENGTH_MASK) == __INSN_LENGTH_32 ? - __BUG_INSN_32 : __BUG_INSN_16; + u32 insn = INSN_IS_C(p->opcode) ? 
+ __BUG_INSN_16 : __BUG_INSN_32; patch_text(p->addr, &insn, 1); } @@ -344,7 +345,7 @@ kprobe_single_step_handler(struct pt_regs *regs) struct kprobe *cur = kprobe_running(); if (cur && (kcb->kprobe_status & (KPROBE_HIT_SS | KPROBE_REENTER)) && - ((unsigned long)&cur->ainsn.api.insn[0] + GET_INSN_LENGTH(cur->opcode) == addr)) { + ((unsigned long)&cur->ainsn.api.insn[0] + INSN_LEN(cur->opcode) == addr)) { kprobes_restore_local_irqflag(kcb, regs); post_kprobe_handler(cur, kcb, regs); return true; diff --git a/arch/riscv/kernel/probes/simulate-insn.c b/arch/riscv/kernel/probes/simulate-insn.c index 994edb4bd16a..f9671bb864a3 100644 --- a/arch/riscv/kernel/probes/simulate-insn.c +++ b/arch/riscv/kernel/probes/simulate-insn.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0+ +#include #include #include #include @@ -16,19 +17,16 @@ bool __kprobes simulate_jal(u32 opcode, unsigned long addr, struct pt_regs *regs * 1 10 1 8 5 JAL/J */ bool ret; - u32 imm; - u32 index = (opcode >> 7) & 0x1f; + s32 imm; + u32 index = riscv_insn_extract_rd(opcode); ret = rv_insn_reg_set_val((unsigned long *)regs, index, addr + 4); if (!ret) return ret; - imm = ((opcode >> 21) & 0x3ff) << 1; - imm |= ((opcode >> 20) & 0x1) << 11; - imm |= ((opcode >> 12) & 0xff) << 12; - imm |= ((opcode >> 31) & 0x1) << 20; + imm = riscv_insn_extract_jtype_imm(opcode); - instruction_pointer_set(regs, addr + sign_extend32((imm), 20)); + instruction_pointer_set(regs, addr + imm); return ret; } @@ -42,9 +40,9 @@ bool __kprobes simulate_jalr(u32 opcode, unsigned long addr, struct pt_regs *reg */ bool ret; unsigned long base_addr; - u32 imm = (opcode >> 20) & 0xfff; - u32 rd_index = (opcode >> 7) & 0x1f; - u32 rs1_index = (opcode >> 15) & 0x1f; + s32 imm = riscv_insn_extract_itype_imm(opcode); + u32 rd_index = riscv_insn_extract_rd(opcode); + u32 rs1_index = riscv_insn_extract_rs1(opcode); ret = rv_insn_reg_get_val((unsigned long *)regs, rs1_index, &base_addr); if (!ret) @@ -54,25 +52,11 @@ bool __kprobes simulate_jalr(u32 opcode, unsigned long addr, struct pt_regs *reg if (!ret) return ret; - instruction_pointer_set(regs, (base_addr + sign_extend32((imm), 11))&~1); + instruction_pointer_set(regs, (base_addr + imm) & ~1); return ret; } -#define auipc_rd_idx(opcode) \ - ((opcode >> 7) & 0x1f) - -#define auipc_imm(opcode) \ - ((((opcode) >> 12) & 0xfffff) << 12) - -#if __riscv_xlen == 64 -#define auipc_offset(opcode) sign_extend64(auipc_imm(opcode), 31) -#elif __riscv_xlen == 32 -#define auipc_offset(opcode) auipc_imm(opcode) -#else -#error "Unexpected __riscv_xlen" -#endif - bool __kprobes simulate_auipc(u32 opcode, unsigned long addr, struct pt_regs *regs) { /* @@ -82,35 +66,16 @@ bool __kprobes simulate_auipc(u32 opcode, unsigned long addr, struct pt_regs *re * 20 5 7 */ - u32 rd_idx = auipc_rd_idx(opcode); - unsigned long rd_val = addr + auipc_offset(opcode); + u32 rd_idx = riscv_insn_extract_rd(opcode); + unsigned long rd_val = addr + riscv_insn_extract_utype_imm(opcode); if (!rv_insn_reg_set_val((unsigned long *)regs, rd_idx, rd_val)) return false; instruction_pointer_set(regs, addr + 4); - return true; } -#define branch_rs1_idx(opcode) \ - (((opcode) >> 15) & 0x1f) - -#define branch_rs2_idx(opcode) \ - (((opcode) >> 20) & 0x1f) - -#define branch_funct3(opcode) \ - (((opcode) >> 12) & 0x7) - -#define branch_imm(opcode) \ - (((((opcode) >> 8) & 0xf ) << 1) | \ - ((((opcode) >> 25) & 0x3f) << 5) | \ - ((((opcode) >> 7) & 0x1 ) << 11) | \ - ((((opcode) >> 31) & 0x1 ) << 12)) - -#define branch_offset(opcode) \ - 
sign_extend32((branch_imm(opcode)), 12) - bool __kprobes simulate_branch(u32 opcode, unsigned long addr, struct pt_regs *regs) { /* @@ -135,8 +100,8 @@ bool __kprobes simulate_branch(u32 opcode, unsigned long addr, struct pt_regs *r !rv_insn_reg_get_val((unsigned long *)regs, riscv_insn_extract_rs2(opcode), &rs2_val)) return false; - offset_tmp = branch_offset(opcode); - switch (branch_funct3(opcode)) { + offset_tmp = riscv_insn_extract_btype_imm(opcode); + switch (riscv_insn_extract_funct3(opcode)) { case RVG_FUNCT3_BEQ: offset = (rs1_val == rs2_val) ? offset_tmp : 4; break; diff --git a/arch/riscv/kernel/probes/uprobes.c b/arch/riscv/kernel/probes/uprobes.c index 194f166b2cc4..f2511cbaf931 100644 --- a/arch/riscv/kernel/probes/uprobes.c +++ b/arch/riscv/kernel/probes/uprobes.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0-only +#include #include #include #include @@ -29,7 +30,7 @@ int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, opcode = *(probe_opcode_t *)(&auprobe->insn[0]); - auprobe->insn_size = GET_INSN_LENGTH(opcode); + auprobe->insn_size = INSN_LEN(opcode); switch (riscv_probe_decode_insn(&opcode, &auprobe->api)) { case INSN_REJECTED: @@ -166,7 +167,7 @@ void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr, /* Add ebreak behind opcode to simulate singlestep */ if (vaddr) { - dst += GET_INSN_LENGTH(*(probe_opcode_t *)src); + dst += INSN_LEN(*(probe_opcode_t *)src); *(uprobe_opcode_t *)dst = __BUG_INSN_32; } From patchwork Fri Aug 4 02:10:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Charlie Jenkins X-Patchwork-Id: 13341125 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 94C51EB64DD for ; Fri, 4 Aug 2023 02:11:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id :MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=LIarRsVMkj3sFa2rAd0b0ZaAnW+Ayu5Mx8XabEgVtvw=; b=SEOUrpuyv8/lOK SoAjRDXXhSmBrno/O4/wKFySVp0ZGv2Ee7lsoh5rNUS2M/2q/Q3XsmSeT1Zqig9Nzfi7xCpVAccEu IaPhGIa9YCQ4/YUEXYNlozVYInEPS95SfSwHgY2mp4BjW52a88BGrBvTDtXZyIbmiZnyFGtwt9SdZ 078fPo+EfQMBDmJA2lOwN3WFrnAZwcKGJVxB34bQThFngpY98CgSiBB8vuYy5tUycGbJVcc6tdNNG 9jC7TU9lde67wJfZMdxYMd1XEcRBf/nFaVypYCWd5K/+V08pNqeEINeCcDjZFdgY62bytqxgI5sM4 iIPJRExOkrxiiVHLdUpg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qRkHa-00BLBA-1p; Fri, 04 Aug 2023 02:11:14 +0000 Received: from mail-oi1-x229.google.com ([2607:f8b0:4864:20::229]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qRkHV-00BL1n-0H for linux-riscv@lists.infradead.org; Fri, 04 Aug 2023 02:11:13 +0000 Received: by mail-oi1-x229.google.com with SMTP id 5614622812f47-3a74d759be4so1222791b6e.2 for ; Thu, 03 Aug 2023 19:11:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
From: Charlie Jenkins
Date: Thu, 03 Aug 2023 19:10:32 -0700
Subject: [PATCH 07/10] RISC-V: nommu: Refactor instructions
Message-Id: <20230803-master-refactor-instructions-v4-v1-7-2128e61fa4ff@rivosinc.com>
References: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
In-Reply-To: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, bpf@vger.kernel.org

Use shared instruction definitions in insn.h instead of manually constructing them.
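The riscv_insn_is_*() predicates adopted in the diff below reduce to the same mask/match comparison the open-coded version used: mask off everything except the opcode and funct3 fields and compare against the canonical pattern. A minimal sketch using the lw values taken from the defines this patch removes:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative sketch, not kernel code: identify a 32-bit LW instruction.
 * 0x707f keeps the opcode (bits 6:0) and funct3 (bits 14:12); 0x2003 is
 * the canonical "lw" pattern for those fields (opcode LOAD, funct3 010).
 */
#define INSN_MASK_LW    0x707f
#define INSN_MATCH_LW   0x2003

static bool insn_is_lw(uint32_t insn)
{
        return (insn & INSN_MASK_LW) == INSN_MATCH_LW;
}

int main(void)
{
        uint32_t lw_a0 = 0x0005a503;    /* lw a0, 0(a1) */
        uint32_t sw_a0 = 0x00a5a023;    /* sw a0, 0(a1) */

        printf("lw a0, 0(a1): %d\n", insn_is_lw(lw_a0));        /* expect 1 */
        printf("sw a0, 0(a1): %d\n", insn_is_lw(sw_a0));        /* expect 0 */
        return 0;
}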
Signed-off-by: Charlie Jenkins --- arch/riscv/kernel/traps_misaligned.c | 218 ++++++++--------------------------- 1 file changed, 45 insertions(+), 173 deletions(-) diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c index 378f5b151443..b72045ce432a 100644 --- a/arch/riscv/kernel/traps_misaligned.c +++ b/arch/riscv/kernel/traps_misaligned.c @@ -12,144 +12,10 @@ #include #include #include +#include +#include -#define INSN_MATCH_LB 0x3 -#define INSN_MASK_LB 0x707f -#define INSN_MATCH_LH 0x1003 -#define INSN_MASK_LH 0x707f -#define INSN_MATCH_LW 0x2003 -#define INSN_MASK_LW 0x707f -#define INSN_MATCH_LD 0x3003 -#define INSN_MASK_LD 0x707f -#define INSN_MATCH_LBU 0x4003 -#define INSN_MASK_LBU 0x707f -#define INSN_MATCH_LHU 0x5003 -#define INSN_MASK_LHU 0x707f -#define INSN_MATCH_LWU 0x6003 -#define INSN_MASK_LWU 0x707f -#define INSN_MATCH_SB 0x23 -#define INSN_MASK_SB 0x707f -#define INSN_MATCH_SH 0x1023 -#define INSN_MASK_SH 0x707f -#define INSN_MATCH_SW 0x2023 -#define INSN_MASK_SW 0x707f -#define INSN_MATCH_SD 0x3023 -#define INSN_MASK_SD 0x707f - -#define INSN_MATCH_FLW 0x2007 -#define INSN_MASK_FLW 0x707f -#define INSN_MATCH_FLD 0x3007 -#define INSN_MASK_FLD 0x707f -#define INSN_MATCH_FLQ 0x4007 -#define INSN_MASK_FLQ 0x707f -#define INSN_MATCH_FSW 0x2027 -#define INSN_MASK_FSW 0x707f -#define INSN_MATCH_FSD 0x3027 -#define INSN_MASK_FSD 0x707f -#define INSN_MATCH_FSQ 0x4027 -#define INSN_MASK_FSQ 0x707f - -#define INSN_MATCH_C_LD 0x6000 -#define INSN_MASK_C_LD 0xe003 -#define INSN_MATCH_C_SD 0xe000 -#define INSN_MASK_C_SD 0xe003 -#define INSN_MATCH_C_LW 0x4000 -#define INSN_MASK_C_LW 0xe003 -#define INSN_MATCH_C_SW 0xc000 -#define INSN_MASK_C_SW 0xe003 -#define INSN_MATCH_C_LDSP 0x6002 -#define INSN_MASK_C_LDSP 0xe003 -#define INSN_MATCH_C_SDSP 0xe002 -#define INSN_MASK_C_SDSP 0xe003 -#define INSN_MATCH_C_LWSP 0x4002 -#define INSN_MASK_C_LWSP 0xe003 -#define INSN_MATCH_C_SWSP 0xc002 -#define INSN_MASK_C_SWSP 0xe003 - -#define INSN_MATCH_C_FLD 0x2000 -#define INSN_MASK_C_FLD 0xe003 -#define INSN_MATCH_C_FLW 0x6000 -#define INSN_MASK_C_FLW 0xe003 -#define INSN_MATCH_C_FSD 0xa000 -#define INSN_MASK_C_FSD 0xe003 -#define INSN_MATCH_C_FSW 0xe000 -#define INSN_MASK_C_FSW 0xe003 -#define INSN_MATCH_C_FLDSP 0x2002 -#define INSN_MASK_C_FLDSP 0xe003 -#define INSN_MATCH_C_FSDSP 0xa002 -#define INSN_MASK_C_FSDSP 0xe003 -#define INSN_MATCH_C_FLWSP 0x6002 -#define INSN_MASK_C_FLWSP 0xe003 -#define INSN_MATCH_C_FSWSP 0xe002 -#define INSN_MASK_C_FSWSP 0xe003 - -#define INSN_LEN(insn) ((((insn) & 0x3) < 0x3) ? 
2 : 4) - -#if defined(CONFIG_64BIT) -#define LOG_REGBYTES 3 -#define XLEN 64 -#else -#define LOG_REGBYTES 2 -#define XLEN 32 -#endif -#define REGBYTES (1 << LOG_REGBYTES) -#define XLEN_MINUS_16 ((XLEN) - 16) - -#define SH_RD 7 -#define SH_RS1 15 -#define SH_RS2 20 -#define SH_RS2C 2 - -#define RV_X(x, s, n) (((x) >> (s)) & ((1 << (n)) - 1)) -#define RVC_LW_IMM(x) ((RV_X(x, 6, 1) << 2) | \ - (RV_X(x, 10, 3) << 3) | \ - (RV_X(x, 5, 1) << 6)) -#define RVC_LD_IMM(x) ((RV_X(x, 10, 3) << 3) | \ - (RV_X(x, 5, 2) << 6)) -#define RVC_LWSP_IMM(x) ((RV_X(x, 4, 3) << 2) | \ - (RV_X(x, 12, 1) << 5) | \ - (RV_X(x, 2, 2) << 6)) -#define RVC_LDSP_IMM(x) ((RV_X(x, 5, 2) << 3) | \ - (RV_X(x, 12, 1) << 5) | \ - (RV_X(x, 2, 3) << 6)) -#define RVC_SWSP_IMM(x) ((RV_X(x, 9, 4) << 2) | \ - (RV_X(x, 7, 2) << 6)) -#define RVC_SDSP_IMM(x) ((RV_X(x, 10, 3) << 3) | \ - (RV_X(x, 7, 3) << 6)) -#define RVC_RS1S(insn) (8 + RV_X(insn, SH_RD, 3)) -#define RVC_RS2S(insn) (8 + RV_X(insn, SH_RS2C, 3)) -#define RVC_RS2(insn) RV_X(insn, SH_RS2C, 5) - -#define SHIFT_RIGHT(x, y) \ - ((y) < 0 ? ((x) << -(y)) : ((x) >> (y))) - -#define REG_MASK \ - ((1 << (5 + LOG_REGBYTES)) - (1 << LOG_REGBYTES)) - -#define REG_OFFSET(insn, pos) \ - (SHIFT_RIGHT((insn), (pos) - LOG_REGBYTES) & REG_MASK) - -#define REG_PTR(insn, pos, regs) \ - (ulong *)((ulong)(regs) + REG_OFFSET(insn, pos)) - -#define GET_RM(insn) (((insn) >> 12) & 7) - -#define GET_RS1(insn, regs) (*REG_PTR(insn, SH_RS1, regs)) -#define GET_RS2(insn, regs) (*REG_PTR(insn, SH_RS2, regs)) -#define GET_RS1S(insn, regs) (*REG_PTR(RVC_RS1S(insn), 0, regs)) -#define GET_RS2S(insn, regs) (*REG_PTR(RVC_RS2S(insn), 0, regs)) -#define GET_RS2C(insn, regs) (*REG_PTR(insn, SH_RS2C, regs)) -#define GET_SP(regs) (*REG_PTR(2, 0, regs)) -#define SET_RD(insn, regs, val) (*REG_PTR(insn, SH_RD, regs) = (val)) -#define IMM_I(insn) ((s32)(insn) >> 20) -#define IMM_S(insn) (((s32)(insn) >> 25 << 5) | \ - (s32)(((insn) >> 7) & 0x1f)) -#define MASK_FUNCT3 0x7000 - -#define GET_PRECISION(insn) (((insn) >> 25) & 3) -#define GET_RM(insn) (((insn) >> 12) & 7) -#define PRECISION_S 0 -#define PRECISION_D 1 +#define XLEN_MINUS_16 ((__riscv_xlen) - 16) #define DECLARE_UNPRIVILEGED_LOAD_FUNCTION(type, insn) \ static inline type load_##type(const type *addr) \ @@ -245,58 +111,56 @@ int handle_misaligned_load(struct pt_regs *regs) regs->epc = 0; - if ((insn & INSN_MASK_LW) == INSN_MATCH_LW) { + if (riscv_insn_is_lw(insn)) { len = 4; shift = 8 * (sizeof(unsigned long) - len); #if defined(CONFIG_64BIT) - } else if ((insn & INSN_MASK_LD) == INSN_MATCH_LD) { + } else if (riscv_insn_is_ld(insn)) { len = 8; shift = 8 * (sizeof(unsigned long) - len); - } else if ((insn & INSN_MASK_LWU) == INSN_MATCH_LWU) { + } else if (riscv_insn_is_lwu(insn)) { len = 4; #endif - } else if ((insn & INSN_MASK_FLD) == INSN_MATCH_FLD) { + } else if (riscv_insn_is_fld(insn)) { fp = 1; len = 8; - } else if ((insn & INSN_MASK_FLW) == INSN_MATCH_FLW) { + } else if (riscv_insn_is_flw(insn)) { fp = 1; len = 4; - } else if ((insn & INSN_MASK_LH) == INSN_MATCH_LH) { + } else if (riscv_insn_is_lh(insn)) { len = 2; shift = 8 * (sizeof(unsigned long) - len); - } else if ((insn & INSN_MASK_LHU) == INSN_MATCH_LHU) { + } else if (riscv_insn_is_lhu(insn)) { len = 2; #if defined(CONFIG_64BIT) - } else if ((insn & INSN_MASK_C_LD) == INSN_MATCH_C_LD) { + } else if (riscv_insn_is_c_ld(insn)) { len = 8; shift = 8 * (sizeof(unsigned long) - len); - insn = RVC_RS2S(insn) << SH_RD; - } else if ((insn & INSN_MASK_C_LDSP) == INSN_MATCH_C_LDSP && - ((insn >> SH_RD) & 
0x1f)) { + insn = riscv_insn_extract_csca_rs2(insn); + } else if (riscv_insn_is_c_ldsp(insn) && (RVC_RD_CI(insn))) { len = 8; shift = 8 * (sizeof(unsigned long) - len); #endif - } else if ((insn & INSN_MASK_C_LW) == INSN_MATCH_C_LW) { + } else if (riscv_insn_is_c_lw(insn)) { len = 4; shift = 8 * (sizeof(unsigned long) - len); - insn = RVC_RS2S(insn) << SH_RD; - } else if ((insn & INSN_MASK_C_LWSP) == INSN_MATCH_C_LWSP && - ((insn >> SH_RD) & 0x1f)) { + insn = riscv_insn_extract_csca_rs2(insn); + } else if (riscv_insn_is_c_lwsp(insn) && (RVC_RD_CI(insn))) { len = 4; shift = 8 * (sizeof(unsigned long) - len); - } else if ((insn & INSN_MASK_C_FLD) == INSN_MATCH_C_FLD) { + } else if (riscv_insn_is_c_fld(insn)) { fp = 1; len = 8; - insn = RVC_RS2S(insn) << SH_RD; - } else if ((insn & INSN_MASK_C_FLDSP) == INSN_MATCH_C_FLDSP) { + insn = riscv_insn_extract_csca_rs2(insn); + } else if (riscv_insn_is_c_fldsp(insn)) { fp = 1; len = 8; #if defined(CONFIG_32BIT) - } else if ((insn & INSN_MASK_C_FLW) == INSN_MATCH_C_FLW) { + } else if (riscv_insn_is_c_flw(insn)) { fp = 1; len = 4; - insn = RVC_RS2S(insn) << SH_RD; - } else if ((insn & INSN_MASK_C_FLWSP) == INSN_MATCH_C_FLWSP) { + insn = riscv_insn_extract_csca_rs2(insn); + } else if (riscv_insn_is_c_flwsp(insn)) { fp = 1; len = 4; #endif @@ -311,7 +175,8 @@ int handle_misaligned_load(struct pt_regs *regs) if (fp) return -1; - SET_RD(insn, regs, val.data_ulong << shift >> shift); + rv_insn_reg_set_val((unsigned long *)regs, RV_EXTRACT_RD_REG(insn), + val.data_ulong << shift >> shift); regs->epc = epc + INSN_LEN(insn); @@ -328,32 +193,39 @@ int handle_misaligned_store(struct pt_regs *regs) regs->epc = 0; - val.data_ulong = GET_RS2(insn, regs); + rv_insn_reg_get_val((unsigned long *)regs, riscv_insn_extract_rs2(insn), + &val.data_ulong); - if ((insn & INSN_MASK_SW) == INSN_MATCH_SW) { + if (riscv_insn_is_sw(insn)) { len = 4; #if defined(CONFIG_64BIT) - } else if ((insn & INSN_MASK_SD) == INSN_MATCH_SD) { + } else if (riscv_insn_is_sd(insn)) { len = 8; #endif - } else if ((insn & INSN_MASK_SH) == INSN_MATCH_SH) { + } else if (riscv_insn_is_sh(insn)) { len = 2; #if defined(CONFIG_64BIT) - } else if ((insn & INSN_MASK_C_SD) == INSN_MATCH_C_SD) { + } else if (riscv_insn_is_c_sd(insn)) { len = 8; - val.data_ulong = GET_RS2S(insn, regs); - } else if ((insn & INSN_MASK_C_SDSP) == INSN_MATCH_C_SDSP && - ((insn >> SH_RD) & 0x1f)) { + rv_insn_reg_get_val((unsigned long *)regs, + riscv_insn_extract_cr_rs2(insn), + &val.data_ulong); + } else if (riscv_insn_is_c_sdsp(insn)) { len = 8; - val.data_ulong = GET_RS2C(insn, regs); + rv_insn_reg_get_val((unsigned long *)regs, + riscv_insn_extract_csca_rs2(insn), + &val.data_ulong); #endif - } else if ((insn & INSN_MASK_C_SW) == INSN_MATCH_C_SW) { + } else if (riscv_insn_is_c_sw(insn)) { len = 4; - val.data_ulong = GET_RS2S(insn, regs); - } else if ((insn & INSN_MASK_C_SWSP) == INSN_MATCH_C_SWSP && - ((insn >> SH_RD) & 0x1f)) { + rv_insn_reg_get_val((unsigned long *)regs, + riscv_insn_extract_cr_rs2(insn), + &val.data_ulong); + } else if (riscv_insn_is_c_swsp(insn)) { len = 4; - val.data_ulong = GET_RS2C(insn, regs); + rv_insn_reg_get_val((unsigned long *)regs, + riscv_insn_extract_csca_rs2(insn), + &val.data_ulong); } else { regs->epc = epc; return -1; From patchwork Fri Aug 4 02:10:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Charlie Jenkins X-Patchwork-Id: 13341126 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 57629C0015E for ; Fri, 4 Aug 2023 02:11:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id :MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=4dwXwkghjo/wsuCxRxSLaq01Qr3Cy9+8Pi49i3gUDj0=; b=zDSNXwVu0C8hrc hQVBtUs7ONHWFYHv4jq3WrebwkOB9WVBDaBDI6V6xe5rECUGIlufqWrusONapBBsM1f30jyoua8eJ lF1hHr3Z38wArq68DqsLONfERwrNX8lZthd7ln5uLNfOrUpQpLwujFTlXknzZRy2UK11QpEZIUk0K RL55BB5pqwxQBuCf1FTyJLFS56LD0YlxRxVhdc28mYcmixpBHaKxgyUZNvsaK2j365gSs0LRuIz7c 2qp0MGpichHi7jh03obEZviSyKjw10KdctvQSDsQBK977OwVeuzfCCxCwh7l7Z2p4yxWQvuwfkCYI 12jYlaXJ1IHmuY2B/S+A==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qRkHY-00BL9i-34; Fri, 04 Aug 2023 02:11:12 +0000 Received: from mail-pf1-x42e.google.com ([2607:f8b0:4864:20::42e]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qRkHT-00BL3T-22 for linux-riscv@lists.infradead.org; Fri, 04 Aug 2023 02:11:10 +0000 Received: by mail-pf1-x42e.google.com with SMTP id d2e1a72fcca58-6874d1c8610so1209409b3a.0 for ; Thu, 03 Aug 2023 19:11:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=rivosinc-com.20221208.gappssmtp.com; s=20221208; t=1691115067; x=1691719867; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=BPKfiGgHhMfFwAhfWVPwqE1z0LYuKsCQFhQ5Mh2pvvg=; b=ta12HpfnDKw4trrgJe1kDu+nj00f+KtjOCKksu/MbokQkDICvsHjetWJjnnUAQ9mKA WxcGuGzIACqWL4C7F0cNuUurtmsjAqQtku6EGVRRAIRX6czrur7a1fkoTwQb67I8psns qiw1jr3pgIU6q5lKcqF2K0YyV/Q9izdftRMTHTrdstDWvQdYLXBYAs9POUL60yw9Qx6o NxmgZnNssWnJqQ8eqgXh5ytWv/DME6JtSdllQIrmeVjOnn+1OhwqPUSJ71/U9z5WHACg tIgJe92B0DResa1Wi2A+yRPOYTSNhVYGYoNJY+JGdxcqvoQH4MrlPXgKUvzgvd61u6zx 6NWQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691115067; x=1691719867; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=BPKfiGgHhMfFwAhfWVPwqE1z0LYuKsCQFhQ5Mh2pvvg=; b=F9HDCzFw5oaav8Hfm6gPfcHK9CQqlx3D6CbK/pMBBDiz5XXmbb7hklBRyA26MgIQdi 8EEhsZsIQa1QR1x+o3bf69L0h0U+CbfwsOEpKr9BA+hu3zVJ/diE2fzu0PDpiNGPywbW 89mczLgUhw8cC459JXGfGpIM1WKDKdrxCqo11MVNJcfUI+1IK2BEzqzBiygi5JHrFhyf Owe6ZVA2luRVdsnVuKgKNb27oTmvpbAfbyX7CDPVNSAloMUNJYMDGdeRm5gGJW8pXzGa Wv/y6eOsdys82hDmkZ5mpDr2hvBBkMGj3AW5Avh5VX9NfyGfMimD6ZDVesaYfCOx+C7K IrVg== X-Gm-Message-State: AOJu0YwbDND/Tkx12dmDz7nhjvvKbNFd91tN8DBzod//E1p2ATvr9NQl UDr2c+qngTuj46E8iBw2/nTvRw== X-Google-Smtp-Source: AGHT+IFN09ZI6JBftIb4c4o1ythrdzlTD6KiUWhToEJZR6WsVai+ZUmWDZsVnZ7Njw6c/iZ527XncQ== X-Received: by 2002:a05:6a00:2301:b0:686:efda:76a2 with SMTP id h1-20020a056a00230100b00686efda76a2mr405546pfh.29.1691115067089; Thu, 03 Aug 2023 19:11:07 -0700 (PDT) Received: from charlie.ba.rivosinc.com ([66.220.2.162]) by smtp.gmail.com with ESMTPSA id 
g6-20020a655806000000b0055c558ac4edsm369499pgr.46.2023.08.03.19.11.05 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 03 Aug 2023 19:11:06 -0700 (PDT) From: Charlie Jenkins Date: Thu, 03 Aug 2023 19:10:33 -0700 Subject: [PATCH 08/10] RISC-V: kvm: Refactor instructions MIME-Version: 1.0 Message-Id: <20230803-master-refactor-instructions-v4-v1-8-2128e61fa4ff@rivosinc.com> References: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com> In-Reply-To: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com> To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, bpf@vger.kernel.org Cc: Paul Walmsley , Palmer Dabbelt , Albert Ou , Peter Zijlstra , Josh Poimboeuf , Jason Baron , Steven Rostedt , Ard Biesheuvel , Anup Patel , Atish Patra , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa , =?utf-8?b?Qmo=?= =?utf-8?b?w7ZybiBUw7ZwZWw=?= , Luke Nelson , Xi Wang , Nam Cao , Charlie Jenkins X-Mailer: b4 0.12.3 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230803_191107_686967_8C18D729 X-CRM114-Status: GOOD ( 18.27 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Use shared instruction definitions in insn.h instead of manually constructing them. Signed-off-by: Charlie Jenkins --- arch/riscv/kvm/vcpu_insn.c | 281 ++++++++++++++------------------------------- 1 file changed, 86 insertions(+), 195 deletions(-) diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c index 7a6abed41bc1..73c7d21b496e 100644 --- a/arch/riscv/kvm/vcpu_insn.c +++ b/arch/riscv/kvm/vcpu_insn.c @@ -6,130 +6,7 @@ #include #include - -#define INSN_OPCODE_MASK 0x007c -#define INSN_OPCODE_SHIFT 2 -#define INSN_OPCODE_SYSTEM 28 - -#define INSN_MASK_WFI 0xffffffff -#define INSN_MATCH_WFI 0x10500073 - -#define INSN_MATCH_CSRRW 0x1073 -#define INSN_MASK_CSRRW 0x707f -#define INSN_MATCH_CSRRS 0x2073 -#define INSN_MASK_CSRRS 0x707f -#define INSN_MATCH_CSRRC 0x3073 -#define INSN_MASK_CSRRC 0x707f -#define INSN_MATCH_CSRRWI 0x5073 -#define INSN_MASK_CSRRWI 0x707f -#define INSN_MATCH_CSRRSI 0x6073 -#define INSN_MASK_CSRRSI 0x707f -#define INSN_MATCH_CSRRCI 0x7073 -#define INSN_MASK_CSRRCI 0x707f - -#define INSN_MATCH_LB 0x3 -#define INSN_MASK_LB 0x707f -#define INSN_MATCH_LH 0x1003 -#define INSN_MASK_LH 0x707f -#define INSN_MATCH_LW 0x2003 -#define INSN_MASK_LW 0x707f -#define INSN_MATCH_LD 0x3003 -#define INSN_MASK_LD 0x707f -#define INSN_MATCH_LBU 0x4003 -#define INSN_MASK_LBU 0x707f -#define INSN_MATCH_LHU 0x5003 -#define INSN_MASK_LHU 0x707f -#define INSN_MATCH_LWU 0x6003 -#define INSN_MASK_LWU 0x707f -#define INSN_MATCH_SB 0x23 -#define INSN_MASK_SB 0x707f -#define INSN_MATCH_SH 0x1023 -#define INSN_MASK_SH 0x707f -#define INSN_MATCH_SW 0x2023 -#define INSN_MASK_SW 0x707f -#define INSN_MATCH_SD 0x3023 -#define INSN_MASK_SD 0x707f - -#define INSN_MATCH_C_LD 0x6000 -#define INSN_MASK_C_LD 0xe003 -#define INSN_MATCH_C_SD 0xe000 -#define INSN_MASK_C_SD 0xe003 -#define INSN_MATCH_C_LW 0x4000 -#define INSN_MASK_C_LW 0xe003 -#define INSN_MATCH_C_SW 0xc000 -#define INSN_MASK_C_SW 0xe003 -#define INSN_MATCH_C_LDSP 0x6002 
-#define INSN_MASK_C_LDSP 0xe003 -#define INSN_MATCH_C_SDSP 0xe002 -#define INSN_MASK_C_SDSP 0xe003 -#define INSN_MATCH_C_LWSP 0x4002 -#define INSN_MASK_C_LWSP 0xe003 -#define INSN_MATCH_C_SWSP 0xc002 -#define INSN_MASK_C_SWSP 0xe003 - -#define INSN_16BIT_MASK 0x3 - -#define INSN_IS_16BIT(insn) (((insn) & INSN_16BIT_MASK) != INSN_16BIT_MASK) - -#define INSN_LEN(insn) (INSN_IS_16BIT(insn) ? 2 : 4) - -#ifdef CONFIG_64BIT -#define LOG_REGBYTES 3 -#else -#define LOG_REGBYTES 2 -#endif -#define REGBYTES (1 << LOG_REGBYTES) - -#define SH_RD 7 -#define SH_RS1 15 -#define SH_RS2 20 -#define SH_RS2C 2 -#define MASK_RX 0x1f - -#define RV_X(x, s, n) (((x) >> (s)) & ((1 << (n)) - 1)) -#define RVC_LW_IMM(x) ((RV_X(x, 6, 1) << 2) | \ - (RV_X(x, 10, 3) << 3) | \ - (RV_X(x, 5, 1) << 6)) -#define RVC_LD_IMM(x) ((RV_X(x, 10, 3) << 3) | \ - (RV_X(x, 5, 2) << 6)) -#define RVC_LWSP_IMM(x) ((RV_X(x, 4, 3) << 2) | \ - (RV_X(x, 12, 1) << 5) | \ - (RV_X(x, 2, 2) << 6)) -#define RVC_LDSP_IMM(x) ((RV_X(x, 5, 2) << 3) | \ - (RV_X(x, 12, 1) << 5) | \ - (RV_X(x, 2, 3) << 6)) -#define RVC_SWSP_IMM(x) ((RV_X(x, 9, 4) << 2) | \ - (RV_X(x, 7, 2) << 6)) -#define RVC_SDSP_IMM(x) ((RV_X(x, 10, 3) << 3) | \ - (RV_X(x, 7, 3) << 6)) -#define RVC_RS1S(insn) (8 + RV_X(insn, SH_RD, 3)) -#define RVC_RS2S(insn) (8 + RV_X(insn, SH_RS2C, 3)) -#define RVC_RS2(insn) RV_X(insn, SH_RS2C, 5) - -#define SHIFT_RIGHT(x, y) \ - ((y) < 0 ? ((x) << -(y)) : ((x) >> (y))) - -#define REG_MASK \ - ((1 << (5 + LOG_REGBYTES)) - (1 << LOG_REGBYTES)) - -#define REG_OFFSET(insn, pos) \ - (SHIFT_RIGHT((insn), (pos) - LOG_REGBYTES) & REG_MASK) - -#define REG_PTR(insn, pos, regs) \ - ((ulong *)((ulong)(regs) + REG_OFFSET(insn, pos))) - -#define GET_FUNCT3(insn) (((insn) >> 12) & 7) - -#define GET_RS1(insn, regs) (*REG_PTR(insn, SH_RS1, regs)) -#define GET_RS2(insn, regs) (*REG_PTR(insn, SH_RS2, regs)) -#define GET_RS1S(insn, regs) (*REG_PTR(RVC_RS1S(insn), 0, regs)) -#define GET_RS2S(insn, regs) (*REG_PTR(RVC_RS2S(insn), 0, regs)) -#define GET_RS2C(insn, regs) (*REG_PTR(insn, SH_RS2C, regs)) -#define GET_SP(regs) (*REG_PTR(2, 0, regs)) -#define SET_RD(insn, regs, val) (*REG_PTR(insn, SH_RD, regs) = (val)) -#define IMM_I(insn) ((s32)(insn) >> 20) -#define IMM_S(insn) (((s32)(insn) >> 25 << 5) | \ - (s32)(((insn) >> 7) & 0x1f)) +#include struct insn_func { unsigned long mask; @@ -230,6 +107,7 @@ static const struct csr_func csr_funcs[] = { int kvm_riscv_vcpu_csr_return(struct kvm_vcpu *vcpu, struct kvm_run *run) { ulong insn; + u32 index; if (vcpu->arch.csr_decode.return_handled) return 0; @@ -237,9 +115,10 @@ int kvm_riscv_vcpu_csr_return(struct kvm_vcpu *vcpu, struct kvm_run *run) /* Update destination register for CSR reads */ insn = vcpu->arch.csr_decode.insn; - if ((insn >> SH_RD) & MASK_RX) - SET_RD(insn, &vcpu->arch.guest_context, - run->riscv_csr.ret_value); + riscv_insn_extract_rd(insn); + if (index) + rv_insn_reg_set_val((unsigned long *)&vcpu->arch.guest_context, + index, run->riscv_csr.ret_value); /* Move to next instruction */ vcpu->arch.guest_context.sepc += INSN_LEN(insn); @@ -249,36 +128,39 @@ int kvm_riscv_vcpu_csr_return(struct kvm_vcpu *vcpu, struct kvm_run *run) static int csr_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn) { + ulong rs1_val; int i, rc = KVM_INSN_ILLEGAL_TRAP; - unsigned int csr_num = insn >> SH_RS2; - unsigned int rs1_num = (insn >> SH_RS1) & MASK_RX; - ulong rs1_val = GET_RS1(insn, &vcpu->arch.guest_context); + unsigned int csr_num = insn >> RV_I_IMM_11_0_OPOFF; + unsigned int rs1_num = 
riscv_insn_extract_rs1(insn); const struct csr_func *tcfn, *cfn = NULL; ulong val = 0, wr_mask = 0, new_val = 0; + rv_insn_reg_get_val((unsigned long *)&vcpu->arch.guest_context, + riscv_insn_extract_rs1(insn), &rs1_val); + /* Decode the CSR instruction */ - switch (GET_FUNCT3(insn)) { - case GET_FUNCT3(INSN_MATCH_CSRRW): + switch (riscv_insn_extract_funct3(insn)) { + case RVG_FUNCT3_CSRRW: wr_mask = -1UL; new_val = rs1_val; break; - case GET_FUNCT3(INSN_MATCH_CSRRS): + case RVG_FUNCT3_CSRRS: wr_mask = rs1_val; new_val = -1UL; break; - case GET_FUNCT3(INSN_MATCH_CSRRC): + case RVG_FUNCT3_CSRRC: wr_mask = rs1_val; new_val = 0; break; - case GET_FUNCT3(INSN_MATCH_CSRRWI): + case RVG_FUNCT3_CSRRWI: wr_mask = -1UL; new_val = rs1_num; break; - case GET_FUNCT3(INSN_MATCH_CSRRSI): + case RVG_FUNCT3_CSRRSI: wr_mask = rs1_num; new_val = -1UL; break; - case GET_FUNCT3(INSN_MATCH_CSRRCI): + case RVG_FUNCT3_CSRRCI: wr_mask = rs1_num; new_val = 0; break; @@ -331,38 +213,38 @@ static int csr_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn) static const struct insn_func system_opcode_funcs[] = { { - .mask = INSN_MASK_CSRRW, - .match = INSN_MATCH_CSRRW, + .mask = RVG_MASK_CSRRW, + .match = RVG_MATCH_CSRRW, .func = csr_insn, }, { - .mask = INSN_MASK_CSRRS, - .match = INSN_MATCH_CSRRS, + .mask = RVG_MASK_CSRRS, + .match = RVG_MATCH_CSRRS, .func = csr_insn, }, { - .mask = INSN_MASK_CSRRC, - .match = INSN_MATCH_CSRRC, + .mask = RVG_MASK_CSRRC, + .match = RVG_MATCH_CSRRC, .func = csr_insn, }, { - .mask = INSN_MASK_CSRRWI, - .match = INSN_MATCH_CSRRWI, + .mask = RVG_MASK_CSRRWI, + .match = RVG_MATCH_CSRRWI, .func = csr_insn, }, { - .mask = INSN_MASK_CSRRSI, - .match = INSN_MATCH_CSRRSI, + .mask = RVG_MASK_CSRRSI, + .match = RVG_MATCH_CSRRSI, .func = csr_insn, }, { - .mask = INSN_MASK_CSRRCI, - .match = INSN_MATCH_CSRRCI, + .mask = RVG_MASK_CSRRCI, + .match = RVG_MATCH_CSRRCI, .func = csr_insn, }, { - .mask = INSN_MASK_WFI, - .match = INSN_MATCH_WFI, + .mask = RV_MASK_WFI, + .match = RV_MATCH_WFI, .func = wfi_insn, }, }; @@ -414,7 +296,7 @@ int kvm_riscv_vcpu_virtual_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, struct kvm_cpu_trap utrap = { 0 }; struct kvm_cpu_context *ct; - if (unlikely(INSN_IS_16BIT(insn))) { + if (unlikely(INSN_IS_C(insn))) { if (insn == 0) { ct = &vcpu->arch.guest_context; insn = kvm_riscv_vcpu_unpriv_read(vcpu, true, @@ -426,12 +308,12 @@ int kvm_riscv_vcpu_virtual_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, return 1; } } - if (INSN_IS_16BIT(insn)) + if (INSN_IS_C(insn)) return truly_illegal_insn(vcpu, run, insn); } - switch ((insn & INSN_OPCODE_MASK) >> INSN_OPCODE_SHIFT) { - case INSN_OPCODE_SYSTEM: + switch (insn) { + case RVG_OPCODE_SYSTEM: return system_opcode_insn(vcpu, run, insn); default: return truly_illegal_insn(vcpu, run, insn); @@ -466,7 +348,7 @@ int kvm_riscv_vcpu_mmio_load(struct kvm_vcpu *vcpu, struct kvm_run *run, * Bit[0] == 1 implies trapped instruction value is * transformed instruction or custom instruction. */ - insn = htinst | INSN_16BIT_MASK; + insn = htinst | INSN_C_MASK; insn_len = (htinst & BIT(1)) ? 
INSN_LEN(insn) : 2; } else { /* @@ -485,43 +367,43 @@ int kvm_riscv_vcpu_mmio_load(struct kvm_vcpu *vcpu, struct kvm_run *run, } /* Decode length of MMIO and shift */ - if ((insn & INSN_MASK_LW) == INSN_MATCH_LW) { + if (riscv_insn_is_lw(insn)) { len = 4; shift = 8 * (sizeof(ulong) - len); - } else if ((insn & INSN_MASK_LB) == INSN_MATCH_LB) { + } else if (riscv_insn_is_lb(insn)) { len = 1; shift = 8 * (sizeof(ulong) - len); - } else if ((insn & INSN_MASK_LBU) == INSN_MATCH_LBU) { + } else if (riscv_insn_is_lbu(insn)) { len = 1; shift = 8 * (sizeof(ulong) - len); #ifdef CONFIG_64BIT - } else if ((insn & INSN_MASK_LD) == INSN_MATCH_LD) { + } else if (riscv_insn_is_ld(insn)) { len = 8; shift = 8 * (sizeof(ulong) - len); - } else if ((insn & INSN_MASK_LWU) == INSN_MATCH_LWU) { + } else if (riscv_insn_is_lwu(insn)) { len = 4; #endif - } else if ((insn & INSN_MASK_LH) == INSN_MATCH_LH) { + } else if (riscv_insn_is_lh(insn)) { len = 2; shift = 8 * (sizeof(ulong) - len); - } else if ((insn & INSN_MASK_LHU) == INSN_MATCH_LHU) { + } else if (riscv_insn_is_lhu(insn)) { len = 2; #ifdef CONFIG_64BIT - } else if ((insn & INSN_MASK_C_LD) == INSN_MATCH_C_LD) { + } else if (riscv_insn_is_c_ld(insn)) { len = 8; shift = 8 * (sizeof(ulong) - len); - insn = RVC_RS2S(insn) << SH_RD; - } else if ((insn & INSN_MASK_C_LDSP) == INSN_MATCH_C_LDSP && - ((insn >> SH_RD) & 0x1f)) { + insn = riscv_insn_extract_csca_rs2(insn) << RVG_RD_OPOFF; + } else if (riscv_insn_is_c_ldsp(insn) && + riscv_insn_extract_rd(insn)) { len = 8; shift = 8 * (sizeof(ulong) - len); #endif - } else if ((insn & INSN_MASK_C_LW) == INSN_MATCH_C_LW) { + } else if (riscv_insn_is_c_lw(insn)) { len = 4; shift = 8 * (sizeof(ulong) - len); - insn = RVC_RS2S(insn) << SH_RD; - } else if ((insn & INSN_MASK_C_LWSP) == INSN_MATCH_C_LWSP && - ((insn >> SH_RD) & 0x1f)) { + insn = riscv_insn_extract_csca_rs2(insn) << RVG_RD_OPOFF; + } else if (riscv_insn_is_c_lwsp(insn) && + riscv_insn_extract_rd(insn)) { len = 4; shift = 8 * (sizeof(ulong) - len); } else { @@ -592,7 +474,7 @@ int kvm_riscv_vcpu_mmio_store(struct kvm_vcpu *vcpu, struct kvm_run *run, * Bit[0] == 1 implies trapped instruction value is * transformed instruction or custom instruction. */ - insn = htinst | INSN_16BIT_MASK; + insn = htinst | INSN_C_MASK; insn_len = (htinst & BIT(1)) ? 
INSN_LEN(insn) : 2; } else { /* @@ -610,35 +492,42 @@ int kvm_riscv_vcpu_mmio_store(struct kvm_vcpu *vcpu, struct kvm_run *run, insn_len = INSN_LEN(insn); } - data = GET_RS2(insn, &vcpu->arch.guest_context); + rv_insn_reg_get_val((unsigned long *)&vcpu->arch.guest_context, + riscv_insn_extract_rs1(insn), &data); data8 = data16 = data32 = data64 = data; - if ((insn & INSN_MASK_SW) == INSN_MATCH_SW) { + if (riscv_insn_is_sw(insn)) { len = 4; - } else if ((insn & INSN_MASK_SB) == INSN_MATCH_SB) { + } else if (riscv_insn_is_sb(insn)) { len = 1; #ifdef CONFIG_64BIT - } else if ((insn & INSN_MASK_SD) == INSN_MATCH_SD) { + } else if (riscv_insn_is_sd(insn)) { len = 8; #endif - } else if ((insn & INSN_MASK_SH) == INSN_MATCH_SH) { + } else if (riscv_insn_is_sh(insn)) { len = 2; #ifdef CONFIG_64BIT - } else if ((insn & INSN_MASK_C_SD) == INSN_MATCH_C_SD) { + } else if (riscv_insn_is_c_sd(insn)) { len = 8; - data64 = GET_RS2S(insn, &vcpu->arch.guest_context); - } else if ((insn & INSN_MASK_C_SDSP) == INSN_MATCH_C_SDSP && - ((insn >> SH_RD) & 0x1f)) { + rv_insn_reg_get_val((unsigned long *)&vcpu->arch.guest_context, + riscv_insn_extract_rs2(insn), + (unsigned long *)&data64); + } else if (riscv_insn_is_c_sdsp(insn) && riscv_insn_extract_rd(insn)) { len = 8; - data64 = GET_RS2C(insn, &vcpu->arch.guest_context); + rv_insn_reg_get_val((unsigned long *)&vcpu->arch.guest_context, + riscv_insn_extract_csca_rs2(insn), + (unsigned long *)&data64); #endif - } else if ((insn & INSN_MASK_C_SW) == INSN_MATCH_C_SW) { + } else if (riscv_insn_is_c_sw(insn)) { len = 4; - data32 = GET_RS2S(insn, &vcpu->arch.guest_context); - } else if ((insn & INSN_MASK_C_SWSP) == INSN_MATCH_C_SWSP && - ((insn >> SH_RD) & 0x1f)) { + rv_insn_reg_get_val((unsigned long *)&vcpu->arch.guest_context, + riscv_insn_extract_rs2(insn), + (unsigned long *)&data32); + } else if (riscv_insn_is_c_swsp(insn) && riscv_insn_extract_rd(insn)) { len = 4; - data32 = GET_RS2C(insn, &vcpu->arch.guest_context); + rv_insn_reg_get_val((unsigned long *)&vcpu->arch.guest_context, + riscv_insn_extract_csca_rs2(insn), + (unsigned long *)&data32); } else { return -EOPNOTSUPP; } @@ -707,6 +596,7 @@ int kvm_riscv_vcpu_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run) u32 data32; u64 data64; ulong insn; + u32 index; int len, shift; if (vcpu->arch.mmio_decode.return_handled) @@ -720,27 +610,28 @@ int kvm_riscv_vcpu_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run) len = vcpu->arch.mmio_decode.len; shift = vcpu->arch.mmio_decode.shift; + index = riscv_insn_extract_rd(insn); switch (len) { case 1: data8 = *((u8 *)run->mmio.data); - SET_RD(insn, &vcpu->arch.guest_context, - (ulong)data8 << shift >> shift); + rv_insn_reg_set_val((unsigned long *)&vcpu->arch.guest_context, + index, (ulong)data8 << shift >> shift); break; case 2: data16 = *((u16 *)run->mmio.data); - SET_RD(insn, &vcpu->arch.guest_context, - (ulong)data16 << shift >> shift); + rv_insn_reg_set_val((unsigned long *)&vcpu->arch.guest_context, + index, (ulong)data16 << shift >> shift); break; case 4: data32 = *((u32 *)run->mmio.data); - SET_RD(insn, &vcpu->arch.guest_context, - (ulong)data32 << shift >> shift); + rv_insn_reg_set_val((unsigned long *)&vcpu->arch.guest_context, + index, (ulong)data32 << shift >> shift); break; case 8: data64 = *((u64 *)run->mmio.data); - SET_RD(insn, &vcpu->arch.guest_context, - (ulong)data64 << shift >> shift); + rv_insn_reg_set_val((unsigned long *)&vcpu->arch.guest_context, + index, (ulong)data64 << shift >> shift); break; default: return -EOPNOTSUPP; From 
patchwork Fri Aug 4 02:10:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Charlie Jenkins X-Patchwork-Id: 13341128 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 7A220C0015E for ; Fri, 4 Aug 2023 02:11:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id :MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=6XAS3V/MwULgPqPA66KZiSRT6lywXrrqUSzW2LnPoWg=; b=t2Lkee0Q4sCO4S NYPQ2pZrdZEKwiM0wMH9pLT/Cb3JHAcnhO3V+zDk3J+hagPaMXyeMTt9f9t1XvNVSnZQbf4wkxpK5 tS5CxMcUj+Ou823KXmYN2Iewqmmvhd7G43Hpo/MiO0LwWrRvRygTtEdliayxlKP6gVAWlM1ogKqlK TfHVXUwPG9PQWcm+6xZj+fQfG/zEC8Cg9rDhHh4eSApqW9STQrAgXKZ1McKpoJEFQ5ZihM8va90xb wHah4ln89bJN47Ffe9dnVNg3qJc7qoQ6GAu9SgeUjCxPdfZFDQdWdyA23GdY0lZ950v4Xh2dgnrpd QbGfZ10kHjOqOXcKBMXA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qRkHh-00BLHI-05; Fri, 04 Aug 2023 02:11:21 +0000 Received: from mail-pf1-x432.google.com ([2607:f8b0:4864:20::432]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qRkHY-00BL5z-1P for linux-riscv@lists.infradead.org; Fri, 04 Aug 2023 02:11:18 +0000 Received: by mail-pf1-x432.google.com with SMTP id d2e1a72fcca58-686c06b806cso1153611b3a.2 for ; Thu, 03 Aug 2023 19:11:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=rivosinc-com.20221208.gappssmtp.com; s=20221208; t=1691115069; x=1691719869; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=/9x+TFVnCGe2WU5RlGvKxqVtj4tmROvC82ze+5KJL3U=; b=IkJmUMf16prXQkjP4lj5rTDJLv7FD4/XceOUaKRy9nwYg1AbjR/4hACj7L4bdWnNht i/68KOTrShtWhocen7puRJ0ZaDPHRmsWFaBrqaPGdFnd0H1vvz0dIffzYNiC4Sgx5nZl sOvmRmlJ+e0a66iwtno4OqFeMbZ6mHXcj5Xi/jbqbEpgrG75Gt1Mz8kXrKk4nIcFtlVh bLy9PqCz8rm0g2iU/gPweYHr5CSnbe/J5MA5IoTzLIHYkq3S4sx80FZgJyF51i4nK1wG JSKBN38+oh9vdwZG72hCVhOSiz6e7xSAoKkN3cE2pEDlaVUDrUaQBnQXT2hZ1YjnL+3c 44Hg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691115069; x=1691719869; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=/9x+TFVnCGe2WU5RlGvKxqVtj4tmROvC82ze+5KJL3U=; b=B3G67QWGBaK8EwtSfDK9Novg923D/QoFlUuSHxwTn0mNlbsImxMBaTEK5E56XZ6JOz t+rdJBisk2zlriGNxF3pWLMzWw3+wmv7FNt5kKp7sSF2jhzlLIHnLfYm/MMwNN+7RClO 4AArNSGmIWM3vQVB9JHwpbZSed0WAh1N+DLhh9v4mviVZCi5kpfEe3ayt2FTE5BoaRS2 bFjEhqpDubAHZ/uA5vedS212qsMNLS37XzfJfdLCTUFP4q3+qVAxLjl/b5YkaGk6kmj6 iM6vxrMATCGVsjRq0uvcdS94gykAx8MHRI7mdH8rIxYNd2D1MBaJF7qd0fAHlw3dLKgy v61g== X-Gm-Message-State: AOJu0Yzvx/KDyTDbs7dC7Q/Z1JhdhZdgvkaCtlCf4hxQ7+Lut8SlDne+ OLKKbpWE6Ka8w4MDrvIzWG4LHQ== X-Google-Smtp-Source: AGHT+IFkJti/J73gi8xVh6nlOFT/LodI5/N609ojJyoF75AUYfW9T3mL2OZpJIfMNG8tpG29DJWNVA== 
X-Received: by 2002:a05:6a00:228f:b0:666:ecf4:ed6d with SMTP id f15-20020a056a00228f00b00666ecf4ed6dmr469564pfe.18.1691115069170; Thu, 03 Aug 2023 19:11:09 -0700 (PDT) Received: from charlie.ba.rivosinc.com ([66.220.2.162]) by smtp.gmail.com with ESMTPSA id g6-20020a655806000000b0055c558ac4edsm369499pgr.46.2023.08.03.19.11.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 03 Aug 2023 19:11:08 -0700 (PDT) From: Charlie Jenkins Date: Thu, 03 Aug 2023 19:10:34 -0700 Subject: [PATCH 09/10] RISC-V: bpf: Refactor instructions MIME-Version: 1.0 Message-Id: <20230803-master-refactor-instructions-v4-v1-9-2128e61fa4ff@rivosinc.com> References: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com> In-Reply-To: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com> To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, bpf@vger.kernel.org Cc: Paul Walmsley , Palmer Dabbelt , Albert Ou , Peter Zijlstra , Josh Poimboeuf , Jason Baron , Steven Rostedt , Ard Biesheuvel , Anup Patel , Atish Patra , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa , =?utf-8?b?Qmo=?= =?utf-8?b?w7ZybiBUw7ZwZWw=?= , Luke Nelson , Xi Wang , Nam Cao , Charlie Jenkins X-Mailer: b4 0.12.3 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230803_191112_544921_7BEF1E37 X-CRM114-Status: UNSURE ( 9.91 ) X-CRM114-Notice: Please train this message. X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Use shared instruction definitions in insn.h instead of manually constructing them. 
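For illustration only, here is a minimal standalone sketch (user-space C, not kernel code) of the I-type encoding that bpf_jit.h used to open-code via rv_i_insn()/rv_addi() and that is now expected to come from the shared insn.h; the field layout below is copied from the helpers this patch removes, and the exact names provided by insn.h are not restated here:

#include <stdint.h>
#include <stdio.h>

/* Same bit layout as the rv_i_insn() helper removed from bpf_jit.h. */
static inline uint32_t rv_i_insn(uint16_t imm11_0, uint8_t rs1, uint8_t funct3,
                                 uint8_t rd, uint8_t opcode)
{
        return ((uint32_t)imm11_0 << 20) | ((uint32_t)rs1 << 15) |
               ((uint32_t)funct3 << 12) | ((uint32_t)rd << 7) | opcode;
}

/* ADDI is funct3 0 in the OP-IMM (0x13) major opcode. */
static inline uint32_t rv_addi(uint8_t rd, uint8_t rs1, uint16_t imm11_0)
{
        return rv_i_insn(imm11_0, rs1, 0, rd, 0x13);
}

int main(void)
{
        /* addi a0, a0, 1 -- a0 is x10. */
        printf("0x%08x\n", rv_addi(10, 10, 1));  /* prints 0x00150513 */
        return 0;
}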
Signed-off-by: Charlie Jenkins --- arch/riscv/net/bpf_jit.h | 707 +---------------------------------------------- 1 file changed, 2 insertions(+), 705 deletions(-) diff --git a/arch/riscv/net/bpf_jit.h b/arch/riscv/net/bpf_jit.h index 2717f5490428..3f79c938166d 100644 --- a/arch/riscv/net/bpf_jit.h +++ b/arch/riscv/net/bpf_jit.h @@ -12,58 +12,8 @@ #include #include #include - -static inline bool rvc_enabled(void) -{ - return IS_ENABLED(CONFIG_RISCV_ISA_C); -} - -enum { - RV_REG_ZERO = 0, /* The constant value 0 */ - RV_REG_RA = 1, /* Return address */ - RV_REG_SP = 2, /* Stack pointer */ - RV_REG_GP = 3, /* Global pointer */ - RV_REG_TP = 4, /* Thread pointer */ - RV_REG_T0 = 5, /* Temporaries */ - RV_REG_T1 = 6, - RV_REG_T2 = 7, - RV_REG_FP = 8, /* Saved register/frame pointer */ - RV_REG_S1 = 9, /* Saved register */ - RV_REG_A0 = 10, /* Function argument/return values */ - RV_REG_A1 = 11, /* Function arguments */ - RV_REG_A2 = 12, - RV_REG_A3 = 13, - RV_REG_A4 = 14, - RV_REG_A5 = 15, - RV_REG_A6 = 16, - RV_REG_A7 = 17, - RV_REG_S2 = 18, /* Saved registers */ - RV_REG_S3 = 19, - RV_REG_S4 = 20, - RV_REG_S5 = 21, - RV_REG_S6 = 22, - RV_REG_S7 = 23, - RV_REG_S8 = 24, - RV_REG_S9 = 25, - RV_REG_S10 = 26, - RV_REG_S11 = 27, - RV_REG_T3 = 28, /* Temporaries */ - RV_REG_T4 = 29, - RV_REG_T5 = 30, - RV_REG_T6 = 31, -}; - -static inline bool is_creg(u8 reg) -{ - return (1 << reg) & (BIT(RV_REG_FP) | - BIT(RV_REG_S1) | - BIT(RV_REG_A0) | - BIT(RV_REG_A1) | - BIT(RV_REG_A2) | - BIT(RV_REG_A3) | - BIT(RV_REG_A4) | - BIT(RV_REG_A5)); -} +#include +#include struct rv_jit_context { struct bpf_prog *prog; @@ -221,659 +171,6 @@ static inline int rv_offset(int insn, int off, struct rv_jit_context *ctx) return ninsns_rvoff(to - from); } -/* Instruction formats. */ - -static inline u32 rv_r_insn(u8 funct7, u8 rs2, u8 rs1, u8 funct3, u8 rd, - u8 opcode) -{ - return (funct7 << 25) | (rs2 << 20) | (rs1 << 15) | (funct3 << 12) | - (rd << 7) | opcode; -} - -static inline u32 rv_i_insn(u16 imm11_0, u8 rs1, u8 funct3, u8 rd, u8 opcode) -{ - return (imm11_0 << 20) | (rs1 << 15) | (funct3 << 12) | (rd << 7) | - opcode; -} - -static inline u32 rv_s_insn(u16 imm11_0, u8 rs2, u8 rs1, u8 funct3, u8 opcode) -{ - u8 imm11_5 = imm11_0 >> 5, imm4_0 = imm11_0 & 0x1f; - - return (imm11_5 << 25) | (rs2 << 20) | (rs1 << 15) | (funct3 << 12) | - (imm4_0 << 7) | opcode; -} - -static inline u32 rv_b_insn(u16 imm12_1, u8 rs2, u8 rs1, u8 funct3, u8 opcode) -{ - u8 imm12 = ((imm12_1 & 0x800) >> 5) | ((imm12_1 & 0x3f0) >> 4); - u8 imm4_1 = ((imm12_1 & 0xf) << 1) | ((imm12_1 & 0x400) >> 10); - - return (imm12 << 25) | (rs2 << 20) | (rs1 << 15) | (funct3 << 12) | - (imm4_1 << 7) | opcode; -} - -static inline u32 rv_u_insn(u32 imm31_12, u8 rd, u8 opcode) -{ - return (imm31_12 << 12) | (rd << 7) | opcode; -} - -static inline u32 rv_j_insn(u32 imm20_1, u8 rd, u8 opcode) -{ - u32 imm; - - imm = (imm20_1 & 0x80000) | ((imm20_1 & 0x3ff) << 9) | - ((imm20_1 & 0x400) >> 2) | ((imm20_1 & 0x7f800) >> 11); - - return (imm << 12) | (rd << 7) | opcode; -} - -static inline u32 rv_amo_insn(u8 funct5, u8 aq, u8 rl, u8 rs2, u8 rs1, - u8 funct3, u8 rd, u8 opcode) -{ - u8 funct7 = (funct5 << 2) | (aq << 1) | rl; - - return rv_r_insn(funct7, rs2, rs1, funct3, rd, opcode); -} - -/* RISC-V compressed instruction formats. 
*/ - -static inline u16 rv_cr_insn(u8 funct4, u8 rd, u8 rs2, u8 op) -{ - return (funct4 << 12) | (rd << 7) | (rs2 << 2) | op; -} - -static inline u16 rv_ci_insn(u8 funct3, u32 imm6, u8 rd, u8 op) -{ - u32 imm; - - imm = ((imm6 & 0x20) << 7) | ((imm6 & 0x1f) << 2); - return (funct3 << 13) | (rd << 7) | op | imm; -} - -static inline u16 rv_css_insn(u8 funct3, u32 uimm, u8 rs2, u8 op) -{ - return (funct3 << 13) | (uimm << 7) | (rs2 << 2) | op; -} - -static inline u16 rv_ciw_insn(u8 funct3, u32 uimm, u8 rd, u8 op) -{ - return (funct3 << 13) | (uimm << 5) | ((rd & 0x7) << 2) | op; -} - -static inline u16 rv_cl_insn(u8 funct3, u32 imm_hi, u8 rs1, u32 imm_lo, u8 rd, - u8 op) -{ - return (funct3 << 13) | (imm_hi << 10) | ((rs1 & 0x7) << 7) | - (imm_lo << 5) | ((rd & 0x7) << 2) | op; -} - -static inline u16 rv_cs_insn(u8 funct3, u32 imm_hi, u8 rs1, u32 imm_lo, u8 rs2, - u8 op) -{ - return (funct3 << 13) | (imm_hi << 10) | ((rs1 & 0x7) << 7) | - (imm_lo << 5) | ((rs2 & 0x7) << 2) | op; -} - -static inline u16 rv_ca_insn(u8 funct6, u8 rd, u8 funct2, u8 rs2, u8 op) -{ - return (funct6 << 10) | ((rd & 0x7) << 7) | (funct2 << 5) | - ((rs2 & 0x7) << 2) | op; -} - -static inline u16 rv_cb_insn(u8 funct3, u32 imm6, u8 funct2, u8 rd, u8 op) -{ - u32 imm; - - imm = ((imm6 & 0x20) << 7) | ((imm6 & 0x1f) << 2); - return (funct3 << 13) | (funct2 << 10) | ((rd & 0x7) << 7) | op | imm; -} - -/* Instructions shared by both RV32 and RV64. */ - -static inline u32 rv_addi(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 0, rd, 0x13); -} - -static inline u32 rv_andi(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 7, rd, 0x13); -} - -static inline u32 rv_ori(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 6, rd, 0x13); -} - -static inline u32 rv_xori(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 4, rd, 0x13); -} - -static inline u32 rv_slli(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 1, rd, 0x13); -} - -static inline u32 rv_srli(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 5, rd, 0x13); -} - -static inline u32 rv_srai(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(0x400 | imm11_0, rs1, 5, rd, 0x13); -} - -static inline u32 rv_lui(u8 rd, u32 imm31_12) -{ - return rv_u_insn(imm31_12, rd, 0x37); -} - -static inline u32 rv_auipc(u8 rd, u32 imm31_12) -{ - return rv_u_insn(imm31_12, rd, 0x17); -} - -static inline u32 rv_add(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 0, rd, 0x33); -} - -static inline u32 rv_sub(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0x20, rs2, rs1, 0, rd, 0x33); -} - -static inline u32 rv_sltu(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 3, rd, 0x33); -} - -static inline u32 rv_and(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 7, rd, 0x33); -} - -static inline u32 rv_or(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 6, rd, 0x33); -} - -static inline u32 rv_xor(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 4, rd, 0x33); -} - -static inline u32 rv_sll(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 1, rd, 0x33); -} - -static inline u32 rv_srl(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 5, rd, 0x33); -} - -static inline u32 rv_sra(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0x20, rs2, rs1, 5, rd, 0x33); -} - -static inline u32 rv_mul(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(1, rs2, rs1, 0, rd, 0x33); -} - -static inline u32 rv_mulhu(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(1, rs2, rs1, 3, rd, 0x33); -} - 
-static inline u32 rv_divu(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(1, rs2, rs1, 5, rd, 0x33); -} - -static inline u32 rv_remu(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(1, rs2, rs1, 7, rd, 0x33); -} - -static inline u32 rv_jal(u8 rd, u32 imm20_1) -{ - return rv_j_insn(imm20_1, rd, 0x6f); -} - -static inline u32 rv_jalr(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 0, rd, 0x67); -} - -static inline u32 rv_beq(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_b_insn(imm12_1, rs2, rs1, 0, 0x63); -} - -static inline u32 rv_bne(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_b_insn(imm12_1, rs2, rs1, 1, 0x63); -} - -static inline u32 rv_bltu(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_b_insn(imm12_1, rs2, rs1, 6, 0x63); -} - -static inline u32 rv_bgtu(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_bltu(rs2, rs1, imm12_1); -} - -static inline u32 rv_bgeu(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_b_insn(imm12_1, rs2, rs1, 7, 0x63); -} - -static inline u32 rv_bleu(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_bgeu(rs2, rs1, imm12_1); -} - -static inline u32 rv_blt(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_b_insn(imm12_1, rs2, rs1, 4, 0x63); -} - -static inline u32 rv_bgt(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_blt(rs2, rs1, imm12_1); -} - -static inline u32 rv_bge(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_b_insn(imm12_1, rs2, rs1, 5, 0x63); -} - -static inline u32 rv_ble(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_bge(rs2, rs1, imm12_1); -} - -static inline u32 rv_lw(u8 rd, u16 imm11_0, u8 rs1) -{ - return rv_i_insn(imm11_0, rs1, 2, rd, 0x03); -} - -static inline u32 rv_lbu(u8 rd, u16 imm11_0, u8 rs1) -{ - return rv_i_insn(imm11_0, rs1, 4, rd, 0x03); -} - -static inline u32 rv_lhu(u8 rd, u16 imm11_0, u8 rs1) -{ - return rv_i_insn(imm11_0, rs1, 5, rd, 0x03); -} - -static inline u32 rv_sb(u8 rs1, u16 imm11_0, u8 rs2) -{ - return rv_s_insn(imm11_0, rs2, rs1, 0, 0x23); -} - -static inline u32 rv_sh(u8 rs1, u16 imm11_0, u8 rs2) -{ - return rv_s_insn(imm11_0, rs2, rs1, 1, 0x23); -} - -static inline u32 rv_sw(u8 rs1, u16 imm11_0, u8 rs2) -{ - return rv_s_insn(imm11_0, rs2, rs1, 2, 0x23); -} - -static inline u32 rv_amoadd_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0, aq, rl, rs2, rs1, 2, rd, 0x2f); -} - -static inline u32 rv_amoand_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0xc, aq, rl, rs2, rs1, 2, rd, 0x2f); -} - -static inline u32 rv_amoor_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x8, aq, rl, rs2, rs1, 2, rd, 0x2f); -} - -static inline u32 rv_amoxor_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x4, aq, rl, rs2, rs1, 2, rd, 0x2f); -} - -static inline u32 rv_amoswap_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x1, aq, rl, rs2, rs1, 2, rd, 0x2f); -} - -static inline u32 rv_lr_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x2, aq, rl, rs2, rs1, 2, rd, 0x2f); -} - -static inline u32 rv_sc_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x3, aq, rl, rs2, rs1, 2, rd, 0x2f); -} - -static inline u32 rv_fence(u8 pred, u8 succ) -{ - u16 imm11_0 = pred << 4 | succ; - - return rv_i_insn(imm11_0, 0, 0, 0, 0xf); -} - -static inline u32 rv_nop(void) -{ - return rv_i_insn(0, 0, 0, 0, 0x13); -} - -/* RVC instrutions. 
*/ - -static inline u16 rvc_addi4spn(u8 rd, u32 imm10) -{ - u32 imm; - - imm = ((imm10 & 0x30) << 2) | ((imm10 & 0x3c0) >> 4) | - ((imm10 & 0x4) >> 1) | ((imm10 & 0x8) >> 3); - return rv_ciw_insn(0x0, imm, rd, 0x0); -} - -static inline u16 rvc_lw(u8 rd, u32 imm7, u8 rs1) -{ - u32 imm_hi, imm_lo; - - imm_hi = (imm7 & 0x38) >> 3; - imm_lo = ((imm7 & 0x4) >> 1) | ((imm7 & 0x40) >> 6); - return rv_cl_insn(0x2, imm_hi, rs1, imm_lo, rd, 0x0); -} - -static inline u16 rvc_sw(u8 rs1, u32 imm7, u8 rs2) -{ - u32 imm_hi, imm_lo; - - imm_hi = (imm7 & 0x38) >> 3; - imm_lo = ((imm7 & 0x4) >> 1) | ((imm7 & 0x40) >> 6); - return rv_cs_insn(0x6, imm_hi, rs1, imm_lo, rs2, 0x0); -} - -static inline u16 rvc_addi(u8 rd, u32 imm6) -{ - return rv_ci_insn(0, imm6, rd, 0x1); -} - -static inline u16 rvc_li(u8 rd, u32 imm6) -{ - return rv_ci_insn(0x2, imm6, rd, 0x1); -} - -static inline u16 rvc_addi16sp(u32 imm10) -{ - u32 imm; - - imm = ((imm10 & 0x200) >> 4) | (imm10 & 0x10) | ((imm10 & 0x40) >> 3) | - ((imm10 & 0x180) >> 6) | ((imm10 & 0x20) >> 5); - return rv_ci_insn(0x3, imm, RV_REG_SP, 0x1); -} - -static inline u16 rvc_lui(u8 rd, u32 imm6) -{ - return rv_ci_insn(0x3, imm6, rd, 0x1); -} - -static inline u16 rvc_srli(u8 rd, u32 imm6) -{ - return rv_cb_insn(0x4, imm6, 0, rd, 0x1); -} - -static inline u16 rvc_srai(u8 rd, u32 imm6) -{ - return rv_cb_insn(0x4, imm6, 0x1, rd, 0x1); -} - -static inline u16 rvc_andi(u8 rd, u32 imm6) -{ - return rv_cb_insn(0x4, imm6, 0x2, rd, 0x1); -} - -static inline u16 rvc_sub(u8 rd, u8 rs) -{ - return rv_ca_insn(0x23, rd, 0, rs, 0x1); -} - -static inline u16 rvc_xor(u8 rd, u8 rs) -{ - return rv_ca_insn(0x23, rd, 0x1, rs, 0x1); -} - -static inline u16 rvc_or(u8 rd, u8 rs) -{ - return rv_ca_insn(0x23, rd, 0x2, rs, 0x1); -} - -static inline u16 rvc_and(u8 rd, u8 rs) -{ - return rv_ca_insn(0x23, rd, 0x3, rs, 0x1); -} - -static inline u16 rvc_slli(u8 rd, u32 imm6) -{ - return rv_ci_insn(0, imm6, rd, 0x2); -} - -static inline u16 rvc_lwsp(u8 rd, u32 imm8) -{ - u32 imm; - - imm = ((imm8 & 0xc0) >> 6) | (imm8 & 0x3c); - return rv_ci_insn(0x2, imm, rd, 0x2); -} - -static inline u16 rvc_jr(u8 rs1) -{ - return rv_cr_insn(0x8, rs1, RV_REG_ZERO, 0x2); -} - -static inline u16 rvc_mv(u8 rd, u8 rs) -{ - return rv_cr_insn(0x8, rd, rs, 0x2); -} - -static inline u16 rvc_jalr(u8 rs1) -{ - return rv_cr_insn(0x9, rs1, RV_REG_ZERO, 0x2); -} - -static inline u16 rvc_add(u8 rd, u8 rs) -{ - return rv_cr_insn(0x9, rd, rs, 0x2); -} - -static inline u16 rvc_swsp(u32 imm8, u8 rs2) -{ - u32 imm; - - imm = (imm8 & 0x3c) | ((imm8 & 0xc0) >> 6); - return rv_css_insn(0x6, imm, rs2, 0x2); -} - -/* - * RV64-only instructions. - * - * These instructions are not available on RV32. Wrap them below a #if to - * ensure that the RV32 JIT doesn't emit any of these instructions. 
- */ - -#if __riscv_xlen == 64 - -static inline u32 rv_addiw(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 0, rd, 0x1b); -} - -static inline u32 rv_slliw(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 1, rd, 0x1b); -} - -static inline u32 rv_srliw(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 5, rd, 0x1b); -} - -static inline u32 rv_sraiw(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(0x400 | imm11_0, rs1, 5, rd, 0x1b); -} - -static inline u32 rv_addw(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 0, rd, 0x3b); -} - -static inline u32 rv_subw(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0x20, rs2, rs1, 0, rd, 0x3b); -} - -static inline u32 rv_sllw(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 1, rd, 0x3b); -} - -static inline u32 rv_srlw(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 5, rd, 0x3b); -} - -static inline u32 rv_sraw(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0x20, rs2, rs1, 5, rd, 0x3b); -} - -static inline u32 rv_mulw(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(1, rs2, rs1, 0, rd, 0x3b); -} - -static inline u32 rv_divuw(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(1, rs2, rs1, 5, rd, 0x3b); -} - -static inline u32 rv_remuw(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(1, rs2, rs1, 7, rd, 0x3b); -} - -static inline u32 rv_ld(u8 rd, u16 imm11_0, u8 rs1) -{ - return rv_i_insn(imm11_0, rs1, 3, rd, 0x03); -} - -static inline u32 rv_lwu(u8 rd, u16 imm11_0, u8 rs1) -{ - return rv_i_insn(imm11_0, rs1, 6, rd, 0x03); -} - -static inline u32 rv_sd(u8 rs1, u16 imm11_0, u8 rs2) -{ - return rv_s_insn(imm11_0, rs2, rs1, 3, 0x23); -} - -static inline u32 rv_amoadd_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0, aq, rl, rs2, rs1, 3, rd, 0x2f); -} - -static inline u32 rv_amoand_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0xc, aq, rl, rs2, rs1, 3, rd, 0x2f); -} - -static inline u32 rv_amoor_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x8, aq, rl, rs2, rs1, 3, rd, 0x2f); -} - -static inline u32 rv_amoxor_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x4, aq, rl, rs2, rs1, 3, rd, 0x2f); -} - -static inline u32 rv_amoswap_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x1, aq, rl, rs2, rs1, 3, rd, 0x2f); -} - -static inline u32 rv_lr_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x2, aq, rl, rs2, rs1, 3, rd, 0x2f); -} - -static inline u32 rv_sc_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x3, aq, rl, rs2, rs1, 3, rd, 0x2f); -} - -/* RV64-only RVC instructions. 
*/ - -static inline u16 rvc_ld(u8 rd, u32 imm8, u8 rs1) -{ - u32 imm_hi, imm_lo; - - imm_hi = (imm8 & 0x38) >> 3; - imm_lo = (imm8 & 0xc0) >> 6; - return rv_cl_insn(0x3, imm_hi, rs1, imm_lo, rd, 0x0); -} - -static inline u16 rvc_sd(u8 rs1, u32 imm8, u8 rs2) -{ - u32 imm_hi, imm_lo; - - imm_hi = (imm8 & 0x38) >> 3; - imm_lo = (imm8 & 0xc0) >> 6; - return rv_cs_insn(0x7, imm_hi, rs1, imm_lo, rs2, 0x0); -} - -static inline u16 rvc_subw(u8 rd, u8 rs) -{ - return rv_ca_insn(0x27, rd, 0, rs, 0x1); -} - -static inline u16 rvc_addiw(u8 rd, u32 imm6) -{ - return rv_ci_insn(0x1, imm6, rd, 0x1); -} - -static inline u16 rvc_ldsp(u8 rd, u32 imm9) -{ - u32 imm; - - imm = ((imm9 & 0x1c0) >> 6) | (imm9 & 0x38); - return rv_ci_insn(0x3, imm, rd, 0x2); -} - -static inline u16 rvc_sdsp(u32 imm9, u8 rs2) -{ - u32 imm; - - imm = (imm9 & 0x38) | ((imm9 & 0x1c0) >> 6); - return rv_css_insn(0x7, imm, rs2, 0x2); -} - -#endif /* __riscv_xlen == 64 */ - /* Helper functions that emit RVC instructions when possible. */ static inline void emit_jalr(u8 rd, u8 rs, s32 imm, struct rv_jit_context *ctx) From patchwork Fri Aug 4 02:10:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Charlie Jenkins X-Patchwork-Id: 13341127 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 0DDC2EB64DD for ; Fri, 4 Aug 2023 02:11:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References:Message-Id :MIME-Version:Subject:Date:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=ppl+bVC+7xyn9eH+3FmICCwASGEImyH5oIuHA8+JWN4=; b=4olkI69aRBg4F1 xzoOR/2kSZH9UWIIlCFK8qefnLO904aNweivJen46vdL0K/ZMiJ31FCLYmrXVOh+Dv0W8vFyy/4by nr8OYISV+GPhy0yLjUFDSpc3729ZB1b9aXuZSSb4Ka5bWkeW1sMEcMnIXec/xofG70sDmwn3h/q+U UKcLMPNepULGfnzdShCFqvuspfLgbajXh/B8QDbqHTEmi2+sX0hr1eUK5ivciwD/GGyJ0Wb40VfTf 3xgTLozwJsmRbpb6YHmXNFSxqMJt/C7UDjxMfP82eYYUyJO40sn4WvkboKDfb2Mk73IlzZ7Srn0LD y07s8xkJcCfuA/L/CEtw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qRkHf-00BLFP-19; Fri, 04 Aug 2023 02:11:19 +0000 Received: from mail-pf1-x42a.google.com ([2607:f8b0:4864:20::42a]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qRkHa-00BL8O-0y for linux-riscv@lists.infradead.org; Fri, 04 Aug 2023 02:11:17 +0000 Received: by mail-pf1-x42a.google.com with SMTP id d2e1a72fcca58-686f090310dso1510922b3a.0 for ; Thu, 03 Aug 2023 19:11:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=rivosinc-com.20221208.gappssmtp.com; s=20221208; t=1691115071; x=1691719871; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=ih06HArn9iAryRcOKi/nyaTeAf4B1pYQ5n3lzBGwExw=; b=ew4rB+H6wBVVmeh/ui4LIzo+yvnm98oRAW4V9bmZm0KpT/glMjqRgPghU5Pe7h4Esc ULiTigZbpum8dCcqLDUYCXDomBKmpj2JOHEklft07nNCBx1z82UYhCKCpuxj1PHAOzaE 
NMax1U3YW9v1aHa4//bwc1cTcDl4jdIR8DOmMX78OtylMdsneAv0M+DVuZ/oUvlzDjuy /Eo3HBz3BDHFXmGEXCMfBNAi9hv3UebxWREasDtlfQDxVygFxm5W9omrOT80TWEdv1+I 6Q3E3HhXfnIfxGtKYjYgSXjJeDKzMUjL0JxLCdBqUgV5jfcWHjIg0Oq1bhb98rkkscuS E9Qg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691115071; x=1691719871; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ih06HArn9iAryRcOKi/nyaTeAf4B1pYQ5n3lzBGwExw=; b=F4KmF1QZnnFYcBkMD82BN6GxxsE0EsnUGCOfFha0kEVsarZUQoD//Fgq6Ne1VzTPYm 08b40mtyqFXrxYlerJipi0sGPQHtDpntvmZjg96h0gSsOlPEhOhhZSpKQ1OXhDEAZVpt DjNlUiY6jjrgpyzA1wDRO/GmvynW+N5U/VHMOUeMgk6ktxl/YnE1KhqikrsrYcG3EHKZ 13kKvXCfFp9nesnqVlH4hiyJwesWxnJVRi8DELHm55v9AMCmunMkAXGI++nJJFIz+LjO QHwRponCN/t8yiMZw+yg0tgxLhxqD6Ga2u9sxfMBuec9xZNgAM0GuvRf8MrRJVbsTK2h /8NA== X-Gm-Message-State: AOJu0Yy7zowd2frp2moXkJsTg31cQROLZ9beXowpKh4+xJZuXUzlbxmB 3YI7aJrJeHzrYgdCu+IVF8Xldw== X-Google-Smtp-Source: AGHT+IHuwaCo50Vehua+AGfdkcRKSRRWmSFHlzqzBP1tYtF/TOsW9qAWFW9rXyDnlZAMKZPqtqPdwQ== X-Received: by 2002:a05:6a00:189d:b0:687:2be1:e2f6 with SMTP id x29-20020a056a00189d00b006872be1e2f6mr555577pfh.16.1691115071421; Thu, 03 Aug 2023 19:11:11 -0700 (PDT) Received: from charlie.ba.rivosinc.com ([66.220.2.162]) by smtp.gmail.com with ESMTPSA id g6-20020a655806000000b0055c558ac4edsm369499pgr.46.2023.08.03.19.11.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 03 Aug 2023 19:11:10 -0700 (PDT) From: Charlie Jenkins Date: Thu, 03 Aug 2023 19:10:35 -0700 Subject: [PATCH 10/10] RISC-V: Refactor bug and traps instructions MIME-Version: 1.0 Message-Id: <20230803-master-refactor-instructions-v4-v1-10-2128e61fa4ff@rivosinc.com> References: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com> In-Reply-To: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com> To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, bpf@vger.kernel.org Cc: Paul Walmsley , Palmer Dabbelt , Albert Ou , Peter Zijlstra , Josh Poimboeuf , Jason Baron , Steven Rostedt , Ard Biesheuvel , Anup Patel , Atish Patra , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa , =?utf-8?b?Qmo=?= =?utf-8?b?w7ZybiBUw7ZwZWw=?= , Luke Nelson , Xi Wang , Nam Cao , Charlie Jenkins X-Mailer: b4 0.12.3 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230803_191114_356611_1919FD15 X-CRM114-Status: GOOD ( 10.77 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Use shared instruction definitions in insn.h instead of manually constructing them. 
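For illustration only, a standalone sketch (plain C, not the kernel code) of the check that is_valid_bugaddr() performs: decide from the low two bits whether the trapping instruction is compressed, then match it against the c.ebreak or ebreak encoding. The constants are the values the old bug.h open-coded (0x9002 and 0x00100073), which the shared RVC_MATCH_C_EBREAK/RVG_MATCH_EBREAK definitions are expected to mirror:

#include <stdbool.h>
#include <stdint.h>

#define BUG_INSN_32	0x00100073u	/* ebreak */
#define BUG_INSN_16	0x9002u		/* c.ebreak */

/* A 32-bit instruction has both low bits set; anything else is compressed. */
static inline bool insn_is_compressed(uint32_t insn)
{
        return (insn & 0x3) != 0x3;
}

static bool is_bug_insn(uint32_t insn)
{
        if (insn_is_compressed(insn))
                return (insn & 0xffff) == BUG_INSN_16;
        return insn == BUG_INSN_32;
}

With these definitions, is_bug_insn(0x9002) and is_bug_insn(0x00100073) both return true; any other value is rejected.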
Signed-off-by: Charlie Jenkins --- arch/riscv/include/asm/bug.h | 18 +++++------------- arch/riscv/kernel/traps.c | 9 +++++---- 2 files changed, 10 insertions(+), 17 deletions(-) diff --git a/arch/riscv/include/asm/bug.h b/arch/riscv/include/asm/bug.h index 1aaea81fb141..6d9002d93f85 100644 --- a/arch/riscv/include/asm/bug.h +++ b/arch/riscv/include/asm/bug.h @@ -11,21 +11,13 @@ #include #include +#include -#define __INSN_LENGTH_MASK _UL(0x3) -#define __INSN_LENGTH_32 _UL(0x3) -#define __COMPRESSED_INSN_MASK _UL(0xffff) +#define __IS_BUG_INSN_32(insn) riscv_insn_is_c_ebreak(insn) +#define __IS_BUG_INSN_16(insn) riscv_insn_is_ebreak(insn) -#define __BUG_INSN_32 _UL(0x00100073) /* ebreak */ -#define __BUG_INSN_16 _UL(0x9002) /* c.ebreak */ - -#define GET_INSN_LENGTH(insn) \ -({ \ - unsigned long __len; \ - __len = ((insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32) ? \ - 4UL : 2UL; \ - __len; \ -}) +#define __BUG_INSN_32 RVG_MATCH_EBREAK +#define __BUG_INSN_16 RVC_MATCH_C_EBREAK typedef u32 bug_insn_t; diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c index f910dfccbf5d..970b118d36b5 100644 --- a/arch/riscv/kernel/traps.c +++ b/arch/riscv/kernel/traps.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include #include @@ -243,7 +244,7 @@ static inline unsigned long get_break_insn_length(unsigned long pc) if (get_kernel_nofault(insn, (bug_insn_t *)pc)) return 0; - return GET_INSN_LENGTH(insn); + return INSN_LEN(insn); } void handle_break(struct pt_regs *regs) @@ -389,10 +390,10 @@ int is_valid_bugaddr(unsigned long pc) return 0; if (get_kernel_nofault(insn, (bug_insn_t *)pc)) return 0; - if ((insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32) - return (insn == __BUG_INSN_32); + if (INSN_IS_C(insn)) + return __IS_BUG_INSN_16(insn); else - return ((insn & __COMPRESSED_INSN_MASK) == __BUG_INSN_16); + return __IS_BUG_INSN_32(insn); } #endif /* CONFIG_GENERIC_BUG */