From patchwork Fri Aug 4 02:10:27 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Charlie Jenkins
X-Patchwork-Id: 13341117
From: Charlie Jenkins
Date: Thu, 03 Aug 2023 19:10:27 -0700
Subject: [PATCH 02/10] RISC-V: vector: Refactor instructions
Message-Id: <20230803-master-refactor-instructions-v4-v1-2-2128e61fa4ff@rivosinc.com>
References: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
In-Reply-To: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, bpf@vger.kernel.org
Cc: Paul Walmsley, Palmer Dabbelt, Albert Ou, Peter Zijlstra,
    Josh Poimboeuf, Jason Baron, Steven Rostedt, Ard Biesheuvel,
    Anup Patel, Atish Patra, Alexei Starovoitov, Daniel Borkmann,
    Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
    John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
    Björn Töpel, Luke Nelson, Xi Wang, Nam Cao, Charlie Jenkins
X-Mailer: b4 0.12.3
Precedence: bulk
List-ID: X-Mailing-List: kvm@vger.kernel.org

Use instructions in insn.h

Signed-off-by: Charlie Jenkins
---
 arch/riscv/kernel/vector.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/riscv/kernel/vector.c b/arch/riscv/kernel/vector.c
index d67a60369e02..1433d70abdd7 100644
--- a/arch/riscv/kernel/vector.c
+++ b/arch/riscv/kernel/vector.c
@@ -18,7 +18,6 @@
 #include
 #include
 #include
-#include
 
 static bool riscv_v_implicit_uacc =
 	IS_ENABLED(CONFIG_RISCV_ISA_V_DEFAULT_ENABLE);
@@ -56,7 +55,7 @@ static bool insn_is_vector(u32 insn_buf)
 	 * All V-related instructions, including CSR operations are 4-Byte. So,
 	 * do not handle if the instruction length is not 4-Byte.
 	 */
-	if (unlikely(GET_INSN_LENGTH(insn_buf) != 4))
+	if (unlikely(INSN_LEN(insn_buf) != 4))
 		return false;
 
 	switch (opcode) {

From patchwork Fri Aug 4 02:10:28 2023
X-Patchwork-Submitter: Charlie Jenkins
X-Patchwork-Id: 13341140
From: Charlie Jenkins
Date: Thu, 03 Aug 2023 19:10:28 -0700
Subject: [PATCH 03/10] RISC-V: Refactor jump label instructions
Message-Id: <20230803-master-refactor-instructions-v4-v1-3-2128e61fa4ff@rivosinc.com>
To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, bpf@vger.kernel.org

Use shared instruction definitions in insn.h instead of manually
constructing them.
Signed-off-by: Charlie Jenkins
---
 arch/riscv/include/asm/insn.h  |  2 +-
 arch/riscv/kernel/jump_label.c | 13 ++++---------
 2 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/riscv/include/asm/insn.h b/arch/riscv/include/asm/insn.h
index 04f7649e1add..124ab02973a7 100644
--- a/arch/riscv/include/asm/insn.h
+++ b/arch/riscv/include/asm/insn.h
@@ -1984,7 +1984,7 @@ static __always_inline bool riscv_insn_is_branch(u32 code)
 		<< RVC_J_IMM_10_OFF) | \
 	(RVC_IMM_SIGN(x_) << RVC_J_IMM_SIGN_OFF); })
 
-#define RVC_EXTRACT_BTYPE_IMM(x) \
+#define RVC_EXTRACT_BZ_IMM(x) \
 	({typeof(x) x_ = (x); \
 	(RVC_X(x_, RVC_BZ_IMM_2_1_OPOFF, RVC_BZ_IMM_2_1_MASK) \
 		<< RVC_BZ_IMM_2_1_OFF) | \
diff --git a/arch/riscv/kernel/jump_label.c b/arch/riscv/kernel/jump_label.c
index e6694759dbd0..fdaac2a13eac 100644
--- a/arch/riscv/kernel/jump_label.c
+++ b/arch/riscv/kernel/jump_label.c
@@ -9,11 +9,9 @@
 #include
 #include
 #include
+#include
 #include
 
-#define RISCV_INSN_NOP 0x00000013U
-#define RISCV_INSN_JAL 0x0000006fU
-
 void arch_jump_label_transform(struct jump_entry *entry,
 			       enum jump_label_type type)
 {
@@ -26,13 +24,10 @@ void arch_jump_label_transform(struct jump_entry *entry,
 		if (WARN_ON(offset & 1 || offset < -524288 || offset >= 524288))
 			return;
 
-		insn = RISCV_INSN_JAL |
-			(((u32)offset & GENMASK(19, 12)) << (12 - 12)) |
-			(((u32)offset & GENMASK(11, 11)) << (20 - 11)) |
-			(((u32)offset & GENMASK(10, 1)) << (21 - 1)) |
-			(((u32)offset & GENMASK(20, 20)) << (31 - 20));
+		insn = RVG_OPCODE_JAL;
+		riscv_insn_insert_jtype_imm(&insn, (s32)offset);
 	} else {
-		insn = RISCV_INSN_NOP;
+		insn = RVG_OPCODE_NOP;
 	}
 
 	mutex_lock(&text_mutex);

From patchwork Fri Aug 4 02:10:29 2023
X-Patchwork-Submitter: Charlie Jenkins
X-Patchwork-Id: 13341141
From: Charlie Jenkins
Date: Thu, 03 Aug 2023 19:10:29 -0700
Subject: [PATCH 04/10] RISC-V: KGDB: Refactor instructions
Message-Id: <20230803-master-refactor-instructions-v4-v1-4-2128e61fa4ff@rivosinc.com>
To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, bpf@vger.kernel.org

Use shared instruction definitions in insn.h.
Signed-off-by: Charlie Jenkins
---
 arch/riscv/kernel/kgdb.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/arch/riscv/kernel/kgdb.c b/arch/riscv/kernel/kgdb.c
index 2393342ab362..e1305706120e 100644
--- a/arch/riscv/kernel/kgdb.c
+++ b/arch/riscv/kernel/kgdb.c
@@ -5,7 +5,6 @@
 #include
 #include
-#include
 #include
 #include
 #include
@@ -25,12 +24,12 @@ static unsigned int stepped_opcode;
 
 static int decode_register_index(unsigned long opcode, int offset)
 {
-	return (opcode >> offset) & 0x1F;
+	return (opcode >> offset) & RV_STANDARD_REG_MASK;
 }
 
 static int decode_register_index_short(unsigned long opcode, int offset)
 {
-	return ((opcode >> offset) & 0x7) + 8;
+	return ((opcode >> offset) & RV_COMPRESSED_REG_MASK) + 8;
 }
 
 /* Calculate the new address for after a step */
@@ -43,7 +42,7 @@ static int get_step_address(struct pt_regs *regs, unsigned long *next_addr)
 	if (get_kernel_nofault(op_code, (void *)pc))
 		return -EINVAL;
-	if ((op_code & __INSN_LENGTH_MASK) != INSN_C_MASK) {
+	if (INSN_IS_C(op_code)) {
 		if (riscv_insn_is_c_jalr(op_code) ||
 		    riscv_insn_is_c_jr(op_code)) {
 			rs1_num = decode_register_index(op_code, RVC_C2_RS1_OPOFF);
@@ -55,14 +54,14 @@ static int get_step_address(struct pt_regs *regs, unsigned long *next_addr)
 			rs1_num = decode_register_index_short(op_code,
 							      RVC_C1_RS1_OPOFF);
 			if (!rs1_num || regs_ptr[rs1_num] == 0)
-				*next_addr = RVC_EXTRACT_BTYPE_IMM(op_code) + pc;
+				*next_addr = RVC_EXTRACT_BZ_IMM(op_code) + pc;
 			else
 				*next_addr = pc + 2;
 		} else if (riscv_insn_is_c_bnez(op_code)) {
 			rs1_num = decode_register_index_short(op_code, RVC_C1_RS1_OPOFF);
 			if (rs1_num && regs_ptr[rs1_num] != 0)
-				*next_addr = RVC_EXTRACT_BTYPE_IMM(op_code) + pc;
+				*next_addr = RVC_EXTRACT_BZ_IMM(op_code) + pc;
 			else
 				*next_addr = pc + 2;
 		} else {

From patchwork Fri Aug 4 02:10:30 2023
X-Patchwork-Submitter: Charlie Jenkins
X-Patchwork-Id: 13341142
From: Charlie Jenkins
Date: Thu, 03 Aug 2023 19:10:30 -0700
Subject: [PATCH 05/10] RISC-V: module: Refactor instructions
Message-Id: <20230803-master-refactor-instructions-v4-v1-5-2128e61fa4ff@rivosinc.com>
To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, bpf@vger.kernel.org
Use shared instruction definitions in insn.h instead of manually
constructing them.

Additionally, apply_r_riscv_lo12_s_rela did extra work to set up the
bits for the lo12, but since -(a - b) = b - a, that work had no effect.

Signed-off-by: Charlie Jenkins
---
 arch/riscv/kernel/module.c | 80 +++++++++++-----------------------------------
 1 file changed, 18 insertions(+), 62 deletions(-)

diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
index 7c651d55fcbd..950783e5b5ae 100644
--- a/arch/riscv/kernel/module.c
+++ b/arch/riscv/kernel/module.c
@@ -12,8 +12,11 @@
 #include
 #include
 #include
+#include
 #include
 
+#define HI20_OFFSET 0x800
+
 /*
  * The auipc+jalr instruction pair can reach any PC-relative offset
  * in the range [-2^31 - 2^11, 2^31 - 2^11)
@@ -48,12 +51,8 @@ static int apply_r_riscv_branch_rela(struct module *me, u32 *location,
 				     Elf_Addr v)
 {
 	ptrdiff_t offset = (void *)v - (void *)location;
-	u32 imm12 = (offset & 0x1000) << (31 - 12);
-	u32 imm11 = (offset & 0x800) >> (11 - 7);
-	u32 imm10_5 = (offset & 0x7e0) << (30 - 10);
-	u32 imm4_1 = (offset & 0x1e) << (11 - 4);
 
-	*location = (*location & 0x1fff07f) | imm12 | imm11 | imm10_5 | imm4_1;
+	riscv_insn_insert_btype_imm(location, ((s32)offset));
 
 	return 0;
 }
@@ -61,12 +60,8 @@ static int apply_r_riscv_jal_rela(struct module *me, u32 *location,
 				  Elf_Addr v)
 {
 	ptrdiff_t offset = (void *)v - (void *)location;
-	u32 imm20 = (offset & 0x100000) << (31 - 20);
-	u32 imm19_12 = (offset & 0xff000);
-	u32 imm11 = (offset & 0x800) << (20 - 11);
-	u32 imm10_1 = (offset & 0x7fe) << (30 - 10);
 
-	*location = (*location & 0xfff) | imm20 | imm19_12 | imm11 | imm10_1;
+	riscv_insn_insert_jtype_imm(location, ((s32)offset));
 
 	return 0;
 }
@@ -74,14 +69,8 @@ static int apply_r_riscv_rvc_branch_rela(struct module *me, u32 *location,
 					 Elf_Addr v)
 {
 	ptrdiff_t offset = (void *)v - (void *)location;
-	u16 imm8 = (offset & 0x100) << (12 - 8);
-	u16 imm7_6 = (offset & 0xc0) >> (6 - 5);
-	u16 imm5 = (offset & 0x20) >> (5 - 2);
-	u16 imm4_3 = (offset & 0x18) << (12 - 5);
-	u16 imm2_1 = (offset & 0x6) << (12 - 10);
-
-	*(u16 *)location = (*(u16 *)location & 0xe383) |
-		imm8 | imm7_6 | imm5 | imm4_3 | imm2_1;
+
+	riscv_insn_insert_cbztype_imm(location, (s32)offset);
 
 	return 0;
 }
@@ -89,17 +78,8 @@ static int apply_r_riscv_rvc_jump_rela(struct module *me, u32 *location,
 				       Elf_Addr v)
 {
 	ptrdiff_t offset = (void *)v - (void *)location;
-	u16 imm11 = (offset & 0x800) << (12 - 11);
-	u16 imm10 = (offset & 0x400) >> (10 - 8);
-	u16 imm9_8 = (offset & 0x300) << (12 - 11);
-	u16 imm7 = (offset & 0x80) >> (7 - 6);
-	u16 imm6 = (offset & 0x40) << (12 - 11);
-	u16 imm5 = (offset & 0x20) >> (5 - 2);
-	u16 imm4 = (offset & 0x10) << (12 - 5);
-	u16 imm3_1 = (offset & 0xe) << (12 - 10);
-
-	*(u16 *)location = (*(u16 *)location & 0xe003) |
-		imm11 | imm10 | imm9_8 | imm7 | imm6 | imm5 | imm4 | imm3_1;
+
+	riscv_insn_insert_cjtype_imm(location, (s32)offset);
 
 	return 0;
 }
@@ -107,7 +87,6 @@ static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 *location,
 					 Elf_Addr v)
 {
 	ptrdiff_t offset = (void *)v - (void *)location;
-	s32 hi20;
 
 	if (!riscv_insn_valid_32bit_offset(offset)) {
 		pr_err(
@@ -116,8 +95,7 @@ static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 *location,
 		return -EINVAL;
 	}
 
-	hi20 = (offset + 0x800) & 0xfffff000;
-	*location = (*location & 0xfff) | hi20;
+	riscv_insn_insert_utype_imm(location, (offset + HI20_OFFSET));
 
 	return 0;
 }
@@ -128,7 +106,7 @@ static int apply_r_riscv_pcrel_lo12_i_rela(struct module *me, u32 *location,
 	 * v is the lo12 value to fill. It is calculated before calling this
 	 * handler.
 	 */
-	*location = (*location & 0xfffff) | ((v & 0xfff) << 20);
+	riscv_insn_insert_itype_imm(location, ((s32)v));
 	return 0;
 }
@@ -139,18 +117,13 @@ static int apply_r_riscv_pcrel_lo12_s_rela(struct module *me, u32 *location,
 	 * v is the lo12 value to fill. It is calculated before calling this
 	 * handler.
 	 */
-	u32 imm11_5 = (v & 0xfe0) << (31 - 11);
-	u32 imm4_0 = (v & 0x1f) << (11 - 4);
-
-	*location = (*location & 0x1fff07f) | imm11_5 | imm4_0;
+	riscv_insn_insert_stype_imm(location, ((s32)v));
 	return 0;
 }
 
 static int apply_r_riscv_hi20_rela(struct module *me, u32 *location,
 				   Elf_Addr v)
 {
-	s32 hi20;
-
 	if (IS_ENABLED(CONFIG_CMODEL_MEDLOW)) {
 		pr_err(
 			"%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
@@ -158,8 +131,7 @@ static int apply_r_riscv_hi20_rela(struct module *me, u32 *location,
 		return -EINVAL;
 	}
 
-	hi20 = ((s32)v + 0x800) & 0xfffff000;
-	*location = (*location & 0xfff) | hi20;
+	riscv_insn_insert_utype_imm(location, ((s32)v + HI20_OFFSET));
 
 	return 0;
 }
@@ -167,9 +139,7 @@ static int apply_r_riscv_lo12_i_rela(struct module *me, u32 *location,
 				     Elf_Addr v)
 {
 	/* Skip medlow checking because of filtering by HI20 already */
-	s32 hi20 = ((s32)v + 0x800) & 0xfffff000;
-	s32 lo12 = ((s32)v - hi20);
-	*location = (*location & 0xfffff) | ((lo12 & 0xfff) << 20);
+	riscv_insn_insert_itype_imm(location, (s32)v);
 	return 0;
 }
@@ -177,11 +147,7 @@ static int apply_r_riscv_lo12_s_rela(struct module *me, u32 *location,
 				     Elf_Addr v)
 {
 	/* Skip medlow checking because of filtering by HI20 already */
-	s32 hi20 = ((s32)v + 0x800) & 0xfffff000;
-	s32 lo12 = ((s32)v - hi20);
-	u32 imm11_5 = (lo12 & 0xfe0) << (31 - 11);
-	u32 imm4_0 = (lo12 & 0x1f) << (11 - 4);
-	*location = (*location & 0x1fff07f) | imm11_5 | imm4_0;
+	riscv_insn_insert_stype_imm(location, (s32)v);
 	return 0;
 }
@@ -189,7 +155,6 @@ static int apply_r_riscv_got_hi20_rela(struct module *me, u32 *location,
 				       Elf_Addr v)
 {
 	ptrdiff_t offset = (void *)v - (void *)location;
-	s32 hi20;
 
 	/* Always emit the got entry */
 	if (IS_ENABLED(CONFIG_MODULE_SECTIONS)) {
@@ -202,8 +167,7 @@ static int apply_r_riscv_got_hi20_rela(struct module *me, u32 *location,
 		return -EINVAL;
 	}
 
-	hi20 = (offset + 0x800) & 0xfffff000;
-	*location = (*location & 0xfff) | hi20;
+	riscv_insn_insert_utype_imm(location, (s32)(offset + HI20_OFFSET));
 
 	return 0;
 }
@@ -211,7 +175,6 @@ static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location,
 				       Elf_Addr v)
 {
 	ptrdiff_t offset = (void *)v - (void *)location;
-	u32 hi20, lo12;
 
 	if (!riscv_insn_valid_32bit_offset(offset)) {
 		/* Only emit the plt entry if offset over 32-bit range */
@@ -226,10 +189,7 @@ static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location,
 		}
 	}
 
-	hi20 = (offset + 0x800) & 0xfffff000;
-	lo12 = (offset - hi20) & 0xfff;
-	*location = (*location & 0xfff) | hi20;
-	*(location + 1) = (*(location + 1) & 0xfffff) | (lo12 << 20);
+	riscv_insn_insert_utype_itype_imm(location, location + 1, (s32)offset);
 
 	return 0;
 }
@@ -237,7 +197,6 @@ static int apply_r_riscv_call_rela(struct module *me, u32 *location,
 				   Elf_Addr v)
 {
 	ptrdiff_t offset = (void *)v - (void *)location;
-	u32 hi20, lo12;
 
 	if (!riscv_insn_valid_32bit_offset(offset)) {
 		pr_err(
@@ -246,10 +205,7 @@ static int apply_r_riscv_call_rela(struct module *me, u32 *location,
 		return -EINVAL;
 	}
 
-	hi20 = (offset + 0x800) & 0xfffff000;
-	lo12 = (offset - hi20) & 0xfff;
-	*location = (*location & 0xfff) | hi20;
-	*(location + 1) = (*(location + 1) & 0xfffff) | (lo12 << 20);
+	riscv_insn_insert_utype_itype_imm(location, location + 1, (s32)offset);
 
 	return 0;
 }

From patchwork Fri Aug 4 02:10:31 2023
X-Patchwork-Submitter: Charlie Jenkins
X-Patchwork-Id: 13341143
From: Charlie Jenkins
Date: Thu, 03 Aug 2023 19:10:31 -0700
Subject: [PATCH 06/10] RISC-V: Refactor patch instructions
Message-Id: <20230803-master-refactor-instructions-v4-v1-6-2128e61fa4ff@rivosinc.com>
To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, bpf@vger.kernel.org

Use shared instruction definitions in insn.h.
Signed-off-by: Charlie Jenkins --- arch/riscv/kernel/patch.c | 3 +- arch/riscv/kernel/probes/kprobes.c | 13 +++---- arch/riscv/kernel/probes/simulate-insn.c | 61 +++++++------------------------- arch/riscv/kernel/probes/uprobes.c | 5 +-- 4 files changed, 25 insertions(+), 57 deletions(-) diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c index 575e71d6c8ae..df51f5155673 100644 --- a/arch/riscv/kernel/patch.c +++ b/arch/riscv/kernel/patch.c @@ -12,6 +12,7 @@ #include #include #include +#include #include struct patch_insn { @@ -118,7 +119,7 @@ static int patch_text_cb(void *data) if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) { for (i = 0; ret == 0 && i < patch->ninsns; i++) { - len = GET_INSN_LENGTH(patch->insns[i]); + len = INSN_LEN(patch->insns[i]); ret = patch_text_nosync(patch->addr + i * len, &patch->insns[i], len); } diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c index 2f08c14a933d..501c6ae4d803 100644 --- a/arch/riscv/kernel/probes/kprobes.c +++ b/arch/riscv/kernel/probes/kprobes.c @@ -12,6 +12,7 @@ #include #include #include +#include #include "decode-insn.h" @@ -24,7 +25,7 @@ post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs *); static void __kprobes arch_prepare_ss_slot(struct kprobe *p) { u32 insn = __BUG_INSN_32; - unsigned long offset = GET_INSN_LENGTH(p->opcode); + unsigned long offset = INSN_LEN(p->opcode); p->ainsn.api.restore = (unsigned long)p->addr + offset; @@ -58,7 +59,7 @@ static bool __kprobes arch_check_kprobe(struct kprobe *p) if (tmp == addr) return true; - tmp += GET_INSN_LENGTH(*(u16 *)tmp); + tmp += INSN_LEN(*(u16 *)tmp); } return false; @@ -76,7 +77,7 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p) /* copy instruction */ p->opcode = (kprobe_opcode_t)(*insn++); - if (GET_INSN_LENGTH(p->opcode) == 4) + if (INSN_LEN(p->opcode) == 4) p->opcode |= (kprobe_opcode_t)(*insn) << 16; /* decode instruction */ @@ -117,8 +118,8 @@ void 
*alloc_insn_page(void) /* install breakpoint in text */ void __kprobes arch_arm_kprobe(struct kprobe *p) { - u32 insn = (p->opcode & __INSN_LENGTH_MASK) == __INSN_LENGTH_32 ? - __BUG_INSN_32 : __BUG_INSN_16; + u32 insn = INSN_IS_C(p->opcode) ? + __BUG_INSN_16 : __BUG_INSN_32; patch_text(p->addr, &insn, 1); } @@ -344,7 +345,7 @@ kprobe_single_step_handler(struct pt_regs *regs) struct kprobe *cur = kprobe_running(); if (cur && (kcb->kprobe_status & (KPROBE_HIT_SS | KPROBE_REENTER)) && - ((unsigned long)&cur->ainsn.api.insn[0] + GET_INSN_LENGTH(cur->opcode) == addr)) { + ((unsigned long)&cur->ainsn.api.insn[0] + INSN_LEN(cur->opcode) == addr)) { kprobes_restore_local_irqflag(kcb, regs); post_kprobe_handler(cur, kcb, regs); return true; diff --git a/arch/riscv/kernel/probes/simulate-insn.c b/arch/riscv/kernel/probes/simulate-insn.c index 994edb4bd16a..f9671bb864a3 100644 --- a/arch/riscv/kernel/probes/simulate-insn.c +++ b/arch/riscv/kernel/probes/simulate-insn.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0+ +#include #include #include #include @@ -16,19 +17,16 @@ bool __kprobes simulate_jal(u32 opcode, unsigned long addr, struct pt_regs *regs * 1 10 1 8 5 JAL/J */ bool ret; - u32 imm; - u32 index = (opcode >> 7) & 0x1f; + s32 imm; + u32 index = riscv_insn_extract_rd(opcode); ret = rv_insn_reg_set_val((unsigned long *)regs, index, addr + 4); if (!ret) return ret; - imm = ((opcode >> 21) & 0x3ff) << 1; - imm |= ((opcode >> 20) & 0x1) << 11; - imm |= ((opcode >> 12) & 0xff) << 12; - imm |= ((opcode >> 31) & 0x1) << 20; + imm = riscv_insn_extract_jtype_imm(opcode); - instruction_pointer_set(regs, addr + sign_extend32((imm), 20)); + instruction_pointer_set(regs, addr + imm); return ret; } @@ -42,9 +40,9 @@ bool __kprobes simulate_jalr(u32 opcode, unsigned long addr, struct pt_regs *reg */ bool ret; unsigned long base_addr; - u32 imm = (opcode >> 20) & 0xfff; - u32 rd_index = (opcode >> 7) & 0x1f; - u32 rs1_index = (opcode >> 15) & 0x1f; + s32 imm = 
riscv_insn_extract_itype_imm(opcode); + u32 rd_index = riscv_insn_extract_rd(opcode); + u32 rs1_index = riscv_insn_extract_rs1(opcode); ret = rv_insn_reg_get_val((unsigned long *)regs, rs1_index, &base_addr); if (!ret) @@ -54,25 +52,11 @@ bool __kprobes simulate_jalr(u32 opcode, unsigned long addr, struct pt_regs *reg if (!ret) return ret; - instruction_pointer_set(regs, (base_addr + sign_extend32((imm), 11))&~1); + instruction_pointer_set(regs, (base_addr + imm) & ~1); return ret; } -#define auipc_rd_idx(opcode) \ - ((opcode >> 7) & 0x1f) - -#define auipc_imm(opcode) \ - ((((opcode) >> 12) & 0xfffff) << 12) - -#if __riscv_xlen == 64 -#define auipc_offset(opcode) sign_extend64(auipc_imm(opcode), 31) -#elif __riscv_xlen == 32 -#define auipc_offset(opcode) auipc_imm(opcode) -#else -#error "Unexpected __riscv_xlen" -#endif - bool __kprobes simulate_auipc(u32 opcode, unsigned long addr, struct pt_regs *regs) { /* @@ -82,35 +66,16 @@ bool __kprobes simulate_auipc(u32 opcode, unsigned long addr, struct pt_regs *re * 20 5 7 */ - u32 rd_idx = auipc_rd_idx(opcode); - unsigned long rd_val = addr + auipc_offset(opcode); + u32 rd_idx = riscv_insn_extract_rd(opcode); + unsigned long rd_val = addr + riscv_insn_extract_utype_imm(opcode); if (!rv_insn_reg_set_val((unsigned long *)regs, rd_idx, rd_val)) return false; instruction_pointer_set(regs, addr + 4); - return true; } -#define branch_rs1_idx(opcode) \ - (((opcode) >> 15) & 0x1f) - -#define branch_rs2_idx(opcode) \ - (((opcode) >> 20) & 0x1f) - -#define branch_funct3(opcode) \ - (((opcode) >> 12) & 0x7) - -#define branch_imm(opcode) \ - (((((opcode) >> 8) & 0xf ) << 1) | \ - ((((opcode) >> 25) & 0x3f) << 5) | \ - ((((opcode) >> 7) & 0x1 ) << 11) | \ - ((((opcode) >> 31) & 0x1 ) << 12)) - -#define branch_offset(opcode) \ - sign_extend32((branch_imm(opcode)), 12) - bool __kprobes simulate_branch(u32 opcode, unsigned long addr, struct pt_regs *regs) { /* @@ -135,8 +100,8 @@ bool __kprobes simulate_branch(u32 opcode, unsigned long 
addr, struct pt_regs *r !rv_insn_reg_get_val((unsigned long *)regs, riscv_insn_extract_rs2(opcode), &rs2_val)) return false; - offset_tmp = branch_offset(opcode); - switch (branch_funct3(opcode)) { + offset_tmp = riscv_insn_extract_btype_imm(opcode); + switch (riscv_insn_extract_funct3(opcode)) { case RVG_FUNCT3_BEQ: offset = (rs1_val == rs2_val) ? offset_tmp : 4; break; diff --git a/arch/riscv/kernel/probes/uprobes.c b/arch/riscv/kernel/probes/uprobes.c index 194f166b2cc4..f2511cbaf931 100644 --- a/arch/riscv/kernel/probes/uprobes.c +++ b/arch/riscv/kernel/probes/uprobes.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0-only +#include #include #include #include @@ -29,7 +30,7 @@ int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, opcode = *(probe_opcode_t *)(&auprobe->insn[0]); - auprobe->insn_size = GET_INSN_LENGTH(opcode); + auprobe->insn_size = INSN_LEN(opcode); switch (riscv_probe_decode_insn(&opcode, &auprobe->api)) { case INSN_REJECTED: @@ -166,7 +167,7 @@ void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr, /* Add ebreak behind opcode to simulate singlestep */ if (vaddr) { - dst += GET_INSN_LENGTH(*(probe_opcode_t *)src); + dst += INSN_LEN(*(probe_opcode_t *)src); *(uprobe_opcode_t *)dst = __BUG_INSN_32; } From patchwork Fri Aug 4 02:10:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Charlie Jenkins X-Patchwork-Id: 13341144 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 086EAC0015E for ; Fri, 4 Aug 2023 02:12:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233358AbjHDCMj (ORCPT ); Thu, 3 Aug 2023 22:12:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37162 "EHLO lindbergh.monkeyblade.net" 
From: Charlie Jenkins
Date: Thu, 03 Aug 2023 19:10:32 -0700
Subject: [PATCH 07/10] RISC-V: nommu: Refactor instructions
Message-Id: <20230803-master-refactor-instructions-v4-v1-7-2128e61fa4ff@rivosinc.com>
References: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
In-Reply-To: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, bpf@vger.kernel.org
Cc: Paul Walmsley, Palmer Dabbelt, Albert Ou, Peter Zijlstra, Josh Poimboeuf, Jason Baron, Steven Rostedt, Ard Biesheuvel, Anup Patel, Atish Patra, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Björn Töpel, Luke Nelson, Xi Wang, Nam Cao, Charlie Jenkins
X-Mailer: b4 0.12.3
X-Mailing-List: kvm@vger.kernel.org

Use shared instruction definitions in insn.h instead of manually constructing them.
Signed-off-by: Charlie Jenkins --- arch/riscv/kernel/traps_misaligned.c | 218 ++++++++--------------------------- 1 file changed, 45 insertions(+), 173 deletions(-) diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c index 378f5b151443..b72045ce432a 100644 --- a/arch/riscv/kernel/traps_misaligned.c +++ b/arch/riscv/kernel/traps_misaligned.c @@ -12,144 +12,10 @@ #include #include #include +#include +#include -#define INSN_MATCH_LB 0x3 -#define INSN_MASK_LB 0x707f -#define INSN_MATCH_LH 0x1003 -#define INSN_MASK_LH 0x707f -#define INSN_MATCH_LW 0x2003 -#define INSN_MASK_LW 0x707f -#define INSN_MATCH_LD 0x3003 -#define INSN_MASK_LD 0x707f -#define INSN_MATCH_LBU 0x4003 -#define INSN_MASK_LBU 0x707f -#define INSN_MATCH_LHU 0x5003 -#define INSN_MASK_LHU 0x707f -#define INSN_MATCH_LWU 0x6003 -#define INSN_MASK_LWU 0x707f -#define INSN_MATCH_SB 0x23 -#define INSN_MASK_SB 0x707f -#define INSN_MATCH_SH 0x1023 -#define INSN_MASK_SH 0x707f -#define INSN_MATCH_SW 0x2023 -#define INSN_MASK_SW 0x707f -#define INSN_MATCH_SD 0x3023 -#define INSN_MASK_SD 0x707f - -#define INSN_MATCH_FLW 0x2007 -#define INSN_MASK_FLW 0x707f -#define INSN_MATCH_FLD 0x3007 -#define INSN_MASK_FLD 0x707f -#define INSN_MATCH_FLQ 0x4007 -#define INSN_MASK_FLQ 0x707f -#define INSN_MATCH_FSW 0x2027 -#define INSN_MASK_FSW 0x707f -#define INSN_MATCH_FSD 0x3027 -#define INSN_MASK_FSD 0x707f -#define INSN_MATCH_FSQ 0x4027 -#define INSN_MASK_FSQ 0x707f - -#define INSN_MATCH_C_LD 0x6000 -#define INSN_MASK_C_LD 0xe003 -#define INSN_MATCH_C_SD 0xe000 -#define INSN_MASK_C_SD 0xe003 -#define INSN_MATCH_C_LW 0x4000 -#define INSN_MASK_C_LW 0xe003 -#define INSN_MATCH_C_SW 0xc000 -#define INSN_MASK_C_SW 0xe003 -#define INSN_MATCH_C_LDSP 0x6002 -#define INSN_MASK_C_LDSP 0xe003 -#define INSN_MATCH_C_SDSP 0xe002 -#define INSN_MASK_C_SDSP 0xe003 -#define INSN_MATCH_C_LWSP 0x4002 -#define INSN_MASK_C_LWSP 0xe003 -#define INSN_MATCH_C_SWSP 0xc002 -#define INSN_MASK_C_SWSP 0xe003 - -#define 
INSN_MATCH_C_FLD 0x2000 -#define INSN_MASK_C_FLD 0xe003 -#define INSN_MATCH_C_FLW 0x6000 -#define INSN_MASK_C_FLW 0xe003 -#define INSN_MATCH_C_FSD 0xa000 -#define INSN_MASK_C_FSD 0xe003 -#define INSN_MATCH_C_FSW 0xe000 -#define INSN_MASK_C_FSW 0xe003 -#define INSN_MATCH_C_FLDSP 0x2002 -#define INSN_MASK_C_FLDSP 0xe003 -#define INSN_MATCH_C_FSDSP 0xa002 -#define INSN_MASK_C_FSDSP 0xe003 -#define INSN_MATCH_C_FLWSP 0x6002 -#define INSN_MASK_C_FLWSP 0xe003 -#define INSN_MATCH_C_FSWSP 0xe002 -#define INSN_MASK_C_FSWSP 0xe003 - -#define INSN_LEN(insn) ((((insn) & 0x3) < 0x3) ? 2 : 4) - -#if defined(CONFIG_64BIT) -#define LOG_REGBYTES 3 -#define XLEN 64 -#else -#define LOG_REGBYTES 2 -#define XLEN 32 -#endif -#define REGBYTES (1 << LOG_REGBYTES) -#define XLEN_MINUS_16 ((XLEN) - 16) - -#define SH_RD 7 -#define SH_RS1 15 -#define SH_RS2 20 -#define SH_RS2C 2 - -#define RV_X(x, s, n) (((x) >> (s)) & ((1 << (n)) - 1)) -#define RVC_LW_IMM(x) ((RV_X(x, 6, 1) << 2) | \ - (RV_X(x, 10, 3) << 3) | \ - (RV_X(x, 5, 1) << 6)) -#define RVC_LD_IMM(x) ((RV_X(x, 10, 3) << 3) | \ - (RV_X(x, 5, 2) << 6)) -#define RVC_LWSP_IMM(x) ((RV_X(x, 4, 3) << 2) | \ - (RV_X(x, 12, 1) << 5) | \ - (RV_X(x, 2, 2) << 6)) -#define RVC_LDSP_IMM(x) ((RV_X(x, 5, 2) << 3) | \ - (RV_X(x, 12, 1) << 5) | \ - (RV_X(x, 2, 3) << 6)) -#define RVC_SWSP_IMM(x) ((RV_X(x, 9, 4) << 2) | \ - (RV_X(x, 7, 2) << 6)) -#define RVC_SDSP_IMM(x) ((RV_X(x, 10, 3) << 3) | \ - (RV_X(x, 7, 3) << 6)) -#define RVC_RS1S(insn) (8 + RV_X(insn, SH_RD, 3)) -#define RVC_RS2S(insn) (8 + RV_X(insn, SH_RS2C, 3)) -#define RVC_RS2(insn) RV_X(insn, SH_RS2C, 5) - -#define SHIFT_RIGHT(x, y) \ - ((y) < 0 ? 
((x) << -(y)) : ((x) >> (y))) - -#define REG_MASK \ - ((1 << (5 + LOG_REGBYTES)) - (1 << LOG_REGBYTES)) - -#define REG_OFFSET(insn, pos) \ - (SHIFT_RIGHT((insn), (pos) - LOG_REGBYTES) & REG_MASK) - -#define REG_PTR(insn, pos, regs) \ - (ulong *)((ulong)(regs) + REG_OFFSET(insn, pos)) - -#define GET_RM(insn) (((insn) >> 12) & 7) - -#define GET_RS1(insn, regs) (*REG_PTR(insn, SH_RS1, regs)) -#define GET_RS2(insn, regs) (*REG_PTR(insn, SH_RS2, regs)) -#define GET_RS1S(insn, regs) (*REG_PTR(RVC_RS1S(insn), 0, regs)) -#define GET_RS2S(insn, regs) (*REG_PTR(RVC_RS2S(insn), 0, regs)) -#define GET_RS2C(insn, regs) (*REG_PTR(insn, SH_RS2C, regs)) -#define GET_SP(regs) (*REG_PTR(2, 0, regs)) -#define SET_RD(insn, regs, val) (*REG_PTR(insn, SH_RD, regs) = (val)) -#define IMM_I(insn) ((s32)(insn) >> 20) -#define IMM_S(insn) (((s32)(insn) >> 25 << 5) | \ - (s32)(((insn) >> 7) & 0x1f)) -#define MASK_FUNCT3 0x7000 - -#define GET_PRECISION(insn) (((insn) >> 25) & 3) -#define GET_RM(insn) (((insn) >> 12) & 7) -#define PRECISION_S 0 -#define PRECISION_D 1 +#define XLEN_MINUS_16 ((__riscv_xlen) - 16) #define DECLARE_UNPRIVILEGED_LOAD_FUNCTION(type, insn) \ static inline type load_##type(const type *addr) \ @@ -245,58 +111,56 @@ int handle_misaligned_load(struct pt_regs *regs) regs->epc = 0; - if ((insn & INSN_MASK_LW) == INSN_MATCH_LW) { + if (riscv_insn_is_lw(insn)) { len = 4; shift = 8 * (sizeof(unsigned long) - len); #if defined(CONFIG_64BIT) - } else if ((insn & INSN_MASK_LD) == INSN_MATCH_LD) { + } else if (riscv_insn_is_ld(insn)) { len = 8; shift = 8 * (sizeof(unsigned long) - len); - } else if ((insn & INSN_MASK_LWU) == INSN_MATCH_LWU) { + } else if (riscv_insn_is_lwu(insn)) { len = 4; #endif - } else if ((insn & INSN_MASK_FLD) == INSN_MATCH_FLD) { + } else if (riscv_insn_is_fld(insn)) { fp = 1; len = 8; - } else if ((insn & INSN_MASK_FLW) == INSN_MATCH_FLW) { + } else if (riscv_insn_is_flw(insn)) { fp = 1; len = 4; - } else if ((insn & INSN_MASK_LH) == INSN_MATCH_LH) { + } 
else if (riscv_insn_is_lh(insn)) { len = 2; shift = 8 * (sizeof(unsigned long) - len); - } else if ((insn & INSN_MASK_LHU) == INSN_MATCH_LHU) { + } else if (riscv_insn_is_lhu(insn)) { len = 2; #if defined(CONFIG_64BIT) - } else if ((insn & INSN_MASK_C_LD) == INSN_MATCH_C_LD) { + } else if (riscv_insn_is_c_ld(insn)) { len = 8; shift = 8 * (sizeof(unsigned long) - len); - insn = RVC_RS2S(insn) << SH_RD; - } else if ((insn & INSN_MASK_C_LDSP) == INSN_MATCH_C_LDSP && - ((insn >> SH_RD) & 0x1f)) { + insn = riscv_insn_extract_csca_rs2(insn); + } else if (riscv_insn_is_c_ldsp(insn) && (RVC_RD_CI(insn))) { len = 8; shift = 8 * (sizeof(unsigned long) - len); #endif - } else if ((insn & INSN_MASK_C_LW) == INSN_MATCH_C_LW) { + } else if (riscv_insn_is_c_lw(insn)) { len = 4; shift = 8 * (sizeof(unsigned long) - len); - insn = RVC_RS2S(insn) << SH_RD; - } else if ((insn & INSN_MASK_C_LWSP) == INSN_MATCH_C_LWSP && - ((insn >> SH_RD) & 0x1f)) { + insn = riscv_insn_extract_csca_rs2(insn); + } else if (riscv_insn_is_c_lwsp(insn) && (RVC_RD_CI(insn))) { len = 4; shift = 8 * (sizeof(unsigned long) - len); - } else if ((insn & INSN_MASK_C_FLD) == INSN_MATCH_C_FLD) { + } else if (riscv_insn_is_c_fld(insn)) { fp = 1; len = 8; - insn = RVC_RS2S(insn) << SH_RD; - } else if ((insn & INSN_MASK_C_FLDSP) == INSN_MATCH_C_FLDSP) { + insn = riscv_insn_extract_csca_rs2(insn); + } else if (riscv_insn_is_c_fldsp(insn)) { fp = 1; len = 8; #if defined(CONFIG_32BIT) - } else if ((insn & INSN_MASK_C_FLW) == INSN_MATCH_C_FLW) { + } else if (riscv_insn_is_c_flw(insn)) { fp = 1; len = 4; - insn = RVC_RS2S(insn) << SH_RD; - } else if ((insn & INSN_MASK_C_FLWSP) == INSN_MATCH_C_FLWSP) { + insn = riscv_insn_extract_csca_rs2(insn); + } else if (riscv_insn_is_c_flwsp(insn)) { fp = 1; len = 4; #endif @@ -311,7 +175,8 @@ int handle_misaligned_load(struct pt_regs *regs) if (fp) return -1; - SET_RD(insn, regs, val.data_ulong << shift >> shift); + rv_insn_reg_set_val((unsigned long *)regs, RV_EXTRACT_RD_REG(insn), 
+ val.data_ulong << shift >> shift); regs->epc = epc + INSN_LEN(insn); @@ -328,32 +193,39 @@ int handle_misaligned_store(struct pt_regs *regs) regs->epc = 0; - val.data_ulong = GET_RS2(insn, regs); + rv_insn_reg_get_val((unsigned long *)regs, riscv_insn_extract_rs2(insn), + &val.data_ulong); - if ((insn & INSN_MASK_SW) == INSN_MATCH_SW) { + if (riscv_insn_is_sw(insn)) { len = 4; #if defined(CONFIG_64BIT) - } else if ((insn & INSN_MASK_SD) == INSN_MATCH_SD) { + } else if (riscv_insn_is_sd(insn)) { len = 8; #endif - } else if ((insn & INSN_MASK_SH) == INSN_MATCH_SH) { + } else if (riscv_insn_is_sh(insn)) { len = 2; #if defined(CONFIG_64BIT) - } else if ((insn & INSN_MASK_C_SD) == INSN_MATCH_C_SD) { + } else if (riscv_insn_is_c_sd(insn)) { len = 8; - val.data_ulong = GET_RS2S(insn, regs); - } else if ((insn & INSN_MASK_C_SDSP) == INSN_MATCH_C_SDSP && - ((insn >> SH_RD) & 0x1f)) { + rv_insn_reg_get_val((unsigned long *)regs, + riscv_insn_extract_cr_rs2(insn), + &val.data_ulong); + } else if (riscv_insn_is_c_sdsp(insn)) { len = 8; - val.data_ulong = GET_RS2C(insn, regs); + rv_insn_reg_get_val((unsigned long *)regs, + riscv_insn_extract_csca_rs2(insn), + &val.data_ulong); #endif - } else if ((insn & INSN_MASK_C_SW) == INSN_MATCH_C_SW) { + } else if (riscv_insn_is_c_sw(insn)) { len = 4; - val.data_ulong = GET_RS2S(insn, regs); - } else if ((insn & INSN_MASK_C_SWSP) == INSN_MATCH_C_SWSP && - ((insn >> SH_RD) & 0x1f)) { + rv_insn_reg_get_val((unsigned long *)regs, + riscv_insn_extract_cr_rs2(insn), + &val.data_ulong); + } else if (riscv_insn_is_c_swsp(insn)) { len = 4; - val.data_ulong = GET_RS2C(insn, regs); + rv_insn_reg_get_val((unsigned long *)regs, + riscv_insn_extract_csca_rs2(insn), + &val.data_ulong); } else { regs->epc = epc; return -1; From patchwork Fri Aug 4 02:10:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Charlie Jenkins X-Patchwork-Id: 13341145 Return-Path: X-Spam-Checker-Version: 
From: Charlie Jenkins
Date: Thu, 03 Aug 2023 19:10:33 -0700
Subject: [PATCH 08/10] RISC-V: kvm: Refactor instructions
Message-Id: <20230803-master-refactor-instructions-v4-v1-8-2128e61fa4ff@rivosinc.com>
References: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
In-Reply-To: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com>
To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, bpf@vger.kernel.org
Cc: Paul Walmsley, Palmer Dabbelt, Albert Ou, Peter Zijlstra, Josh Poimboeuf, Jason Baron, Steven Rostedt, Ard Biesheuvel, Anup Patel, Atish Patra, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Björn Töpel, Luke Nelson, Xi Wang, Nam Cao, Charlie Jenkins
X-Mailer: b4 0.12.3
List-ID:
X-Mailing-List: kvm@vger.kernel.org Use shared instruction definitions in insn.h instead of manually constructing them. Signed-off-by: Charlie Jenkins --- arch/riscv/kvm/vcpu_insn.c | 281 ++++++++++++++------------------------------- 1 file changed, 86 insertions(+), 195 deletions(-) diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c index 7a6abed41bc1..73c7d21b496e 100644 --- a/arch/riscv/kvm/vcpu_insn.c +++ b/arch/riscv/kvm/vcpu_insn.c @@ -6,130 +6,7 @@ #include #include - -#define INSN_OPCODE_MASK 0x007c -#define INSN_OPCODE_SHIFT 2 -#define INSN_OPCODE_SYSTEM 28 - -#define INSN_MASK_WFI 0xffffffff -#define INSN_MATCH_WFI 0x10500073 - -#define INSN_MATCH_CSRRW 0x1073 -#define INSN_MASK_CSRRW 0x707f -#define INSN_MATCH_CSRRS 0x2073 -#define INSN_MASK_CSRRS 0x707f -#define INSN_MATCH_CSRRC 0x3073 -#define INSN_MASK_CSRRC 0x707f -#define INSN_MATCH_CSRRWI 0x5073 -#define INSN_MASK_CSRRWI 0x707f -#define INSN_MATCH_CSRRSI 0x6073 -#define INSN_MASK_CSRRSI 0x707f -#define INSN_MATCH_CSRRCI 0x7073 -#define INSN_MASK_CSRRCI 0x707f - -#define INSN_MATCH_LB 0x3 -#define INSN_MASK_LB 0x707f -#define INSN_MATCH_LH 0x1003 -#define INSN_MASK_LH 0x707f -#define INSN_MATCH_LW 0x2003 -#define INSN_MASK_LW 0x707f -#define INSN_MATCH_LD 0x3003 -#define INSN_MASK_LD 0x707f -#define INSN_MATCH_LBU 0x4003 -#define INSN_MASK_LBU 0x707f -#define INSN_MATCH_LHU 0x5003 -#define INSN_MASK_LHU 0x707f -#define INSN_MATCH_LWU 0x6003 -#define INSN_MASK_LWU 0x707f -#define INSN_MATCH_SB 0x23 -#define INSN_MASK_SB 0x707f -#define INSN_MATCH_SH 0x1023 -#define INSN_MASK_SH 0x707f -#define INSN_MATCH_SW 0x2023 -#define INSN_MASK_SW 0x707f -#define INSN_MATCH_SD 0x3023 -#define INSN_MASK_SD 0x707f - -#define INSN_MATCH_C_LD 0x6000 -#define INSN_MASK_C_LD 0xe003 -#define INSN_MATCH_C_SD 0xe000 -#define INSN_MASK_C_SD 0xe003 -#define INSN_MATCH_C_LW 0x4000 -#define INSN_MASK_C_LW 0xe003 -#define INSN_MATCH_C_SW 0xc000 -#define INSN_MASK_C_SW 0xe003 -#define INSN_MATCH_C_LDSP 0x6002 
-#define INSN_MASK_C_LDSP 0xe003 -#define INSN_MATCH_C_SDSP 0xe002 -#define INSN_MASK_C_SDSP 0xe003 -#define INSN_MATCH_C_LWSP 0x4002 -#define INSN_MASK_C_LWSP 0xe003 -#define INSN_MATCH_C_SWSP 0xc002 -#define INSN_MASK_C_SWSP 0xe003 - -#define INSN_16BIT_MASK 0x3 - -#define INSN_IS_16BIT(insn) (((insn) & INSN_16BIT_MASK) != INSN_16BIT_MASK) - -#define INSN_LEN(insn) (INSN_IS_16BIT(insn) ? 2 : 4) - -#ifdef CONFIG_64BIT -#define LOG_REGBYTES 3 -#else -#define LOG_REGBYTES 2 -#endif -#define REGBYTES (1 << LOG_REGBYTES) - -#define SH_RD 7 -#define SH_RS1 15 -#define SH_RS2 20 -#define SH_RS2C 2 -#define MASK_RX 0x1f - -#define RV_X(x, s, n) (((x) >> (s)) & ((1 << (n)) - 1)) -#define RVC_LW_IMM(x) ((RV_X(x, 6, 1) << 2) | \ - (RV_X(x, 10, 3) << 3) | \ - (RV_X(x, 5, 1) << 6)) -#define RVC_LD_IMM(x) ((RV_X(x, 10, 3) << 3) | \ - (RV_X(x, 5, 2) << 6)) -#define RVC_LWSP_IMM(x) ((RV_X(x, 4, 3) << 2) | \ - (RV_X(x, 12, 1) << 5) | \ - (RV_X(x, 2, 2) << 6)) -#define RVC_LDSP_IMM(x) ((RV_X(x, 5, 2) << 3) | \ - (RV_X(x, 12, 1) << 5) | \ - (RV_X(x, 2, 3) << 6)) -#define RVC_SWSP_IMM(x) ((RV_X(x, 9, 4) << 2) | \ - (RV_X(x, 7, 2) << 6)) -#define RVC_SDSP_IMM(x) ((RV_X(x, 10, 3) << 3) | \ - (RV_X(x, 7, 3) << 6)) -#define RVC_RS1S(insn) (8 + RV_X(insn, SH_RD, 3)) -#define RVC_RS2S(insn) (8 + RV_X(insn, SH_RS2C, 3)) -#define RVC_RS2(insn) RV_X(insn, SH_RS2C, 5) - -#define SHIFT_RIGHT(x, y) \ - ((y) < 0 ? 
((x) << -(y)) : ((x) >> (y))) - -#define REG_MASK \ - ((1 << (5 + LOG_REGBYTES)) - (1 << LOG_REGBYTES)) - -#define REG_OFFSET(insn, pos) \ - (SHIFT_RIGHT((insn), (pos) - LOG_REGBYTES) & REG_MASK) - -#define REG_PTR(insn, pos, regs) \ - ((ulong *)((ulong)(regs) + REG_OFFSET(insn, pos))) - -#define GET_FUNCT3(insn) (((insn) >> 12) & 7) - -#define GET_RS1(insn, regs) (*REG_PTR(insn, SH_RS1, regs)) -#define GET_RS2(insn, regs) (*REG_PTR(insn, SH_RS2, regs)) -#define GET_RS1S(insn, regs) (*REG_PTR(RVC_RS1S(insn), 0, regs)) -#define GET_RS2S(insn, regs) (*REG_PTR(RVC_RS2S(insn), 0, regs)) -#define GET_RS2C(insn, regs) (*REG_PTR(insn, SH_RS2C, regs)) -#define GET_SP(regs) (*REG_PTR(2, 0, regs)) -#define SET_RD(insn, regs, val) (*REG_PTR(insn, SH_RD, regs) = (val)) -#define IMM_I(insn) ((s32)(insn) >> 20) -#define IMM_S(insn) (((s32)(insn) >> 25 << 5) | \ - (s32)(((insn) >> 7) & 0x1f)) +#include struct insn_func { unsigned long mask; @@ -230,6 +107,7 @@ static const struct csr_func csr_funcs[] = { int kvm_riscv_vcpu_csr_return(struct kvm_vcpu *vcpu, struct kvm_run *run) { ulong insn; + u32 index; if (vcpu->arch.csr_decode.return_handled) return 0; @@ -237,9 +115,10 @@ int kvm_riscv_vcpu_csr_return(struct kvm_vcpu *vcpu, struct kvm_run *run) /* Update destination register for CSR reads */ insn = vcpu->arch.csr_decode.insn; - if ((insn >> SH_RD) & MASK_RX) - SET_RD(insn, &vcpu->arch.guest_context, - run->riscv_csr.ret_value); + riscv_insn_extract_rd(insn); + if (index) + rv_insn_reg_set_val((unsigned long *)&vcpu->arch.guest_context, + index, run->riscv_csr.ret_value); /* Move to next instruction */ vcpu->arch.guest_context.sepc += INSN_LEN(insn); @@ -249,36 +128,39 @@ int kvm_riscv_vcpu_csr_return(struct kvm_vcpu *vcpu, struct kvm_run *run) static int csr_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn) { + ulong rs1_val; int i, rc = KVM_INSN_ILLEGAL_TRAP; - unsigned int csr_num = insn >> SH_RS2; - unsigned int rs1_num = (insn >> SH_RS1) & MASK_RX; - ulong 
rs1_val = GET_RS1(insn, &vcpu->arch.guest_context); + unsigned int csr_num = insn >> RV_I_IMM_11_0_OPOFF; + unsigned int rs1_num = riscv_insn_extract_rs1(insn); const struct csr_func *tcfn, *cfn = NULL; ulong val = 0, wr_mask = 0, new_val = 0; + rv_insn_reg_get_val((unsigned long *)&vcpu->arch.guest_context, + riscv_insn_extract_rs1(insn), &rs1_val); + /* Decode the CSR instruction */ - switch (GET_FUNCT3(insn)) { - case GET_FUNCT3(INSN_MATCH_CSRRW): + switch (riscv_insn_extract_funct3(insn)) { + case RVG_FUNCT3_CSRRW: wr_mask = -1UL; new_val = rs1_val; break; - case GET_FUNCT3(INSN_MATCH_CSRRS): + case RVG_FUNCT3_CSRRS: wr_mask = rs1_val; new_val = -1UL; break; - case GET_FUNCT3(INSN_MATCH_CSRRC): + case RVG_FUNCT3_CSRRC: wr_mask = rs1_val; new_val = 0; break; - case GET_FUNCT3(INSN_MATCH_CSRRWI): + case RVG_FUNCT3_CSRRWI: wr_mask = -1UL; new_val = rs1_num; break; - case GET_FUNCT3(INSN_MATCH_CSRRSI): + case RVG_FUNCT3_CSRRSI: wr_mask = rs1_num; new_val = -1UL; break; - case GET_FUNCT3(INSN_MATCH_CSRRCI): + case RVG_FUNCT3_CSRRCI: wr_mask = rs1_num; new_val = 0; break; @@ -331,38 +213,38 @@ static int csr_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn) static const struct insn_func system_opcode_funcs[] = { { - .mask = INSN_MASK_CSRRW, - .match = INSN_MATCH_CSRRW, + .mask = RVG_MASK_CSRRW, + .match = RVG_MATCH_CSRRW, .func = csr_insn, }, { - .mask = INSN_MASK_CSRRS, - .match = INSN_MATCH_CSRRS, + .mask = RVG_MASK_CSRRS, + .match = RVG_MATCH_CSRRS, .func = csr_insn, }, { - .mask = INSN_MASK_CSRRC, - .match = INSN_MATCH_CSRRC, + .mask = RVG_MASK_CSRRC, + .match = RVG_MATCH_CSRRC, .func = csr_insn, }, { - .mask = INSN_MASK_CSRRWI, - .match = INSN_MATCH_CSRRWI, + .mask = RVG_MASK_CSRRWI, + .match = RVG_MATCH_CSRRWI, .func = csr_insn, }, { - .mask = INSN_MASK_CSRRSI, - .match = INSN_MATCH_CSRRSI, + .mask = RVG_MASK_CSRRSI, + .match = RVG_MATCH_CSRRSI, .func = csr_insn, }, { - .mask = INSN_MASK_CSRRCI, - .match = INSN_MATCH_CSRRCI, + .mask = 
RVG_MASK_CSRRCI, + .match = RVG_MATCH_CSRRCI, .func = csr_insn, }, { - .mask = INSN_MASK_WFI, - .match = INSN_MATCH_WFI, + .mask = RV_MASK_WFI, + .match = RV_MATCH_WFI, .func = wfi_insn, }, }; @@ -414,7 +296,7 @@ int kvm_riscv_vcpu_virtual_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, struct kvm_cpu_trap utrap = { 0 }; struct kvm_cpu_context *ct; - if (unlikely(INSN_IS_16BIT(insn))) { + if (unlikely(INSN_IS_C(insn))) { if (insn == 0) { ct = &vcpu->arch.guest_context; insn = kvm_riscv_vcpu_unpriv_read(vcpu, true, @@ -426,12 +308,12 @@ int kvm_riscv_vcpu_virtual_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, return 1; } } - if (INSN_IS_16BIT(insn)) + if (INSN_IS_C(insn)) return truly_illegal_insn(vcpu, run, insn); } - switch ((insn & INSN_OPCODE_MASK) >> INSN_OPCODE_SHIFT) { - case INSN_OPCODE_SYSTEM: + switch (insn) { + case RVG_OPCODE_SYSTEM: return system_opcode_insn(vcpu, run, insn); default: return truly_illegal_insn(vcpu, run, insn); @@ -466,7 +348,7 @@ int kvm_riscv_vcpu_mmio_load(struct kvm_vcpu *vcpu, struct kvm_run *run, * Bit[0] == 1 implies trapped instruction value is * transformed instruction or custom instruction. */ - insn = htinst | INSN_16BIT_MASK; + insn = htinst | INSN_C_MASK; insn_len = (htinst & BIT(1)) ? 
INSN_LEN(insn) : 2; } else { /* @@ -485,43 +367,43 @@ int kvm_riscv_vcpu_mmio_load(struct kvm_vcpu *vcpu, struct kvm_run *run, } /* Decode length of MMIO and shift */ - if ((insn & INSN_MASK_LW) == INSN_MATCH_LW) { + if (riscv_insn_is_lw(insn)) { len = 4; shift = 8 * (sizeof(ulong) - len); - } else if ((insn & INSN_MASK_LB) == INSN_MATCH_LB) { + } else if (riscv_insn_is_lb(insn)) { len = 1; shift = 8 * (sizeof(ulong) - len); - } else if ((insn & INSN_MASK_LBU) == INSN_MATCH_LBU) { + } else if (riscv_insn_is_lbu(insn)) { len = 1; shift = 8 * (sizeof(ulong) - len); #ifdef CONFIG_64BIT - } else if ((insn & INSN_MASK_LD) == INSN_MATCH_LD) { + } else if (riscv_insn_is_ld(insn)) { len = 8; shift = 8 * (sizeof(ulong) - len); - } else if ((insn & INSN_MASK_LWU) == INSN_MATCH_LWU) { + } else if (riscv_insn_is_lwu(insn)) { len = 4; #endif - } else if ((insn & INSN_MASK_LH) == INSN_MATCH_LH) { + } else if (riscv_insn_is_lh(insn)) { len = 2; shift = 8 * (sizeof(ulong) - len); - } else if ((insn & INSN_MASK_LHU) == INSN_MATCH_LHU) { + } else if (riscv_insn_is_lhu(insn)) { len = 2; #ifdef CONFIG_64BIT - } else if ((insn & INSN_MASK_C_LD) == INSN_MATCH_C_LD) { + } else if (riscv_insn_is_c_ld(insn)) { len = 8; shift = 8 * (sizeof(ulong) - len); - insn = RVC_RS2S(insn) << SH_RD; - } else if ((insn & INSN_MASK_C_LDSP) == INSN_MATCH_C_LDSP && - ((insn >> SH_RD) & 0x1f)) { + insn = riscv_insn_extract_csca_rs2(insn) << RVG_RD_OPOFF; + } else if (riscv_insn_is_c_ldsp(insn) && + riscv_insn_extract_rd(insn)) { len = 8; shift = 8 * (sizeof(ulong) - len); #endif - } else if ((insn & INSN_MASK_C_LW) == INSN_MATCH_C_LW) { + } else if (riscv_insn_is_c_lw(insn)) { len = 4; shift = 8 * (sizeof(ulong) - len); - insn = RVC_RS2S(insn) << SH_RD; - } else if ((insn & INSN_MASK_C_LWSP) == INSN_MATCH_C_LWSP && - ((insn >> SH_RD) & 0x1f)) { + insn = riscv_insn_extract_csca_rs2(insn) << RVG_RD_OPOFF; + } else if (riscv_insn_is_c_lwsp(insn) && + riscv_insn_extract_rd(insn)) { len = 4; shift = 8 * 
(sizeof(ulong) - len); } else { @@ -592,7 +474,7 @@ int kvm_riscv_vcpu_mmio_store(struct kvm_vcpu *vcpu, struct kvm_run *run, * Bit[0] == 1 implies trapped instruction value is * transformed instruction or custom instruction. */ - insn = htinst | INSN_16BIT_MASK; + insn = htinst | INSN_C_MASK; insn_len = (htinst & BIT(1)) ? INSN_LEN(insn) : 2; } else { /* @@ -610,35 +492,42 @@ int kvm_riscv_vcpu_mmio_store(struct kvm_vcpu *vcpu, struct kvm_run *run, insn_len = INSN_LEN(insn); } - data = GET_RS2(insn, &vcpu->arch.guest_context); + rv_insn_reg_get_val((unsigned long *)&vcpu->arch.guest_context, + riscv_insn_extract_rs1(insn), &data); data8 = data16 = data32 = data64 = data; - if ((insn & INSN_MASK_SW) == INSN_MATCH_SW) { + if (riscv_insn_is_sw(insn)) { len = 4; - } else if ((insn & INSN_MASK_SB) == INSN_MATCH_SB) { + } else if (riscv_insn_is_sb(insn)) { len = 1; #ifdef CONFIG_64BIT - } else if ((insn & INSN_MASK_SD) == INSN_MATCH_SD) { + } else if (riscv_insn_is_sd(insn)) { len = 8; #endif - } else if ((insn & INSN_MASK_SH) == INSN_MATCH_SH) { + } else if (riscv_insn_is_sh(insn)) { len = 2; #ifdef CONFIG_64BIT - } else if ((insn & INSN_MASK_C_SD) == INSN_MATCH_C_SD) { + } else if (riscv_insn_is_c_sd(insn)) { len = 8; - data64 = GET_RS2S(insn, &vcpu->arch.guest_context); - } else if ((insn & INSN_MASK_C_SDSP) == INSN_MATCH_C_SDSP && - ((insn >> SH_RD) & 0x1f)) { + rv_insn_reg_get_val((unsigned long *)&vcpu->arch.guest_context, + riscv_insn_extract_rs2(insn), + (unsigned long *)&data64); + } else if (riscv_insn_is_c_sdsp(insn) && riscv_insn_extract_rd(insn)) { len = 8; - data64 = GET_RS2C(insn, &vcpu->arch.guest_context); + rv_insn_reg_get_val((unsigned long *)&vcpu->arch.guest_context, + riscv_insn_extract_csca_rs2(insn), + (unsigned long *)&data64); #endif - } else if ((insn & INSN_MASK_C_SW) == INSN_MATCH_C_SW) { + } else if (riscv_insn_is_c_sw(insn)) { len = 4; - data32 = GET_RS2S(insn, &vcpu->arch.guest_context); - } else if ((insn & INSN_MASK_C_SWSP) == 
INSN_MATCH_C_SWSP && - ((insn >> SH_RD) & 0x1f)) { + rv_insn_reg_get_val((unsigned long *)&vcpu->arch.guest_context, + riscv_insn_extract_rs2(insn), + (unsigned long *)&data32); + } else if (riscv_insn_is_c_swsp(insn) && riscv_insn_extract_rd(insn)) { len = 4; - data32 = GET_RS2C(insn, &vcpu->arch.guest_context); + rv_insn_reg_get_val((unsigned long *)&vcpu->arch.guest_context, + riscv_insn_extract_csca_rs2(insn), + (unsigned long *)&data32); } else { return -EOPNOTSUPP; } @@ -707,6 +596,7 @@ int kvm_riscv_vcpu_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run) u32 data32; u64 data64; ulong insn; + u32 index; int len, shift; if (vcpu->arch.mmio_decode.return_handled) @@ -720,27 +610,28 @@ int kvm_riscv_vcpu_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run) len = vcpu->arch.mmio_decode.len; shift = vcpu->arch.mmio_decode.shift; + index = riscv_insn_extract_rd(insn); switch (len) { case 1: data8 = *((u8 *)run->mmio.data); - SET_RD(insn, &vcpu->arch.guest_context, - (ulong)data8 << shift >> shift); + rv_insn_reg_set_val((unsigned long *)&vcpu->arch.guest_context, + index, (ulong)data8 << shift >> shift); break; case 2: data16 = *((u16 *)run->mmio.data); - SET_RD(insn, &vcpu->arch.guest_context, - (ulong)data16 << shift >> shift); + rv_insn_reg_set_val((unsigned long *)&vcpu->arch.guest_context, + index, (ulong)data16 << shift >> shift); break; case 4: data32 = *((u32 *)run->mmio.data); - SET_RD(insn, &vcpu->arch.guest_context, - (ulong)data32 << shift >> shift); + rv_insn_reg_set_val((unsigned long *)&vcpu->arch.guest_context, + index, (ulong)data32 << shift >> shift); break; case 8: data64 = *((u64 *)run->mmio.data); - SET_RD(insn, &vcpu->arch.guest_context, - (ulong)data64 << shift >> shift); + rv_insn_reg_set_val((unsigned long *)&vcpu->arch.guest_context, + index, (ulong)data64 << shift >> shift); break; default: return -EOPNOTSUPP; From patchwork Fri Aug 4 02:10:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Charlie Jenkins X-Patchwork-Id: 13341146 From: Charlie Jenkins Date: Thu, 03 Aug 2023 19:10:34 -0700 Subject: [PATCH 09/10] RISC-V: bpf: Refactor instructions MIME-Version: 1.0 Message-Id: <20230803-master-refactor-instructions-v4-v1-9-2128e61fa4ff@rivosinc.com> References: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com> In-Reply-To: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com> To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, bpf@vger.kernel.org Cc: Paul Walmsley , Palmer Dabbelt , Albert Ou , Peter Zijlstra , Josh Poimboeuf , Jason Baron , Steven Rostedt , Ard Biesheuvel , Anup Patel , Atish Patra , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa , Björn Töpel , Luke Nelson ,
Xi Wang , Nam Cao , Charlie Jenkins X-Mailer: b4 0.12.3 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Use shared instruction definitions in insn.h instead of manually constructing them. Signed-off-by: Charlie Jenkins --- arch/riscv/net/bpf_jit.h | 707 +---------------------------------------------- 1 file changed, 2 insertions(+), 705 deletions(-) diff --git a/arch/riscv/net/bpf_jit.h b/arch/riscv/net/bpf_jit.h index 2717f5490428..3f79c938166d 100644 --- a/arch/riscv/net/bpf_jit.h +++ b/arch/riscv/net/bpf_jit.h @@ -12,58 +12,8 @@ #include #include #include - -static inline bool rvc_enabled(void) -{ - return IS_ENABLED(CONFIG_RISCV_ISA_C); -} - -enum { - RV_REG_ZERO = 0, /* The constant value 0 */ - RV_REG_RA = 1, /* Return address */ - RV_REG_SP = 2, /* Stack pointer */ - RV_REG_GP = 3, /* Global pointer */ - RV_REG_TP = 4, /* Thread pointer */ - RV_REG_T0 = 5, /* Temporaries */ - RV_REG_T1 = 6, - RV_REG_T2 = 7, - RV_REG_FP = 8, /* Saved register/frame pointer */ - RV_REG_S1 = 9, /* Saved register */ - RV_REG_A0 = 10, /* Function argument/return values */ - RV_REG_A1 = 11, /* Function arguments */ - RV_REG_A2 = 12, - RV_REG_A3 = 13, - RV_REG_A4 = 14, - RV_REG_A5 = 15, - RV_REG_A6 = 16, - RV_REG_A7 = 17, - RV_REG_S2 = 18, /* Saved registers */ - RV_REG_S3 = 19, - RV_REG_S4 = 20, - RV_REG_S5 = 21, - RV_REG_S6 = 22, - RV_REG_S7 = 23, - RV_REG_S8 = 24, - RV_REG_S9 = 25, - RV_REG_S10 = 26, - RV_REG_S11 = 27, - RV_REG_T3 = 28, /* Temporaries */ - RV_REG_T4 = 29, - RV_REG_T5 = 30, - RV_REG_T6 = 31, -}; - -static inline bool is_creg(u8 reg) -{ - return (1 << reg) & (BIT(RV_REG_FP) | - BIT(RV_REG_S1) | - BIT(RV_REG_A0) | - BIT(RV_REG_A1) | - BIT(RV_REG_A2) | - BIT(RV_REG_A3) | - BIT(RV_REG_A4) | - BIT(RV_REG_A5)); -} +#include +#include struct rv_jit_context { struct bpf_prog *prog; @@ -221,659 +171,6 @@ static inline int rv_offset(int insn, int off, struct rv_jit_context *ctx) return ninsns_rvoff(to - from); } -/* Instruction formats. 
*/ - -static inline u32 rv_r_insn(u8 funct7, u8 rs2, u8 rs1, u8 funct3, u8 rd, - u8 opcode) -{ - return (funct7 << 25) | (rs2 << 20) | (rs1 << 15) | (funct3 << 12) | - (rd << 7) | opcode; -} - -static inline u32 rv_i_insn(u16 imm11_0, u8 rs1, u8 funct3, u8 rd, u8 opcode) -{ - return (imm11_0 << 20) | (rs1 << 15) | (funct3 << 12) | (rd << 7) | - opcode; -} - -static inline u32 rv_s_insn(u16 imm11_0, u8 rs2, u8 rs1, u8 funct3, u8 opcode) -{ - u8 imm11_5 = imm11_0 >> 5, imm4_0 = imm11_0 & 0x1f; - - return (imm11_5 << 25) | (rs2 << 20) | (rs1 << 15) | (funct3 << 12) | - (imm4_0 << 7) | opcode; -} - -static inline u32 rv_b_insn(u16 imm12_1, u8 rs2, u8 rs1, u8 funct3, u8 opcode) -{ - u8 imm12 = ((imm12_1 & 0x800) >> 5) | ((imm12_1 & 0x3f0) >> 4); - u8 imm4_1 = ((imm12_1 & 0xf) << 1) | ((imm12_1 & 0x400) >> 10); - - return (imm12 << 25) | (rs2 << 20) | (rs1 << 15) | (funct3 << 12) | - (imm4_1 << 7) | opcode; -} - -static inline u32 rv_u_insn(u32 imm31_12, u8 rd, u8 opcode) -{ - return (imm31_12 << 12) | (rd << 7) | opcode; -} - -static inline u32 rv_j_insn(u32 imm20_1, u8 rd, u8 opcode) -{ - u32 imm; - - imm = (imm20_1 & 0x80000) | ((imm20_1 & 0x3ff) << 9) | - ((imm20_1 & 0x400) >> 2) | ((imm20_1 & 0x7f800) >> 11); - - return (imm << 12) | (rd << 7) | opcode; -} - -static inline u32 rv_amo_insn(u8 funct5, u8 aq, u8 rl, u8 rs2, u8 rs1, - u8 funct3, u8 rd, u8 opcode) -{ - u8 funct7 = (funct5 << 2) | (aq << 1) | rl; - - return rv_r_insn(funct7, rs2, rs1, funct3, rd, opcode); -} - -/* RISC-V compressed instruction formats. 
*/ - -static inline u16 rv_cr_insn(u8 funct4, u8 rd, u8 rs2, u8 op) -{ - return (funct4 << 12) | (rd << 7) | (rs2 << 2) | op; -} - -static inline u16 rv_ci_insn(u8 funct3, u32 imm6, u8 rd, u8 op) -{ - u32 imm; - - imm = ((imm6 & 0x20) << 7) | ((imm6 & 0x1f) << 2); - return (funct3 << 13) | (rd << 7) | op | imm; -} - -static inline u16 rv_css_insn(u8 funct3, u32 uimm, u8 rs2, u8 op) -{ - return (funct3 << 13) | (uimm << 7) | (rs2 << 2) | op; -} - -static inline u16 rv_ciw_insn(u8 funct3, u32 uimm, u8 rd, u8 op) -{ - return (funct3 << 13) | (uimm << 5) | ((rd & 0x7) << 2) | op; -} - -static inline u16 rv_cl_insn(u8 funct3, u32 imm_hi, u8 rs1, u32 imm_lo, u8 rd, - u8 op) -{ - return (funct3 << 13) | (imm_hi << 10) | ((rs1 & 0x7) << 7) | - (imm_lo << 5) | ((rd & 0x7) << 2) | op; -} - -static inline u16 rv_cs_insn(u8 funct3, u32 imm_hi, u8 rs1, u32 imm_lo, u8 rs2, - u8 op) -{ - return (funct3 << 13) | (imm_hi << 10) | ((rs1 & 0x7) << 7) | - (imm_lo << 5) | ((rs2 & 0x7) << 2) | op; -} - -static inline u16 rv_ca_insn(u8 funct6, u8 rd, u8 funct2, u8 rs2, u8 op) -{ - return (funct6 << 10) | ((rd & 0x7) << 7) | (funct2 << 5) | - ((rs2 & 0x7) << 2) | op; -} - -static inline u16 rv_cb_insn(u8 funct3, u32 imm6, u8 funct2, u8 rd, u8 op) -{ - u32 imm; - - imm = ((imm6 & 0x20) << 7) | ((imm6 & 0x1f) << 2); - return (funct3 << 13) | (funct2 << 10) | ((rd & 0x7) << 7) | op | imm; -} - -/* Instructions shared by both RV32 and RV64. 
*/ - -static inline u32 rv_addi(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 0, rd, 0x13); -} - -static inline u32 rv_andi(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 7, rd, 0x13); -} - -static inline u32 rv_ori(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 6, rd, 0x13); -} - -static inline u32 rv_xori(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 4, rd, 0x13); -} - -static inline u32 rv_slli(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 1, rd, 0x13); -} - -static inline u32 rv_srli(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 5, rd, 0x13); -} - -static inline u32 rv_srai(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(0x400 | imm11_0, rs1, 5, rd, 0x13); -} - -static inline u32 rv_lui(u8 rd, u32 imm31_12) -{ - return rv_u_insn(imm31_12, rd, 0x37); -} - -static inline u32 rv_auipc(u8 rd, u32 imm31_12) -{ - return rv_u_insn(imm31_12, rd, 0x17); -} - -static inline u32 rv_add(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 0, rd, 0x33); -} - -static inline u32 rv_sub(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0x20, rs2, rs1, 0, rd, 0x33); -} - -static inline u32 rv_sltu(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 3, rd, 0x33); -} - -static inline u32 rv_and(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 7, rd, 0x33); -} - -static inline u32 rv_or(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 6, rd, 0x33); -} - -static inline u32 rv_xor(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 4, rd, 0x33); -} - -static inline u32 rv_sll(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 1, rd, 0x33); -} - -static inline u32 rv_srl(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 5, rd, 0x33); -} - -static inline u32 rv_sra(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0x20, rs2, rs1, 5, rd, 0x33); -} - -static inline u32 rv_mul(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(1, rs2, rs1, 0, rd, 
0x33); -} - -static inline u32 rv_mulhu(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(1, rs2, rs1, 3, rd, 0x33); -} - -static inline u32 rv_divu(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(1, rs2, rs1, 5, rd, 0x33); -} - -static inline u32 rv_remu(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(1, rs2, rs1, 7, rd, 0x33); -} - -static inline u32 rv_jal(u8 rd, u32 imm20_1) -{ - return rv_j_insn(imm20_1, rd, 0x6f); -} - -static inline u32 rv_jalr(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 0, rd, 0x67); -} - -static inline u32 rv_beq(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_b_insn(imm12_1, rs2, rs1, 0, 0x63); -} - -static inline u32 rv_bne(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_b_insn(imm12_1, rs2, rs1, 1, 0x63); -} - -static inline u32 rv_bltu(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_b_insn(imm12_1, rs2, rs1, 6, 0x63); -} - -static inline u32 rv_bgtu(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_bltu(rs2, rs1, imm12_1); -} - -static inline u32 rv_bgeu(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_b_insn(imm12_1, rs2, rs1, 7, 0x63); -} - -static inline u32 rv_bleu(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_bgeu(rs2, rs1, imm12_1); -} - -static inline u32 rv_blt(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_b_insn(imm12_1, rs2, rs1, 4, 0x63); -} - -static inline u32 rv_bgt(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_blt(rs2, rs1, imm12_1); -} - -static inline u32 rv_bge(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_b_insn(imm12_1, rs2, rs1, 5, 0x63); -} - -static inline u32 rv_ble(u8 rs1, u8 rs2, u16 imm12_1) -{ - return rv_bge(rs2, rs1, imm12_1); -} - -static inline u32 rv_lw(u8 rd, u16 imm11_0, u8 rs1) -{ - return rv_i_insn(imm11_0, rs1, 2, rd, 0x03); -} - -static inline u32 rv_lbu(u8 rd, u16 imm11_0, u8 rs1) -{ - return rv_i_insn(imm11_0, rs1, 4, rd, 0x03); -} - -static inline u32 rv_lhu(u8 rd, u16 imm11_0, u8 rs1) -{ - return rv_i_insn(imm11_0, rs1, 5, rd, 0x03); -} - -static inline u32 rv_sb(u8 rs1, u16 imm11_0, u8 rs2) -{ - return 
rv_s_insn(imm11_0, rs2, rs1, 0, 0x23); -} - -static inline u32 rv_sh(u8 rs1, u16 imm11_0, u8 rs2) -{ - return rv_s_insn(imm11_0, rs2, rs1, 1, 0x23); -} - -static inline u32 rv_sw(u8 rs1, u16 imm11_0, u8 rs2) -{ - return rv_s_insn(imm11_0, rs2, rs1, 2, 0x23); -} - -static inline u32 rv_amoadd_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0, aq, rl, rs2, rs1, 2, rd, 0x2f); -} - -static inline u32 rv_amoand_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0xc, aq, rl, rs2, rs1, 2, rd, 0x2f); -} - -static inline u32 rv_amoor_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x8, aq, rl, rs2, rs1, 2, rd, 0x2f); -} - -static inline u32 rv_amoxor_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x4, aq, rl, rs2, rs1, 2, rd, 0x2f); -} - -static inline u32 rv_amoswap_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x1, aq, rl, rs2, rs1, 2, rd, 0x2f); -} - -static inline u32 rv_lr_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x2, aq, rl, rs2, rs1, 2, rd, 0x2f); -} - -static inline u32 rv_sc_w(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x3, aq, rl, rs2, rs1, 2, rd, 0x2f); -} - -static inline u32 rv_fence(u8 pred, u8 succ) -{ - u16 imm11_0 = pred << 4 | succ; - - return rv_i_insn(imm11_0, 0, 0, 0, 0xf); -} - -static inline u32 rv_nop(void) -{ - return rv_i_insn(0, 0, 0, 0, 0x13); -} - -/* RVC instrutions. 
*/ - -static inline u16 rvc_addi4spn(u8 rd, u32 imm10) -{ - u32 imm; - - imm = ((imm10 & 0x30) << 2) | ((imm10 & 0x3c0) >> 4) | - ((imm10 & 0x4) >> 1) | ((imm10 & 0x8) >> 3); - return rv_ciw_insn(0x0, imm, rd, 0x0); -} - -static inline u16 rvc_lw(u8 rd, u32 imm7, u8 rs1) -{ - u32 imm_hi, imm_lo; - - imm_hi = (imm7 & 0x38) >> 3; - imm_lo = ((imm7 & 0x4) >> 1) | ((imm7 & 0x40) >> 6); - return rv_cl_insn(0x2, imm_hi, rs1, imm_lo, rd, 0x0); -} - -static inline u16 rvc_sw(u8 rs1, u32 imm7, u8 rs2) -{ - u32 imm_hi, imm_lo; - - imm_hi = (imm7 & 0x38) >> 3; - imm_lo = ((imm7 & 0x4) >> 1) | ((imm7 & 0x40) >> 6); - return rv_cs_insn(0x6, imm_hi, rs1, imm_lo, rs2, 0x0); -} - -static inline u16 rvc_addi(u8 rd, u32 imm6) -{ - return rv_ci_insn(0, imm6, rd, 0x1); -} - -static inline u16 rvc_li(u8 rd, u32 imm6) -{ - return rv_ci_insn(0x2, imm6, rd, 0x1); -} - -static inline u16 rvc_addi16sp(u32 imm10) -{ - u32 imm; - - imm = ((imm10 & 0x200) >> 4) | (imm10 & 0x10) | ((imm10 & 0x40) >> 3) | - ((imm10 & 0x180) >> 6) | ((imm10 & 0x20) >> 5); - return rv_ci_insn(0x3, imm, RV_REG_SP, 0x1); -} - -static inline u16 rvc_lui(u8 rd, u32 imm6) -{ - return rv_ci_insn(0x3, imm6, rd, 0x1); -} - -static inline u16 rvc_srli(u8 rd, u32 imm6) -{ - return rv_cb_insn(0x4, imm6, 0, rd, 0x1); -} - -static inline u16 rvc_srai(u8 rd, u32 imm6) -{ - return rv_cb_insn(0x4, imm6, 0x1, rd, 0x1); -} - -static inline u16 rvc_andi(u8 rd, u32 imm6) -{ - return rv_cb_insn(0x4, imm6, 0x2, rd, 0x1); -} - -static inline u16 rvc_sub(u8 rd, u8 rs) -{ - return rv_ca_insn(0x23, rd, 0, rs, 0x1); -} - -static inline u16 rvc_xor(u8 rd, u8 rs) -{ - return rv_ca_insn(0x23, rd, 0x1, rs, 0x1); -} - -static inline u16 rvc_or(u8 rd, u8 rs) -{ - return rv_ca_insn(0x23, rd, 0x2, rs, 0x1); -} - -static inline u16 rvc_and(u8 rd, u8 rs) -{ - return rv_ca_insn(0x23, rd, 0x3, rs, 0x1); -} - -static inline u16 rvc_slli(u8 rd, u32 imm6) -{ - return rv_ci_insn(0, imm6, rd, 0x2); -} - -static inline u16 rvc_lwsp(u8 rd, u32 imm8) -{ - u32 
imm; - - imm = ((imm8 & 0xc0) >> 6) | (imm8 & 0x3c); - return rv_ci_insn(0x2, imm, rd, 0x2); -} - -static inline u16 rvc_jr(u8 rs1) -{ - return rv_cr_insn(0x8, rs1, RV_REG_ZERO, 0x2); -} - -static inline u16 rvc_mv(u8 rd, u8 rs) -{ - return rv_cr_insn(0x8, rd, rs, 0x2); -} - -static inline u16 rvc_jalr(u8 rs1) -{ - return rv_cr_insn(0x9, rs1, RV_REG_ZERO, 0x2); -} - -static inline u16 rvc_add(u8 rd, u8 rs) -{ - return rv_cr_insn(0x9, rd, rs, 0x2); -} - -static inline u16 rvc_swsp(u32 imm8, u8 rs2) -{ - u32 imm; - - imm = (imm8 & 0x3c) | ((imm8 & 0xc0) >> 6); - return rv_css_insn(0x6, imm, rs2, 0x2); -} - -/* - * RV64-only instructions. - * - * These instructions are not available on RV32. Wrap them below a #if to - * ensure that the RV32 JIT doesn't emit any of these instructions. - */ - -#if __riscv_xlen == 64 - -static inline u32 rv_addiw(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 0, rd, 0x1b); -} - -static inline u32 rv_slliw(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 1, rd, 0x1b); -} - -static inline u32 rv_srliw(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(imm11_0, rs1, 5, rd, 0x1b); -} - -static inline u32 rv_sraiw(u8 rd, u8 rs1, u16 imm11_0) -{ - return rv_i_insn(0x400 | imm11_0, rs1, 5, rd, 0x1b); -} - -static inline u32 rv_addw(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 0, rd, 0x3b); -} - -static inline u32 rv_subw(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0x20, rs2, rs1, 0, rd, 0x3b); -} - -static inline u32 rv_sllw(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 1, rd, 0x3b); -} - -static inline u32 rv_srlw(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0, rs2, rs1, 5, rd, 0x3b); -} - -static inline u32 rv_sraw(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(0x20, rs2, rs1, 5, rd, 0x3b); -} - -static inline u32 rv_mulw(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(1, rs2, rs1, 0, rd, 0x3b); -} - -static inline u32 rv_divuw(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(1, rs2, rs1, 5, rd, 
0x3b); -} - -static inline u32 rv_remuw(u8 rd, u8 rs1, u8 rs2) -{ - return rv_r_insn(1, rs2, rs1, 7, rd, 0x3b); -} - -static inline u32 rv_ld(u8 rd, u16 imm11_0, u8 rs1) -{ - return rv_i_insn(imm11_0, rs1, 3, rd, 0x03); -} - -static inline u32 rv_lwu(u8 rd, u16 imm11_0, u8 rs1) -{ - return rv_i_insn(imm11_0, rs1, 6, rd, 0x03); -} - -static inline u32 rv_sd(u8 rs1, u16 imm11_0, u8 rs2) -{ - return rv_s_insn(imm11_0, rs2, rs1, 3, 0x23); -} - -static inline u32 rv_amoadd_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0, aq, rl, rs2, rs1, 3, rd, 0x2f); -} - -static inline u32 rv_amoand_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0xc, aq, rl, rs2, rs1, 3, rd, 0x2f); -} - -static inline u32 rv_amoor_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x8, aq, rl, rs2, rs1, 3, rd, 0x2f); -} - -static inline u32 rv_amoxor_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x4, aq, rl, rs2, rs1, 3, rd, 0x2f); -} - -static inline u32 rv_amoswap_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x1, aq, rl, rs2, rs1, 3, rd, 0x2f); -} - -static inline u32 rv_lr_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x2, aq, rl, rs2, rs1, 3, rd, 0x2f); -} - -static inline u32 rv_sc_d(u8 rd, u8 rs2, u8 rs1, u8 aq, u8 rl) -{ - return rv_amo_insn(0x3, aq, rl, rs2, rs1, 3, rd, 0x2f); -} - -/* RV64-only RVC instructions. 
*/ - -static inline u16 rvc_ld(u8 rd, u32 imm8, u8 rs1) -{ - u32 imm_hi, imm_lo; - - imm_hi = (imm8 & 0x38) >> 3; - imm_lo = (imm8 & 0xc0) >> 6; - return rv_cl_insn(0x3, imm_hi, rs1, imm_lo, rd, 0x0); -} - -static inline u16 rvc_sd(u8 rs1, u32 imm8, u8 rs2) -{ - u32 imm_hi, imm_lo; - - imm_hi = (imm8 & 0x38) >> 3; - imm_lo = (imm8 & 0xc0) >> 6; - return rv_cs_insn(0x7, imm_hi, rs1, imm_lo, rs2, 0x0); -} - -static inline u16 rvc_subw(u8 rd, u8 rs) -{ - return rv_ca_insn(0x27, rd, 0, rs, 0x1); -} - -static inline u16 rvc_addiw(u8 rd, u32 imm6) -{ - return rv_ci_insn(0x1, imm6, rd, 0x1); -} - -static inline u16 rvc_ldsp(u8 rd, u32 imm9) -{ - u32 imm; - - imm = ((imm9 & 0x1c0) >> 6) | (imm9 & 0x38); - return rv_ci_insn(0x3, imm, rd, 0x2); -} - -static inline u16 rvc_sdsp(u32 imm9, u8 rs2) -{ - u32 imm; - - imm = (imm9 & 0x38) | ((imm9 & 0x1c0) >> 6); - return rv_css_insn(0x7, imm, rs2, 0x2); -} - -#endif /* __riscv_xlen == 64 */ - /* Helper functions that emit RVC instructions when possible. 
*/ static inline void emit_jalr(u8 rd, u8 rs, s32 imm, struct rv_jit_context *ctx) From patchwork Fri Aug 4 02:10:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Charlie Jenkins X-Patchwork-Id: 13341147 From: Charlie Jenkins Date: Thu, 03 Aug 2023 19:10:35 -0700 Subject: [PATCH 10/10] RISC-V: Refactor bug and traps instructions MIME-Version: 1.0 Message-Id: <20230803-master-refactor-instructions-v4-v1-10-2128e61fa4ff@rivosinc.com> References: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com> In-Reply-To: <20230803-master-refactor-instructions-v4-v1-0-2128e61fa4ff@rivosinc.com> To: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, bpf@vger.kernel.org Cc: Paul Walmsley , Palmer Dabbelt , Albert Ou , Peter Zijlstra , Josh Poimboeuf , Jason Baron , Steven Rostedt , Ard Biesheuvel , Anup Patel , Atish Patra , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko
, Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa , =?utf-8?b?Qmo=?= =?utf-8?b?w7ZybiBUw7ZwZWw=?= , Luke Nelson , Xi Wang , Nam Cao , Charlie Jenkins X-Mailer: b4 0.12.3 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Use shared instruction definitions in insn.h instead of manually constructing them. Signed-off-by: Charlie Jenkins --- arch/riscv/include/asm/bug.h | 18 +++++------------- arch/riscv/kernel/traps.c | 9 +++++---- 2 files changed, 10 insertions(+), 17 deletions(-) diff --git a/arch/riscv/include/asm/bug.h b/arch/riscv/include/asm/bug.h index 1aaea81fb141..6d9002d93f85 100644 --- a/arch/riscv/include/asm/bug.h +++ b/arch/riscv/include/asm/bug.h @@ -11,21 +11,13 @@ #include #include +#include -#define __INSN_LENGTH_MASK _UL(0x3) -#define __INSN_LENGTH_32 _UL(0x3) -#define __COMPRESSED_INSN_MASK _UL(0xffff) +#define __IS_BUG_INSN_32(insn) riscv_insn_is_c_ebreak(insn) +#define __IS_BUG_INSN_16(insn) riscv_insn_is_ebreak(insn) -#define __BUG_INSN_32 _UL(0x00100073) /* ebreak */ -#define __BUG_INSN_16 _UL(0x9002) /* c.ebreak */ - -#define GET_INSN_LENGTH(insn) \ -({ \ - unsigned long __len; \ - __len = ((insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32) ? 
\ - 4UL : 2UL; \ - __len; \ -}) +#define __BUG_INSN_32 RVG_MATCH_EBREAK +#define __BUG_INSN_16 RVC_MATCH_C_EBREAK typedef u32 bug_insn_t; diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c index f910dfccbf5d..970b118d36b5 100644 --- a/arch/riscv/kernel/traps.c +++ b/arch/riscv/kernel/traps.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include #include @@ -243,7 +244,7 @@ static inline unsigned long get_break_insn_length(unsigned long pc) if (get_kernel_nofault(insn, (bug_insn_t *)pc)) return 0; - return GET_INSN_LENGTH(insn); + return INSN_LEN(insn); } void handle_break(struct pt_regs *regs) @@ -389,10 +390,10 @@ int is_valid_bugaddr(unsigned long pc) return 0; if (get_kernel_nofault(insn, (bug_insn_t *)pc)) return 0; - if ((insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32) - return (insn == __BUG_INSN_32); + if (INSN_IS_C(insn)) + return __IS_BUG_INSN_16(insn); else - return ((insn & __COMPRESSED_INSN_MASK) == __BUG_INSN_16); + return __IS_BUG_INSN_32(insn); } #endif /* CONFIG_GENERIC_BUG */