From patchwork Mon Mar 31 09:43:15 2025
From: Yicong Yang
Subject: [PATCH v2 1/6] arm64: Provide basic EL2 setup for FEAT_{LS64, LS64_V} usage at EL0/1
Date: Mon, 31 Mar 2025 17:43:15 +0800
Message-ID: <20250331094320.35226-2-yangyicong@huawei.com>
In-Reply-To: <20250331094320.35226-1-yangyicong@huawei.com>
References: <20250331094320.35226-1-yangyicong@huawei.com>
From: Yicong Yang

Instructions introduced by FEAT_{LS64, LS64_V} are controlled by
HCRX_EL2.{EnALS, EnASR}. Configure both of these to allow usage at
EL0/1.

This doesn't mean these instructions are always available at EL0/1
when implemented: the hypervisor still has control at runtime.

Signed-off-by: Yicong Yang
---
 arch/arm64/include/asm/el2_setup.h | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
index ebceaae3c749..0259941602c4 100644
--- a/arch/arm64/include/asm/el2_setup.h
+++ b/arch/arm64/include/asm/el2_setup.h
@@ -57,9 +57,19 @@
 	/* Enable GCS if supported */
 	mrs_s	x1, SYS_ID_AA64PFR1_EL1
 	ubfx	x1, x1, #ID_AA64PFR1_EL1_GCS_SHIFT, #4
-	cbz	x1, .Lset_hcrx_\@
+	cbz	x1, .Lskip_gcs_hcrx_\@
 	orr	x0, x0, #HCRX_EL2_GCSEn
 
+.Lskip_gcs_hcrx_\@:
+	/* Enable LS64, LS64_V if supported */
+	mrs_s	x1, SYS_ID_AA64ISAR1_EL1
+	ubfx	x1, x1, #ID_AA64ISAR1_EL1_LS64_SHIFT, #4
+	cbz	x1, .Lset_hcrx_\@
+	orr	x0, x0, #HCRX_EL2_EnALS
+	cmp	x1, #ID_AA64ISAR1_EL1_LS64_LS64_V
+	b.lt	.Lset_hcrx_\@
+	orr	x0, x0, #HCRX_EL2_EnASR
+
 .Lset_hcrx_\@:
 	msr_s	SYS_HCRX_EL2, x0
 .Lskip_hcrx_\@:
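For reference, the decision the EL2 init assembly above makes can be
sketched in C. This is a minimal illustration, not kernel code: the
ID_AA64ISAR1_EL1.LS64 field values (1 = LS64, 2 adds LS64_V) follow this
series, the EnALS/EnASR bit positions follow the booting.rst update in
the next patch, and the field shift of 60 is an assumption taken from
the architecture, not from this patch.

    #include <stdint.h>

    /* Derive the HCRX_EL2 enable bits from ID_AA64ISAR1_EL1.LS64.
     * The shift of 60 is an assumption; bit positions per booting.rst.
     */
    #define ID_AA64ISAR1_LS64_SHIFT	60

    static uint64_t hcrx_bits_for_ls64(uint64_t id_aa64isar1)
    {
    	unsigned int ls64 = (id_aa64isar1 >> ID_AA64ISAR1_LS64_SHIFT) & 0xf;
    	uint64_t hcrx = 0;

    	if (ls64 >= 1)			/* FEAT_LS64: LD64B/ST64B */
    		hcrx |= 1ULL << 1;	/* HCRX_EL2.EnALS */
    	if (ls64 >= 2)			/* FEAT_LS64_V: ST64BV */
    		hcrx |= 1ULL << 2;	/* HCRX_EL2.EnASR */
    	return hcrx;
    }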
From patchwork Mon Mar 31 09:43:16 2025
From: Yicong Yang
Subject: [PATCH v2 2/6] arm64: Add support for FEAT_{LS64, LS64_V}
Date: Mon, 31 Mar 2025 17:43:16 +0800
Message-ID: <20250331094320.35226-3-yangyicong@huawei.com>
In-Reply-To: <20250331094320.35226-1-yangyicong@huawei.com>
References: <20250331094320.35226-1-yangyicong@huawei.com>

From: Yicong Yang

Armv8.7 introduces single-copy atomic 64-byte load and store
instructions and their variants, named under FEAT_{LS64, LS64_V}.
These features are identified by ID_AA64ISAR1_EL1.LS64, and the use of
such instructions in userspace (EL0) can be trapped. In order to
support the use of the corresponding instructions in userspace:
- Make ID_AA64ISAR1_EL1.LS64 visible to userspace
- Add identification and enabling in the cpufeature list
- Expose support for these features to userspace through HWCAP3
  and cpuinfo

Signed-off-by: Yicong Yang
---
 Documentation/arch/arm64/booting.rst    | 12 ++++++
 Documentation/arch/arm64/elf_hwcaps.rst |  6 +++
 arch/arm64/include/asm/hwcap.h          |  2 +
 arch/arm64/include/uapi/asm/hwcap.h     |  2 +
 arch/arm64/kernel/cpufeature.c          | 51 +++++++++++++++++++++++++
 arch/arm64/kernel/cpuinfo.c             |  2 +
 arch/arm64/tools/cpucaps                |  2 +
 7 files changed, 77 insertions(+)

diff --git a/Documentation/arch/arm64/booting.rst b/Documentation/arch/arm64/booting.rst
index dee7b6de864f..cd26e6326146 100644
--- a/Documentation/arch/arm64/booting.rst
+++ b/Documentation/arch/arm64/booting.rst
@@ -483,6 +483,18 @@ Before jumping into the kernel, the following conditions must be met:
 
     - MDCR_EL3.TPM (bit 6) must be initialized to 0b0
 
+  For CPUs with support for 64-byte loads and stores without status (FEAT_LS64):
+
+  - If the kernel is entered at EL1 and EL2 is present:
+
+    - HCRX_EL2.EnALS (bit 1) must be initialised to 0b1.
+
+  For CPUs with support for 64-byte loads and stores with status (FEAT_LS64_V):
+
+  - If the kernel is entered at EL1 and EL2 is present:
+
+    - HCRX_EL2.EnASR (bit 2) must be initialised to 0b1.
+
 The requirements described above for CPU mode, caches, MMUs, architected
 timers, coherency and system registers apply to all CPUs.  All CPUs must
 enter the kernel in the same exception level.  Where the values documented
diff --git a/Documentation/arch/arm64/elf_hwcaps.rst b/Documentation/arch/arm64/elf_hwcaps.rst
index 69d7afe56853..9e6db258ff48 100644
--- a/Documentation/arch/arm64/elf_hwcaps.rst
+++ b/Documentation/arch/arm64/elf_hwcaps.rst
@@ -435,6 +435,12 @@ HWCAP2_SME_SF8DP4
 HWCAP2_POE
     Functionality implied by ID_AA64MMFR3_EL1.S1POE == 0b0001.
 
+HWCAP3_LS64
+    Functionality implied by ID_AA64ISAR1_EL1.LS64 == 0b0001.
+
+HWCAP3_LS64_V
+    Functionality implied by ID_AA64ISAR1_EL1.LS64 == 0b0010.
+
 4. Unused AT_HWCAP bits
 -----------------------
 
diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h
index 1c3f9617d54f..f45ab66d3466 100644
--- a/arch/arm64/include/asm/hwcap.h
+++ b/arch/arm64/include/asm/hwcap.h
@@ -176,6 +176,8 @@
 #define KERNEL_HWCAP_POE		__khwcap2_feature(POE)
 
 #define __khwcap3_feature(x)		(const_ilog2(HWCAP3_ ## x) + 128)
+#define KERNEL_HWCAP_LS64		__khwcap3_feature(LS64)
+#define KERNEL_HWCAP_LS64_V		__khwcap3_feature(LS64_V)
 
 /*
  * This yields a mask that user programs can use to figure out what
diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
index 705a7afa8e58..88579dad778d 100644
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -143,5 +143,7 @@
 /*
  * HWCAP3 flags - for AT_HWCAP3
  */
+#define HWCAP3_LS64	(1UL << 0)
+#define HWCAP3_LS64_V	(1UL << 1)
 
 #endif /* _UAPI__ASM_HWCAP_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9c4d6d552b25..61d9d9959269 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -231,6 +231,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_LS64_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_XS_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_I8MM_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_DGH_SHIFT, 4, 0),
@@ -2278,6 +2279,38 @@ static void cpu_enable_e0pd(struct arm64_cpu_capabilities const *cap)
 }
 #endif /* CONFIG_ARM64_E0PD */
 
+static bool has_ls64(const struct arm64_cpu_capabilities *entry, int __unused)
+{
+	u64 ls64;
+
+	ls64 = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
+					   entry->field_pos, entry->sign);
+
+	if (ls64 == ID_AA64ISAR1_EL1_LS64_NI ||
+	    ls64 > ID_AA64ISAR1_EL1_LS64_LS64_ACCDATA)
+		return false;
+
+	if (entry->capability == ARM64_HAS_LS64 &&
+	    ls64 >= ID_AA64ISAR1_EL1_LS64_LS64)
+		return true;
+
+	if (entry->capability == ARM64_HAS_LS64_V &&
+	    ls64 >= ID_AA64ISAR1_EL1_LS64_LS64_V)
+		return true;
+
+	return false;
+}
+
+static void cpu_enable_ls64(struct arm64_cpu_capabilities const *cap)
+{
+	sysreg_clear_set(sctlr_el1, SCTLR_EL1_EnALS, SCTLR_EL1_EnALS);
+}
+
+static void cpu_enable_ls64_v(struct arm64_cpu_capabilities const *cap)
+{
+	sysreg_clear_set(sctlr_el1, SCTLR_EL1_EnASR, SCTLR_EL1_EnASR);
+}
+
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
 				   int scope)
@@ -3041,6 +3074,22 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_pmuv3,
 	},
 #endif
+	{
+		.desc = "LS64",
+		.capability = ARM64_HAS_LS64,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_ls64,
+		.cpu_enable = cpu_enable_ls64,
+		ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, LS64, LS64)
+	},
+	{
+		.desc = "LS64_V",
+		.capability = ARM64_HAS_LS64_V,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_ls64,
+		.cpu_enable = cpu_enable_ls64_v,
+		ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, LS64, LS64_V)
+	},
 	{},
 };
 
@@ -3153,6 +3202,8 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
 	HWCAP_CAP(ID_AA64ISAR1_EL1, BF16, EBF16, CAP_HWCAP, KERNEL_HWCAP_EBF16),
 	HWCAP_CAP(ID_AA64ISAR1_EL1, DGH, IMP, CAP_HWCAP, KERNEL_HWCAP_DGH),
 	HWCAP_CAP(ID_AA64ISAR1_EL1, I8MM, IMP, CAP_HWCAP, KERNEL_HWCAP_I8MM),
+	HWCAP_CAP(ID_AA64ISAR1_EL1, LS64, LS64, CAP_HWCAP, KERNEL_HWCAP_LS64),
+	HWCAP_CAP(ID_AA64ISAR1_EL1, LS64, LS64_V, CAP_HWCAP, KERNEL_HWCAP_LS64_V),
 	HWCAP_CAP(ID_AA64ISAR2_EL1, LUT, IMP, CAP_HWCAP, KERNEL_HWCAP_LUT),
 	HWCAP_CAP(ID_AA64ISAR3_EL1, FAMINMAX, IMP, CAP_HWCAP, KERNEL_HWCAP_FAMINMAX),
 	HWCAP_CAP(ID_AA64MMFR2_EL1, AT, IMP, CAP_HWCAP, KERNEL_HWCAP_USCAT),
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 285d7d538342..a56c313e6474 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -81,6 +81,8 @@ static const char *const hwcap_str[] = {
 	[KERNEL_HWCAP_PACA]		= "paca",
 	[KERNEL_HWCAP_PACG]		= "pacg",
 	[KERNEL_HWCAP_GCS]		= "gcs",
+	[KERNEL_HWCAP_LS64]		= "ls64",
+	[KERNEL_HWCAP_LS64_V]		= "ls64_v",
 	[KERNEL_HWCAP_DCPODP]		= "dcpodp",
 	[KERNEL_HWCAP_SVE2]		= "sve2",
 	[KERNEL_HWCAP_SVEAES]		= "sveaes",
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index 772c1b008e43..4a6a508073d3 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -42,6 +42,8 @@ HAS_HCX
 HAS_LDAPR
 HAS_LPA2
 HAS_LSE_ATOMICS
+HAS_LS64
+HAS_LS64_V
 HAS_MOPS
 HAS_NESTED_VIRT
 HAS_PAN
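A userspace consumer would query the new hwcaps through AT_HWCAP3. A
minimal sketch follows; AT_HWCAP3 and the HWCAP3_* constants may be
missing from older toolchain headers, so they are defined locally with
the bit values from this series, and the AT_HWCAP3 value of 29 is an
assumption taken from the uapi auxvec header. On kernels without
AT_HWCAP3, getauxval() simply returns 0, so the check degrades safely.

    #include <stdio.h>
    #include <sys/auxv.h>

    #ifndef AT_HWCAP3
    #define AT_HWCAP3	29	/* assumed from include/uapi/linux/auxvec.h */
    #endif
    #define HWCAP3_LS64	(1UL << 0)
    #define HWCAP3_LS64_V	(1UL << 1)

    int main(void)
    {
    	unsigned long hwcap3 = getauxval(AT_HWCAP3);

    	printf("ls64:   %s\n", (hwcap3 & HWCAP3_LS64) ? "yes" : "no");
    	printf("ls64_v: %s\n", (hwcap3 & HWCAP3_LS64_V) ? "yes" : "no");
    	return 0;
    }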
From patchwork Mon Mar 31 09:43:17 2025
From: Yicong Yang
Subject: [PATCH v2 3/6] KVM: arm64: Enable FEAT_{LS64, LS64_V} in the supported guest
Date: Mon, 31 Mar 2025 17:43:17 +0800
Message-ID: <20250331094320.35226-4-yangyicong@huawei.com>
In-Reply-To: <20250331094320.35226-1-yangyicong@huawei.com>
References: <20250331094320.35226-1-yangyicong@huawei.com>

From: Yicong Yang

Use of the FEAT_{LS64, LS64_V} instructions in a guest is also
controlled by HCRX_EL2.{EnALS, EnASR}. Enable these bits if the guest
has the corresponding features.

Signed-off-by: Yicong Yang
---
 arch/arm64/include/asm/kvm_emulate.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index d7cf66573aca..9165fcf719ab 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -684,6 +684,12 @@ static inline void vcpu_set_hcrx(struct kvm_vcpu *vcpu)
 		if (kvm_has_fpmr(kvm))
 			vcpu->arch.hcrx_el2 |= HCRX_EL2_EnFPM;
+
+		if (kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64))
+			vcpu->arch.hcrx_el2 |= HCRX_EL2_EnALS;
+
+		if (kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_V))
+			vcpu->arch.hcrx_el2 |= HCRX_EL2_EnASR;
 	}
 }
 
 #endif /* __ARM64_KVM_EMULATE_H__ */
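The kvm_has_feat() checks above boil down to comparing the guest's
sanitised ID_AA64ISAR1_EL1.LS64 field against a minimum enumeration
value. A standalone sketch, not the KVM macro: the field position of
bits [63:60] is an assumption from the architecture, and the helper
name is illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    /* True if the guest's ID_AA64ISAR1_EL1.LS64 field is at least 'min'
     * (1 = LS64, 2 = LS64_V, 3 = LS64_ACCDATA). */
    static bool guest_has_ls64_level(uint64_t guest_id_aa64isar1, unsigned int min)
    {
    	return ((guest_id_aa64isar1 >> 60) & 0xf) >= min;
    }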
From patchwork Mon Mar 31 09:43:18 2025
From: Yicong Yang
Subject: [PATCH v2 4/6] kselftest/arm64: Add HWCAP test for FEAT_{LS64, LS64_V}
Date: Mon, 31 Mar 2025 17:43:18 +0800
Message-ID: <20250331094320.35226-5-yangyicong@huawei.com>
In-Reply-To: <20250331094320.35226-1-yangyicong@huawei.com>
References: <20250331094320.35226-1-yangyicong@huawei.com>

From: Yicong Yang

Add tests for FEAT_{LS64, LS64_V}. Issue the related instructions if
the feature is present; no SIGILL should be received. When such
instructions operate on Device memory or non-cacheable memory we may
receive a SIGBUS during the test (without FEAT_LS64WB). Just ignore
it, since we only test whether the instructions themselves can be
issued as expected on platforms declaring support for these features.

Signed-off-by: Yicong Yang
---
 tools/testing/selftests/arm64/abi/hwcap.c | 90 +++++++++++++++++++++++
 1 file changed, 90 insertions(+)

diff --git a/tools/testing/selftests/arm64/abi/hwcap.c b/tools/testing/selftests/arm64/abi/hwcap.c
index 35f521e5f41c..e1724f038cc1 100644
--- a/tools/testing/selftests/arm64/abi/hwcap.c
+++ b/tools/testing/selftests/arm64/abi/hwcap.c
@@ -11,6 +11,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 #include
@@ -578,6 +580,78 @@ static void lrcpc3_sigill(void)
 	: "=r" (data0), "=r" (data1) : "r" (src) :);
 }
 
+static void ignore_signal(int sig, siginfo_t *info, void *context)
+{
+	ucontext_t *uc = context;
+
+	uc->uc_mcontext.pc += 4;
+}
+
+static void ls64_sigill(void)
+{
+	struct sigaction ign, old;
+	char src[64] __aligned(64) = { 1 };
+
+	/*
+	 * LS64, LS64_V require the target memory to be Device/Non-cacheable
+	 * (if FEAT_LS64WB is not supported) and the completer to support
+	 * these instructions, otherwise we'll receive a SIGBUS. Since we
+	 * are only testing the ABI here, just ignore the SIGBUS and see
+	 * if we can execute the instructions without receiving a SIGILL.
+	 * Restore the SIGBUS handler after this test.
+	 */
+	ign.sa_sigaction = ignore_signal;
+	ign.sa_flags = SA_SIGINFO | SA_RESTART;
+	sigemptyset(&ign.sa_mask);
+	sigaction(SIGBUS, &ign, &old);
+
+	register void *xn asm ("x8") = src;
+	register u64 xt_1 asm ("x0");
+	register u64 __maybe_unused xt_2 asm ("x1");
+	register u64 __maybe_unused xt_3 asm ("x2");
+	register u64 __maybe_unused xt_4 asm ("x3");
+	register u64 __maybe_unused xt_5 asm ("x4");
+	register u64 __maybe_unused xt_6 asm ("x5");
+	register u64 __maybe_unused xt_7 asm ("x6");
+	register u64 __maybe_unused xt_8 asm ("x7");
+
+	/* LD64B x0, [x8] */
+	asm volatile(".inst 0xf83fd100" : "=r" (xt_1) : "r" (xn));
+
+	/* ST64B x0, [x8] */
+	asm volatile(".inst 0xf83f9100" : : "r" (xt_1), "r" (xn));
+
+	sigaction(SIGBUS, &old, NULL);
+}
+
+static void ls64_v_sigill(void)
+{
+	struct sigaction ign, old;
+	char dst[64] __aligned(64);
+
+	/* See comment in ls64_sigill() */
+	ign.sa_sigaction = ignore_signal;
+	ign.sa_flags = SA_SIGINFO | SA_RESTART;
+	sigemptyset(&ign.sa_mask);
+	sigaction(SIGBUS, &ign, &old);
+
+	register void *xn asm ("x8") = dst;
+	register u64 xt_1 asm ("x0") = 1;
+	register u64 __maybe_unused xt_2 asm ("x1") = 2;
+	register u64 __maybe_unused xt_3 asm ("x2") = 3;
+	register u64 __maybe_unused xt_4 asm ("x3") = 4;
+	register u64 __maybe_unused xt_5 asm ("x4") = 5;
+	register u64 __maybe_unused xt_6 asm ("x5") = 6;
+	register u64 __maybe_unused xt_7 asm ("x6") = 7;
+	register u64 __maybe_unused xt_8 asm ("x7") = 8;
+	register u64 st asm ("x9");
+
+	/* ST64BV x9, x0, [x8] */
+	asm volatile(".inst 0xf829b100" : "=r" (st) : "r" (xt_1), "r" (xn));
+
+	sigaction(SIGBUS, &old, NULL);
+}
+
 static const struct hwcap_data {
 	const char *name;
 	unsigned long at_hwcap;
@@ -1098,6 +1172,22 @@ static const struct hwcap_data {
 		.sigill_fn = hbc_sigill,
 		.sigill_reliable = true,
 	},
+	{
+		.name = "LS64",
+		.at_hwcap = AT_HWCAP3,
+		.hwcap_bit = HWCAP3_LS64,
+		.cpuinfo = "ls64",
+		.sigill_fn = ls64_sigill,
+		.sigill_reliable = true,
+	},
+	{
+		.name = "LS64_V",
+		.at_hwcap = AT_HWCAP3,
+		.hwcap_bit = HWCAP3_LS64_V,
+		.cpuinfo = "ls64_v",
+		.sigill_fn = ls64_v_sigill,
+		.sigill_reliable = true,
+	},
 };
 
 typedef void (*sighandler_fn)(int, siginfo_t *, void *);
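The raw .inst values in the test follow the LD64B/ST64B/ST64BV register
layout (Rt in bits [4:0], Rn in [9:5], and the status register Rs in
[20:16] for ST64BV). Small encoders reproduce the three opcodes used
above; the base values are inferred only from those three encodings, so
treat them as assumptions rather than a verified encoder.

    #include <stdint.h>

    static uint32_t enc_ld64b(unsigned rt, unsigned rn)
    {
    	return 0xf83fd000u | (rn << 5) | rt;	/* LD64B Xt, [Xn] */
    }

    static uint32_t enc_st64b(unsigned rt, unsigned rn)
    {
    	return 0xf83f9000u | (rn << 5) | rt;	/* ST64B Xt, [Xn] */
    }

    static uint32_t enc_st64bv(unsigned rs, unsigned rt, unsigned rn)
    {
    	return 0xf820b000u | (rs << 16) | (rn << 5) | rt;	/* ST64BV Xs, Xt, [Xn] */
    }

enc_ld64b(0, 8) yields 0xf83fd100, enc_st64b(0, 8) yields 0xf83f9100,
and enc_st64bv(9, 0, 8) yields 0xf829b100, matching the .inst values in
the patch.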
From patchwork Mon Mar 31 09:43:19 2025
From: Yicong Yang
Subject: [PATCH v2 5/6] arm64: Add ESR.DFSC definition of unsupported exclusive or atomic access
Date: Mon, 31 Mar 2025 17:43:19 +0800
Message-ID: <20250331094320.35226-6-yangyicong@huawei.com>
In-Reply-To: <20250331094320.35226-1-yangyicong@huawei.com>
References: <20250331094320.35226-1-yangyicong@huawei.com>

From: Yicong Yang

DFSC 0x35 indicates an IMPLEMENTATION DEFINED fault for an unsupported
exclusive or atomic access. Add the ESR_ELx_FSC definition and a
corresponding wrapper.
Signed-off-by: Yicong Yang
---
 arch/arm64/include/asm/esr.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index d1b1a33f9a8b..2f357442f646 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -121,6 +121,7 @@
 #define ESR_ELx_FSC_SEA_TTW(n)	(0x14 + (n))
 #define ESR_ELx_FSC_SECC	(0x18)
 #define ESR_ELx_FSC_SECC_TTW(n)	(0x1c + (n))
+#define ESR_ELx_FSC_EXCL_ATOMIC	(0x35)
 
 /* Status codes for individual page table levels */
 #define ESR_ELx_FSC_ACCESS_L(n)	(ESR_ELx_FSC_ACCESS + (n))
@@ -464,6 +465,13 @@ static inline bool esr_fsc_is_access_flag_fault(unsigned long esr)
 	       (esr == ESR_ELx_FSC_ACCESS_L(0));
 }
 
+static inline bool esr_fsc_is_excl_atomic_fault(unsigned long esr)
+{
+	esr = esr & ESR_ELx_FSC;
+
+	return esr == ESR_ELx_FSC_EXCL_ATOMIC;
+}
+
 /* Indicate whether ESR.EC==0x1A is for an ERETAx instruction */
 static inline bool esr_iss_is_eretax(unsigned long esr)
 {
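A worked decode shows where the new code sits within an ESR value. For
a data abort from a lower exception level carrying DFSC 0x35, ESR_ELx
reads 0x92000035. The example value is constructed for illustration,
not taken from the patch; the EC/IL field positions are from the
architecture.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
    	uint64_t esr = 0x92000035;	/* example: DABT, lower EL, DFSC 0x35 */
    	unsigned ec   = (esr >> 26) & 0x3f;	/* 0x24: data abort from lower EL */
    	unsigned il   = (esr >> 25) & 0x1;	/* 1: 32-bit instruction */
    	unsigned dfsc = esr & 0x3f;		/* 0x35: unsupported excl/atomic */

    	printf("EC=%#x IL=%u DFSC=%#x\n", ec, il, dfsc);
    	return 0;
    }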
From patchwork Mon Mar 31 09:43:20 2025
From: Yicong Yang
Subject: [PATCH v2 6/6] KVM: arm64: Handle DABT caused by LS64* instructions on unsupported memory
Date: Mon, 31 Mar 2025 17:43:20 +0800
Message-ID: <20250331094320.35226-7-yangyicong@huawei.com>
In-Reply-To: <20250331094320.35226-1-yangyicong@huawei.com>
References: <20250331094320.35226-1-yangyicong@huawei.com>

From: Yicong Yang

If FEAT_LS64WB is not supported, the FEAT_LS64* instructions only
support accesses to Device/Uncacheable memory; otherwise a data abort
for unsupported exclusive or atomic access (0x35) is generated per the
spec. The exception level this fault is routed to is IMPLEMENTATION
DEFINED, and it is possible for it to be routed to EL2 for a VHE VM.
Per DDI0487K.a Section C3.2.12.2, "Single-copy atomic 64-byte
load/store":

  The check is performed against the resulting memory type after all
  enabled stages of translation. In this case the fault is reported at
  the final enabled stage of translation.

If an implementation generates the DABT at the final enabled stage of
translation (stage-2 here), inject a DABT back to the guest to handle
it.

Signed-off-by: Yicong Yang
---
 arch/arm64/include/asm/kvm_emulate.h |  1 +
 arch/arm64/kvm/inject_fault.c        | 35 ++++++++++++++++++++++++++
 arch/arm64/kvm/mmu.c                 | 37 +++++++++++++++++++++++++++-
 3 files changed, 72 insertions(+), 1 deletion(-)
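The ESR value this injection path builds for a guest faulting at EL1
can be computed by hand. A sketch follows; the EC and IL field
positions are assumptions from the architecture, and the DFSC comes
from the previous patch.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
    	uint64_t esr = 0;

    	esr |= 0x25ULL << 26;	/* ESR_ELx_EC_DABT_CUR: abort taken from EL1 */
    	esr |= 1ULL << 25;	/* ESR_ELx_IL: 32-bit instruction */
    	esr |= 0x35;		/* ESR_ELx_FSC_EXCL_ATOMIC */

    	printf("guest ESR_EL1 = %#llx\n", (unsigned long long)esr);	/* 0x96000035 */
    	return 0;
    }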
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 9165fcf719ab..96701b19982e 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -47,6 +47,7 @@ void kvm_skip_instr32(struct kvm_vcpu *vcpu);
 void kvm_inject_undefined(struct kvm_vcpu *vcpu);
 void kvm_inject_vabt(struct kvm_vcpu *vcpu);
 void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
+void kvm_inject_dabt_excl_atomic(struct kvm_vcpu *vcpu, unsigned long addr);
 void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
 void kvm_inject_size_fault(struct kvm_vcpu *vcpu);
 
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index a640e839848e..1c3643126b92 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -171,6 +171,41 @@ void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr)
 	inject_abt64(vcpu, false, addr);
 }
 
+/**
+ * kvm_inject_dabt_excl_atomic - inject a data abort for unsupported exclusive
+ *				 or atomic access
+ * @vcpu: The VCPU to receive the data abort
+ * @addr: The address to report in the DFAR
+ *
+ * It is assumed that this code is called from the VCPU thread and that the
+ * VCPU therefore is not currently executing guest code.
+ */
+void kvm_inject_dabt_excl_atomic(struct kvm_vcpu *vcpu, unsigned long addr)
+{
+	unsigned long cpsr = *vcpu_cpsr(vcpu);
+	u64 esr = 0;
+
+	pend_sync_exception(vcpu);
+
+	if (kvm_vcpu_trap_il_is32bit(vcpu))
+		esr |= ESR_ELx_IL;
+
+	if ((cpsr & PSR_MODE_MASK) == PSR_MODE_EL0t)
+		esr |= ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT;
+	else
+		esr |= ESR_ELx_EC_DABT_CUR << ESR_ELx_EC_SHIFT;
+
+	esr |= ESR_ELx_FSC_EXCL_ATOMIC;
+
+	if (match_target_el(vcpu, unpack_vcpu_flag(EXCEPT_AA64_EL1_SYNC))) {
+		vcpu_write_sys_reg(vcpu, addr, FAR_EL1);
+		vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
+	} else {
+		vcpu_write_sys_reg(vcpu, addr, FAR_EL2);
+		vcpu_write_sys_reg(vcpu, esr, ESR_EL2);
+	}
+}
+
 /**
  * kvm_inject_pabt - inject a prefetch abort into the guest
  * @vcpu: The VCPU to receive the prefetch abort
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 2feb6c6b63af..7ae4e48fd040 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1658,6 +1658,25 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (exec_fault && device)
 		return -ENOEXEC;
 
+	if (esr_fsc_is_excl_atomic_fault(kvm_vcpu_get_esr(vcpu))) {
+		/*
+		 * The target address is normal memory on the host. We came
+		 * here because either:
+		 * 1) the guest mapped it as device memory and issued LS64 operations
+		 * 2) the VMM mistakenly reported it as device memory
+		 * Warn the VMM and inject the DABT back to the guest.
+		 */
+		if (!device)
+			kvm_err("memory attributes may be incorrect for hva 0x%lx\n", hva);
+
+		/*
+		 * Otherwise it's a piece of device memory on the host.
+		 * Inject the DABT back to the guest since the mapping
+		 * is wrong.
+		 */
+		kvm_inject_dabt_excl_atomic(vcpu, kvm_vcpu_get_hfar(vcpu));
+	}
+
 	/*
 	 * Potentially reduce shadow S2 permissions to match the guest's own
 	 * S2. For exec faults, we'd only reach this point if the guest
@@ -1836,7 +1855,8 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	/* Check the stage-2 fault is trans. fault or write fault */
 	if (!esr_fsc_is_translation_fault(esr) &&
 	    !esr_fsc_is_permission_fault(esr) &&
-	    !esr_fsc_is_access_flag_fault(esr)) {
+	    !esr_fsc_is_access_flag_fault(esr) &&
+	    !esr_fsc_is_excl_atomic_fault(esr)) {
 		kvm_err("Unsupported FSC: EC=%#x xFSC=%#lx ESR_EL2=%#lx\n",
 			kvm_vcpu_trap_get_class(vcpu),
 			(unsigned long)kvm_vcpu_trap_get_fault(vcpu),
@@ -1919,6 +1939,21 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 		goto out_unlock;
 	}
 
+	/*
+	 * If an instruction of FEAT_{LS64, LS64_V} operated on an
+	 * unsupported memory region, a DABT for unsupported exclusive
+	 * or atomic access is generated. It's IMPLEMENTATION DEFINED
+	 * whether the exception is taken as a stage-1 DABT or at the
+	 * final enabled stage of translation (stage-2 in this case,
+	 * since we got here). Inject a DABT to the guest to handle it
+	 * if it's implemented as a stage-2 DABT.
+	 */
+	if (esr_fsc_is_excl_atomic_fault(esr)) {
+		kvm_inject_dabt_excl_atomic(vcpu, kvm_vcpu_get_hfar(vcpu));
+		return 1;
+	}
+
 	/*
 	 * The IPA is reported as [MAX:12], so we need to
 	 * complement it with the bottom 12 bits from the