From patchwork Mon Oct 28 09:40:11 2024
From: "Aneesh Kumar K.V (Arm)"
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev
Cc: Suzuki K Poulose, Steven Price, Will Deacon, Catalin Marinas, Marc Zyngier, Mark Rutland, Oliver Upton, Joey Gouly, Zenghui Yu, "Aneesh Kumar K.V (Arm)"
Subject: [PATCH 1/4] arm64: Update the values to binary from hex
Date: Mon, 28 Oct 2024 15:10:11 +0530
Message-ID: <20241028094014.2596619-2-aneesh.kumar@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241028094014.2596619-1-aneesh.kumar@kernel.org>
References: <20241028094014.2596619-1-aneesh.kumar@kernel.org>

This matches the representation used in the Arm ARM (Architecture Reference
Manual). No functional change in this patch.

Signed-off-by: Aneesh Kumar K.V (Arm)
---
 arch/arm64/include/asm/memory.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 0480c61dbb4f..ca42f6d87c16 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -178,17 +178,17 @@
 /*
  * Memory types for Stage-2 translation
  */
-#define MT_S2_NORMAL            0xf
-#define MT_S2_NORMAL_NC         0x5
-#define MT_S2_DEVICE_nGnRE      0x1
+#define MT_S2_NORMAL            0b1111
+#define MT_S2_NORMAL_NC         0b0101
+#define MT_S2_DEVICE_nGnRE      0b0001
 
 /*
  * Memory types for Stage-2 translation when ID_AA64MMFR2_EL1.FWB is 0001
  * Stage-2 enforces Normal-WB and Device-nGnRE
  */
-#define MT_S2_FWB_NORMAL        6
-#define MT_S2_FWB_NORMAL_NC     5
-#define MT_S2_FWB_DEVICE_nGnRE  1
+#define MT_S2_FWB_NORMAL        0b0110
+#define MT_S2_FWB_NORMAL_NC     0b0101
+#define MT_S2_FWB_DEVICE_nGnRE  0b0001
 
 #ifdef CONFIG_ARM64_4K_PAGES
 #define IOREMAP_MAX_ORDER       (PUD_SHIFT)
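
For reference, the binary spellings encode exactly the same values as the old
hex ones, which is what makes this a no-functional-change patch; a mechanical
check (illustrative only, not part of the patch) would be:

	_Static_assert(0b1111 == 0xf, "MT_S2_NORMAL unchanged");
	_Static_assert(0b0101 == 0x5, "MT_S2_NORMAL_NC unchanged");
	_Static_assert(0b0001 == 0x1, "MT_S2_DEVICE_nGnRE unchanged");
	/* likewise 0b0110 == 6, 0b0101 == 5 and 0b0001 == 1 for the FWB values */
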
From patchwork Mon Oct 28 09:40:12 2024
From: "Aneesh Kumar K.V (Arm)"
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev
Cc: Suzuki K Poulose, Steven Price, Will Deacon, Catalin Marinas, Marc Zyngier, Mark Rutland, Oliver Upton, Joey Gouly, Zenghui Yu, "Aneesh Kumar K.V (Arm)"
Subject: [PATCH 2/4] arm64: cpufeature: add Allocation Tag Access Permission (MTE_PERM) feature
Date: Mon, 28 Oct 2024 15:10:12 +0530
Message-ID: <20241028094014.2596619-3-aneesh.kumar@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241028094014.2596619-1-aneesh.kumar@kernel.org>
References: <20241028094014.2596619-1-aneesh.kumar@kernel.org>

Add a capability that indicates whether the system supports MTE_PERM. KVM
will use it for stage-2 mapping. It is a SYSTEM_FEATURE capability
(ARM64_CPUCAP_SYSTEM_FEATURE) because, if the feature is enabled, all CPUs
must have it.

Signed-off-by: Aneesh Kumar K.V (Arm)
---
 arch/arm64/include/asm/cpufeature.h | 5 +++++
 arch/arm64/include/asm/memory.h     | 2 ++
 arch/arm64/kernel/cpufeature.c      | 9 +++++++++
 arch/arm64/tools/cpucaps            | 1 +
 4 files changed, 17 insertions(+)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 3d261cc123c1..6e6631890021 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -805,6 +805,11 @@ static inline bool system_supports_mte(void)
 	return alternative_has_cap_unlikely(ARM64_MTE);
 }
 
+static inline bool system_supports_notagaccess(void)
+{
+	return alternative_has_cap_unlikely(ARM64_MTE_PERM);
+}
+
 static inline bool system_has_prio_mask_debugging(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_DEBUG_PRIORITY_MASKING) &&
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index ca42f6d87c16..006a649d4ac7 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -179,6 +179,7 @@
  * Memory types for Stage-2 translation
  */
 #define MT_S2_NORMAL                  0b1111
+#define MT_S2_NORMAL_NOTAGACCESS      0b0100
 #define MT_S2_NORMAL_NC               0b0101
 #define MT_S2_DEVICE_nGnRE            0b0001
 
@@ -187,6 +188,7 @@
  * Stage-2 enforces Normal-WB and Device-nGnRE
  */
 #define MT_S2_FWB_NORMAL              0b0110
+#define MT_S2_FWB_NORMAL_NOTAGACCESS  0b1110
 #define MT_S2_FWB_NORMAL_NC           0b0101
 #define MT_S2_FWB_DEVICE_nGnRE        0b0001
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 718728a85430..608e24e313ad 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -305,6 +305,7 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
 
 static const struct arm64_ftr_bits ftr_id_aa64pfr2[] = {
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR2_EL1_FPMR_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR2_EL1_MTEPERM_SHIFT, 4, 0),
 	ARM64_FTR_END,
 };
 
@@ -2742,6 +2743,14 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, MTE, MTE3)
 	},
+	{
+		.desc = "MTE Allocation Tag Access Permission",
+		.capability = ARM64_MTE_PERM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_cpuid_feature,
+		ARM64_CPUID_FIELDS(ID_AA64PFR2_EL1, MTEPERM, IMP)
+	},
+
 #endif /* CONFIG_ARM64_MTE */
 	{
 		.desc = "RCpc load-acquire (LDAPR)",
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index eedb5acc21ed..81c6599d2a95 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -62,6 +62,7 @@ KVM_PROTECTED_MODE
 MISMATCHED_CACHE_TYPE
 MTE
 MTE_ASYMM
+MTE_PERM
 SME
 SME_FA64
 SME2
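
As a usage sketch of the new helper (illustrative only; the real caller is
added by patch 4 of this series in stage2_set_prot_attr()), it is meant to
gate selection of the NoTagAccess stage-2 attribute:

	/* Only pick the NoTagAccess stage-2 memory attribute when every
	 * CPU implements MTE_PERM; otherwise refuse the mapping request. */
	if (system_supports_notagaccess())
		attr = KVM_S2_MEMATTR(pgt, NORMAL_NOTAGACCESS);
	else
		return -EINVAL;
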
From patchwork Mon Oct 28 09:40:13 2024
From: "Aneesh Kumar K.V (Arm)"
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev
Cc: Suzuki K Poulose, Steven Price, Will Deacon, Catalin Marinas, Marc Zyngier, Mark Rutland, Oliver Upton, Joey Gouly, Zenghui Yu, "Aneesh Kumar K.V (Arm)"
Subject: [PATCH 3/4] arm64: mte: update code comments
Date: Mon, 28 Oct 2024 15:10:13 +0530
Message-ID: <20241028094014.2596619-4-aneesh.kumar@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241028094014.2596619-1-aneesh.kumar@kernel.org>
References: <20241028094014.2596619-1-aneesh.kumar@kernel.org>

commit d77e59a8fccd ("arm64: mte: Lock a page for MTE tag initialisation")
updated the locking such that the kernel now allows VM_SHARED mappings with
MTE. Update the code comments to reflect this.

Signed-off-by: Aneesh Kumar K.V (Arm)
---
 arch/arm64/kvm/mmu.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index a509b63bd4dd..b5824e93cee0 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1390,11 +1390,8 @@ static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
  * able to see the page's tags and therefore they must be initialised first. If
  * PG_mte_tagged is set, tags have already been initialised.
  *
- * The race in the test/set of the PG_mte_tagged flag is handled by:
- * - preventing VM_SHARED mappings in a memslot with MTE preventing two VMs
- *   racing to santise the same page
- * - mmap_lock protects between a VM faulting a page in and the VMM performing
- *   an mprotect() to add VM_MTE
+ * The race in the test/set of the PG_mte_tagged flag is handled by
+ * using PG_mte_lock and PG_mte_tagged together.
  */
 static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 			      unsigned long size)
@@ -1646,7 +1643,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	}
 
 	if (!fault_is_perm && !device && kvm_has_mte(kvm)) {
-		/* Check the VMM hasn't introduced a new disallowed VMA */
+		/*
+		 * not a permission fault implies a translation fault which
+		 * means mapping the page for the first time
+		 */
 		if (mte_allowed) {
 			sanitise_mte_tags(kvm, pfn, vma_pagesize);
 		} else {
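
For context, the PG_mte_lock/PG_mte_tagged protocol the new comment refers to
works roughly as below (a sketch based on the helpers introduced by commit
d77e59a8fccd and the existing sanitise_mte_tags() logic; not part of this
patch):

	static void sanitise_page_tags_sketch(struct page *page)
	{
		/* try_page_mte_tagging() takes PG_mte_lock and returns true
		 * only for the single context that must initialise the tags;
		 * other callers wait until PG_mte_tagged has been set. */
		if (try_page_mte_tagging(page)) {
			mte_clear_page_tags(page_address(page));
			/* Publish the result: tags are now initialised. */
			set_page_mte_tagged(page);
		}
	}
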
From patchwork Mon Oct 28 09:40:14 2024
From: "Aneesh Kumar K.V (Arm)"
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev
Cc: Suzuki K Poulose, Steven Price, Will Deacon, Catalin Marinas, Marc Zyngier, Mark Rutland, Oliver Upton, Joey Gouly, Zenghui Yu, "Aneesh Kumar K.V (Arm)"
Subject: [PATCH 4/4] arm64: mte: Use stage-2 NoTagAccess memory attribute if supported
Date: Mon, 28 Oct 2024 15:10:14 +0530
Message-ID: <20241028094014.2596619-5-aneesh.kumar@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241028094014.2596619-1-aneesh.kumar@kernel.org>
References: <20241028094014.2596619-1-aneesh.kumar@kernel.org>

Currently, the kernel refuses to start a guest if MTE is enabled for the VM
and the guest RAM is backed by memory which doesn't support allocation tags.
Update this so that the kernel instead maps pages from VMAs for which MTE is
not allowed using the NoTagAccess stage-2 memory attribute. A guest access to
the allocation tags of such pages is forwarded to the VMM, which can then
decide to kill the guest or to remap the pages so that allocation tag storage
is available.

NOTE: We could also use KVM_EXIT_MEMORY_FAULT for this. I chose to add a new
exit type because this is an arm64-specific exit.

Signed-off-by: Aneesh Kumar K.V (Arm)
---
 arch/arm64/include/asm/kvm_emulate.h |  5 +++++
 arch/arm64/include/asm/kvm_pgtable.h |  1 +
 arch/arm64/kvm/hyp/pgtable.c         | 16 +++++++++++++---
 arch/arm64/kvm/mmu.c                 | 28 ++++++++++++++++++++------
 include/uapi/linux/kvm.h             |  7 +++++++
 5 files changed, 48 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index a601a9305b10..fa0149a0606a 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -373,6 +373,11 @@ static inline bool kvm_vcpu_trap_is_exec_fault(const struct kvm_vcpu *vcpu)
 	return kvm_vcpu_trap_is_iabt(vcpu) && !kvm_vcpu_abt_iss1tw(vcpu);
 }
 
+static inline bool kvm_vcpu_trap_is_tagaccess(const struct kvm_vcpu *vcpu)
+{
+	return !!(ESR_ELx_ISS2(kvm_vcpu_get_esr(vcpu)) & ESR_ELx_TagAccess);
+}
+
 static __always_inline u8 kvm_vcpu_trap_get_fault(const struct kvm_vcpu *vcpu)
 {
 	return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC;
diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 03f4c3d7839c..5657ac1998ad 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -252,6 +252,7 @@ enum kvm_pgtable_prot {
 	KVM_PGTABLE_PROT_DEVICE			= BIT(3),
 	KVM_PGTABLE_PROT_NORMAL_NC		= BIT(4),
+	KVM_PGTABLE_PROT_NORMAL_NOTAGACCESS	= BIT(5),
 
 	KVM_PGTABLE_PROT_SW0			= BIT(55),
 	KVM_PGTABLE_PROT_SW1			= BIT(56),
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index b11bcebac908..bc0d9f08c49a 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -677,9 +677,11 @@ static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot p
 {
 	kvm_pte_t attr;
 	u32 sh = KVM_PTE_LEAF_ATTR_LO_S2_SH_IS;
+	unsigned long prot_mask = KVM_PGTABLE_PROT_DEVICE |
+				  KVM_PGTABLE_PROT_NORMAL_NC |
+				  KVM_PGTABLE_PROT_NORMAL_NOTAGACCESS;
 
-	switch (prot & (KVM_PGTABLE_PROT_DEVICE |
-			KVM_PGTABLE_PROT_NORMAL_NC)) {
+	switch (prot & prot_mask) {
 	case KVM_PGTABLE_PROT_DEVICE | KVM_PGTABLE_PROT_NORMAL_NC:
 		return -EINVAL;
 	case KVM_PGTABLE_PROT_DEVICE:
@@ -692,6 +694,12 @@ static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot p
 			return -EINVAL;
 		attr = KVM_S2_MEMATTR(pgt, NORMAL_NC);
 		break;
+	case KVM_PGTABLE_PROT_NORMAL_NOTAGACCESS:
+		if (system_supports_notagaccess())
+			attr = KVM_S2_MEMATTR(pgt, NORMAL_NOTAGACCESS);
+		else
+			return -EINVAL;
+		break;
 	default:
 		attr = KVM_S2_MEMATTR(pgt, NORMAL);
 	}
@@ -872,7 +880,9 @@ static void stage2_unmap_put_pte(const struct kvm_pgtable_visit_ctx *ctx,
 static bool stage2_pte_cacheable(struct kvm_pgtable *pgt, kvm_pte_t pte)
 {
 	u64 memattr = pte & KVM_PTE_LEAF_ATTR_LO_S2_MEMATTR;
-	return kvm_pte_valid(pte) && memattr == KVM_S2_MEMATTR(pgt, NORMAL);
+	return kvm_pte_valid(pte) &&
+	       ((memattr == KVM_S2_MEMATTR(pgt, NORMAL)) ||
+		(memattr == KVM_S2_MEMATTR(pgt, NORMAL_NOTAGACCESS)));
 }
 
 static bool stage2_pte_executable(kvm_pte_t pte)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index b5824e93cee0..e56c6996332e 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1647,12 +1647,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		 * not a permission fault implies a translation fault which
 		 * means mapping the page for the first time
 		 */
-		if (mte_allowed) {
+		if (mte_allowed)
 			sanitise_mte_tags(kvm, pfn, vma_pagesize);
-		} else {
-			ret = -EFAULT;
-			goto out_unlock;
-		}
+		else
+			prot |= KVM_PGTABLE_PROT_NORMAL_NOTAGACCESS;
 	}
 
 	if (writable)
@@ -1721,6 +1719,15 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 	kvm_set_pfn_accessed(kvm_pte_to_pfn(pte));
 }
 
+static inline void kvm_prepare_notagaccess_exit(struct kvm_vcpu *vcpu,
+						gpa_t gpa, gpa_t size)
+{
+	vcpu->run->exit_reason = KVM_EXIT_ARM_NOTAG_ACCESS;
+	vcpu->run->notag_access.flags = 0;
+	vcpu->run->notag_access.gpa = gpa;
+	vcpu->run->notag_access.size = size;
+}
+
 /**
  * kvm_handle_guest_abort - handles all 2nd stage aborts
  * @vcpu:	the VCPU pointer
@@ -1833,6 +1840,14 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 
 	gfn = ipa >> PAGE_SHIFT;
 	memslot = gfn_to_memslot(vcpu->kvm, gfn);
+
+	if (kvm_vcpu_trap_is_tagaccess(vcpu)) {
+		/* exit to host and handle the error */
+		kvm_prepare_notagaccess_exit(vcpu, gfn << PAGE_SHIFT, PAGE_SIZE);
+		ret = 0;
+		goto out;
+	}
+
 	hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
 	write_fault = kvm_is_write_fault(vcpu);
 	if (kvm_is_error_hva(hva) || (write_fault && !writable)) {
@@ -2145,7 +2160,8 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 		if (!vma)
 			break;
 
-		if (kvm_has_mte(kvm) && !kvm_vma_mte_allowed(vma)) {
+		if (kvm_has_mte(kvm) && !system_supports_notagaccess() &&
+		    !kvm_vma_mte_allowed(vma)) {
 			ret = -EINVAL;
 			break;
 		}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 637efc055145..a8268a164c4d 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -178,6 +178,7 @@ struct kvm_xen_exit {
 #define KVM_EXIT_NOTIFY           37
 #define KVM_EXIT_LOONGARCH_IOCSR  38
 #define KVM_EXIT_MEMORY_FAULT     39
+#define KVM_EXIT_ARM_NOTAG_ACCESS 40
 
 /* For KVM_EXIT_INTERNAL_ERROR */
 /* Emulate instruction failed. */
@@ -446,6 +447,12 @@ struct kvm_run {
 			__u64 gpa;
 			__u64 size;
 		} memory_fault;
+		/* KVM_EXIT_ARM_NOTAG_ACCESS */
+		struct {
+			__u64 flags;
+			__u64 gpa;
+			__u64 size;
+		} notag_access;
 		/* Fix the size of the union. */
 		char padding[256];
 	};
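
To illustrate the intended VMM side of the new exit (a hypothetical
user-space sketch: only KVM_EXIT_ARM_NOTAG_ACCESS and the notag_access fields
come from the uapi change above; the handler itself is illustrative):

	#include <stdio.h>
	#include <linux/kvm.h>

	/* Called from the VMM's vcpu loop after ioctl(vcpu_fd, KVM_RUN, 0). */
	static int handle_notag_access(struct kvm_run *run)
	{
		if (run->exit_reason != KVM_EXIT_ARM_NOTAG_ACCESS)
			return 0;
		/*
		 * The guest made a tagged access to a GPA whose backing memory
		 * cannot hold allocation tags. The VMM can remap the range with
		 * tag-capable memory and resume the guest, or give up.
		 */
		fprintf(stderr, "tag access to untagged memory: gpa=0x%llx size=0x%llx\n",
			(unsigned long long)run->notag_access.gpa,
			(unsigned long long)run->notag_access.size);
		return -1;	/* treated as fatal in this sketch */
	}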