From patchwork Mon Dec 2 15:47:28 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13890971
Date:
Mon, 2 Dec 2024 15:47:28 +0000
In-Reply-To: <20241202154742.3611749-1-tabba@google.com>
References: <20241202154742.3611749-1-tabba@google.com>
Message-ID: <20241202154742.3611749-2-tabba@google.com>
Subject: [PATCH v4 01/14] KVM: arm64: Consolidate allowed and restricted VM feature checks
From: Fuad Tabba
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org,
 will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org,
 qperret@google.com, kristina.martsenko@arm.com, tabba@google.com

The definitions for protected-guest features that are allowed, and those
that are allowed with restrictions, are both based on feature registers,
yet they were defined and checked separately even though they are handled
the same way. This could result in missing checks for certain features,
e.g., pointer authentication, causing traps for allowed features.

Consolidate the definitions into one, and use the new definition to
construct the guest's view of the feature registers, for consistency.

Fixes: 6c30bfb18d0b ("KVM: arm64: Add handlers for protected VM System Registers")
Reported-by: Mostafa Saleh
Signed-off-by: Fuad Tabba
---
Note: This patch ends up being a no-op, since none of its changes survive
the series. It is included because it makes the rest of the series flow
more smoothly.
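The consolidated mask semantics — each field encodes the highest allowed *unsigned* value, with a full-width field meaning "any supported value" — can be sketched in plain C. This is only an illustration: `GENMASK64`/`FIELD_PREP`/`FIELD_GET` below are simplified local stand-ins for the kernel helpers, and the field layout is invented for the example, not the real ID_AA64PFR0_EL1 encoding.

```c
#include <stdint.h>

/* Simplified stand-ins for the kernel's bitfield helpers. */
#define GENMASK64(h, l)  (((~0ULL) >> (63 - (h))) & ((~0ULL) << (l)))
#define FIELD_PREP(mask, val) (((uint64_t)(val) << __builtin_ctzll(mask)) & (mask))
#define FIELD_GET(mask, reg)  (((reg) & (mask)) >> __builtin_ctzll(mask))

#define FEAT_FP   GENMASK64(3, 0)   /* full-width: any supported value allowed */
#define FEAT_RAS  GENMASK64(7, 4)   /* restricted to at most v1 (value 1) */

/*
 * One consolidated ALLOW mask: a full-width field means "allowed as-is",
 * while a FIELD_PREP-encoded field means "at most this unsigned value".
 */
#define PVM_ALLOW (FEAT_FP | FIELD_PREP(FEAT_RAS, 1))

/* Build the guest view: clamp each unsigned field to the encoded maximum. */
static uint64_t guest_view(uint64_t host_reg)
{
	uint64_t val = 0;
	uint64_t host, max;

	host = FIELD_GET(FEAT_FP, host_reg);
	max  = FIELD_GET(FEAT_FP, PVM_ALLOW);
	val |= FIELD_PREP(FEAT_FP, host < max ? host : max);

	host = FIELD_GET(FEAT_RAS, host_reg);
	max  = FIELD_GET(FEAT_RAS, PVM_ALLOW);
	val |= FIELD_PREP(FEAT_RAS, host < max ? host : max);

	return val;
}
```

A single mask can therefore both gate which fields may be exposed and clamp unsigned fields (such as RAS here) to a maximum, which is what allows the separate `RESTRICT_UNSIGNED` definitions to be dropped.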
---
 .../arm64/kvm/hyp/include/nvhe/fixed_config.h | 55 +++++++------------
 arch/arm64/kvm/hyp/nvhe/pkvm.c                |  8 +--
 arch/arm64/kvm/hyp/nvhe/sys_regs.c            |  6 +-
 3 files changed, 26 insertions(+), 43 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h b/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
index f957890c7e38..d1e59b88ff66 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
@@ -14,11 +14,8 @@
  * guest virtual machines, depending on the mode KVM is running in and on the
  * type of guest that is running.
  *
- * The ALLOW masks represent a bitmask of feature fields that are allowed
- * without any restrictions as long as they are supported by the system.
- *
- * The RESTRICT_UNSIGNED masks, if present, represent unsigned fields for
- * features that are restricted to support at most the specified feature.
+ * Each field in the masks represents the highest supported *unsigned* value for
+ * the feature, if supported by the system.
  *
  * If a feature field is not present in either, than it is not supported.
  *
@@ -34,16 +31,7 @@
  * - Floating-point and Advanced SIMD
  * - Data Independent Timing
  * - Spectre/Meltdown Mitigation
- */
-#define PVM_ID_AA64PFR0_ALLOW (\
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_FP) | \
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AdvSIMD) | \
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_DIT) | \
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | \
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3) \
-	)
-
-/*
+ *
  * Restrict to the following *unsigned* features for protected VMs:
  * - AArch64 guests only (no support for AArch32 guests):
  *	AArch32 adds complexity in trap handling, emulation, condition codes,
@@ -51,7 +39,12 @@
  * - RAS (v1)
  *	Supported by KVM
  */
-#define PVM_ID_AA64PFR0_RESTRICT_UNSIGNED (\
+#define PVM_ID_AA64PFR0_ALLOW (\
+	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_FP) | \
+	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AdvSIMD) | \
+	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_DIT) | \
+	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | \
+	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3) | \
 	SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL0, IMP) | \
 	SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL1, IMP) | \
 	SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL2, IMP) | \
@@ -77,20 +70,16 @@
  * - Distinction between Secure and Non-secure Memory
  * - Mixed-endian at EL0 only
  * - Non-context synchronizing exception entry and exit
+ *
+ * Restrict to the following *unsigned* features for protected VMs:
+ * - 40-bit IPA
+ * - 16-bit ASID
  */
 #define PVM_ID_AA64MMFR0_ALLOW (\
 	ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_BIGEND) | \
 	ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_SNSMEM) | \
 	ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_BIGENDEL0) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_EXS) \
-	)
-
-/*
- * Restrict to the following *unsigned* features for protected VMs:
- * - 40-bit IPA
- * - 16-bit ASID
- */
-#define PVM_ID_AA64MMFR0_RESTRICT_UNSIGNED (\
+	ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_EXS) | \
 	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_PARANGE), ID_AA64MMFR0_EL1_PARANGE_40) | \
 	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_ASIDBITS), ID_AA64MMFR0_EL1_ASIDBITS_16) \
 	)
@@ -185,15 +174,6 @@
 	)
 
 /* Restrict pointer authentication to the basic version. */
-#define PVM_ID_AA64ISAR1_RESTRICT_UNSIGNED (\
-	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA), ID_AA64ISAR1_EL1_APA_PAuth) | \
-	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API), ID_AA64ISAR1_EL1_API_PAuth) \
-	)
-
-#define PVM_ID_AA64ISAR2_RESTRICT_UNSIGNED (\
-	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3), ID_AA64ISAR2_EL1_APA3_PAuth) \
-	)
-
 #define PVM_ID_AA64ISAR1_ALLOW (\
 	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_DPB) | \
 	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_JSCVT) | \
@@ -206,13 +186,16 @@
 	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_SPECRES) | \
 	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_BF16) | \
 	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_DGH) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_I8MM) \
+	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_I8MM) | \
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA), ID_AA64ISAR1_EL1_APA_PAuth) | \
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API), ID_AA64ISAR1_EL1_API_PAuth) \
 	)
 
 #define PVM_ID_AA64ISAR2_ALLOW (\
 	ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_ATS1A)| \
 	ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_MOPS) \
+	ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_MOPS) | \
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3), ID_AA64ISAR2_EL1_APA3_PAuth) \
 	)
 
 u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 01616c39a810..76a70fee7647 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -36,9 +36,9 @@ static void pvm_init_traps_aa64pfr0(struct kvm_vcpu *vcpu)
 
 	/* Protected KVM does not support AArch32 guests. */
 	BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0),
-		PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) != ID_AA64PFR0_EL1_EL0_IMP);
+		PVM_ID_AA64PFR0_ALLOW) != ID_AA64PFR0_EL1_EL0_IMP);
 	BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL1),
-		PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) != ID_AA64PFR0_EL1_EL1_IMP);
+		PVM_ID_AA64PFR0_ALLOW) != ID_AA64PFR0_EL1_EL1_IMP);
 
 	/*
 	 * Linux guests assume support for floating-point and Advanced SIMD. Do
@@ -362,8 +362,8 @@ static void pkvm_init_features_from_host(struct pkvm_hyp_vm *hyp_vm, const struc
 	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE), PVM_ID_AA64PFR0_ALLOW))
 		set_bit(KVM_ARM_VCPU_SVE, allowed_features);
 
-	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API), PVM_ID_AA64ISAR1_RESTRICT_UNSIGNED) &&
-	    FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA), PVM_ID_AA64ISAR1_RESTRICT_UNSIGNED))
+	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API), PVM_ID_AA64ISAR1_ALLOW) &&
+	    FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA), PVM_ID_AA64ISAR1_ALLOW))
 		set_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, allowed_features);
 
 	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI), PVM_ID_AA64ISAR1_ALLOW) &&
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index 2860548d4250..59fb2f056177 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -89,7 +89,7 @@ static u64 get_pvm_id_aa64pfr0(const struct kvm_vcpu *vcpu)
 	u64 allow_mask = PVM_ID_AA64PFR0_ALLOW;
 
 	set_mask |= get_restricted_features_unsigned(id_aa64pfr0_el1_sys_val,
-		PVM_ID_AA64PFR0_RESTRICT_UNSIGNED);
+		PVM_ID_AA64PFR0_ALLOW);
 
 	return (id_aa64pfr0_el1_sys_val & allow_mask) | set_mask;
 }
@@ -189,7 +189,7 @@ static u64 get_pvm_id_aa64mmfr0(const struct kvm_vcpu *vcpu)
 	u64 set_mask;
 
 	set_mask = get_restricted_features_unsigned(id_aa64mmfr0_el1_sys_val,
-		PVM_ID_AA64MMFR0_RESTRICT_UNSIGNED);
+		PVM_ID_AA64MMFR0_ALLOW);
 
 	return (id_aa64mmfr0_el1_sys_val & PVM_ID_AA64MMFR0_ALLOW) | set_mask;
 }
@@ -276,7 +276,7 @@ static bool pvm_access_id_aarch32(struct kvm_vcpu *vcpu,
 	 * of AArch32 feature id registers.
 	 */
 	BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL1),
-		PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) > ID_AA64PFR0_EL1_EL1_IMP);
+		PVM_ID_AA64PFR0_ALLOW) > ID_AA64PFR0_EL1_EL1_IMP);
 
 	return pvm_access_raz_wi(vcpu, p, r);
 }

From patchwork Mon Dec 2 15:47:29 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13890972
Date: Mon, 2 Dec 2024 15:47:29 +0000
In-Reply-To: <20241202154742.3611749-1-tabba@google.com>
References: <20241202154742.3611749-1-tabba@google.com>
Message-ID: <20241202154742.3611749-3-tabba@google.com>
Subject: [PATCH v4 02/14] KVM: arm64: Group setting traps for protected VMs by control register
From: Fuad Tabba
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org,
 will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org,
 qperret@google.com, kristina.martsenko@arm.com, tabba@google.com

Group the setting of protected VM traps by control register rather than
by feature ID register, since some trap values (e.g., PAuth) depend on
more than one feature ID register.
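The restructuring described above can be sketched in a few lines of C: one init function per trap register reads every feature ID register it depends on, computes the final value locally, and writes it back once. All names and bit positions below are invented for the illustration; they are not the real HCR_EL2 or ID-register encodings.

```c
#include <stdint.h>

/* Hypothetical trap bits, stand-ins for the real HCR_EL2 encodings. */
#define HCR_API   (1ULL << 41)
#define HCR_APK   (1ULL << 40)
#define HCR_TERR  (1ULL << 15)

struct vcpu {
	uint64_t hcr_el2;
	uint64_t id_isar1;   /* stands in for ID_AA64ISAR1_EL1 (APA/API) */
	uint64_t id_isar2;   /* stands in for ID_AA64ISAR2_EL1 (APA3) */
	uint64_t id_pfr0;    /* stands in for ID_AA64PFR0_EL1 (RAS) */
};

/* PAuth support depends on more than one feature ID register. */
static int has_pauth(const struct vcpu *v)
{
	return (v->id_isar1 & 0xff) || (v->id_isar2 & 0xf);
}

/* One function per trap register: gather inputs, write back once. */
static void init_traps_hcr(struct vcpu *v)
{
	uint64_t val = v->hcr_el2;

	if (!has_pauth(v))
		val |= HCR_API | HCR_APK;   /* trap PAuth if not exposed */
	if (!(v->id_pfr0 & 0xf))
		val |= HCR_TERR;            /* trap error records without RAS */

	v->hcr_el2 = val;                   /* single write-back */
}
```

Grouping by destination register keeps each read-modify-write of a trap register in one place, instead of scattering partial updates of the same register across several per-ID-register functions.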
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/nvhe/pkvm.c | 317 +++++++++++++++------------------
 1 file changed, 144 insertions(+), 173 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 76a70fee7647..1744574e79b2 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -23,233 +23,204 @@ unsigned int kvm_arm_vmid_bits;
 unsigned int kvm_host_sve_max_vl;
 
-/*
- * Set trap register values based on features in ID_AA64PFR0.
- */
-static void pvm_init_traps_aa64pfr0(struct kvm_vcpu *vcpu)
+static void pkvm_vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 {
-	const u64 feature_ids = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR0_EL1);
-	u64 hcr_set = HCR_RW;
-	u64 hcr_clear = 0;
-	u64 cptr_set = 0;
-	u64 cptr_clear = 0;
-
-	/* Protected KVM does not support AArch32 guests. */
-	BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0),
-		PVM_ID_AA64PFR0_ALLOW) != ID_AA64PFR0_EL1_EL0_IMP);
-	BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL1),
-		PVM_ID_AA64PFR0_ALLOW) != ID_AA64PFR0_EL1_EL1_IMP);
-
-	/*
-	 * Linux guests assume support for floating-point and Advanced SIMD. Do
-	 * not change the trapping behavior for these from the KVM default.
-	 */
-	BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_FP),
-		PVM_ID_AA64PFR0_ALLOW));
-	BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AdvSIMD),
-		PVM_ID_AA64PFR0_ALLOW));
+	vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
 
 	if (has_hvhe())
-		hcr_set |= HCR_E2H;
+		vcpu->arch.hcr_el2 |= HCR_E2H;
 
-	/* Trap RAS unless all current versions are supported */
-	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_RAS), feature_ids) <
-	    ID_AA64PFR0_EL1_RAS_V1P1) {
-		hcr_set |= HCR_TERR | HCR_TEA;
-		hcr_clear |= HCR_FIEN;
+	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) {
+		/* route synchronous external abort exceptions to EL2 */
+		vcpu->arch.hcr_el2 |= HCR_TEA;
+		/* trap error record accesses */
+		vcpu->arch.hcr_el2 |= HCR_TERR;
 	}
 
-	/* Trap AMU */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU), feature_ids)) {
-		hcr_clear |= HCR_AMVOFFEN;
-		cptr_set |= CPTR_EL2_TAM;
-	}
+	if (cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
+		vcpu->arch.hcr_el2 |= HCR_FWB;
 
-	/* Trap SVE */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE), feature_ids)) {
-		if (has_hvhe())
-			cptr_clear |= CPACR_ELx_ZEN;
-		else
-			cptr_set |= CPTR_EL2_TZ;
-	}
+	if (cpus_have_final_cap(ARM64_HAS_EVT) &&
+	    !cpus_have_final_cap(ARM64_MISMATCHED_CACHE_TYPE))
+		vcpu->arch.hcr_el2 |= HCR_TID4;
+	else
+		vcpu->arch.hcr_el2 |= HCR_TID2;
 
-	vcpu->arch.hcr_el2 |= hcr_set;
-	vcpu->arch.hcr_el2 &= ~hcr_clear;
-	vcpu->arch.cptr_el2 |= cptr_set;
-	vcpu->arch.cptr_el2 &= ~cptr_clear;
+	if (vcpu_has_ptrauth(vcpu))
+		vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
 }
 
-/*
- * Set trap register values based on features in ID_AA64PFR1.
- */
-static void pvm_init_traps_aa64pfr1(struct kvm_vcpu *vcpu)
+static void pvm_init_traps_hcr(struct kvm_vcpu *vcpu)
 {
-	const u64 feature_ids = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR1_EL1);
-	u64 hcr_set = 0;
-	u64 hcr_clear = 0;
+	const u64 id_aa64pfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR0_EL1);
+	const u64 id_aa64pfr1 = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR1_EL1);
+	const u64 id_aa64mmfr1 = pvm_read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1);
+	u64 val = vcpu->arch.hcr_el2;
+
+	/* No support for AArch32. */
+	val |= HCR_RW;
+
+	if (has_hvhe())
+		val |= HCR_E2H;
+
+	/*
+	 * Always trap:
+	 * - Feature id registers: to control features exposed to guests
+	 * - Implementation-defined features
+	 */
+	val |= HCR_TACR | HCR_TIDCP | HCR_TID3 | HCR_TID1;
+
+	/* Trap RAS unless all current versions are supported */
+	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_RAS), id_aa64pfr0) <
+	    ID_AA64PFR0_EL1_RAS_V1P1) {
+		val |= HCR_TERR | HCR_TEA;
+		val &= ~(HCR_FIEN);
+	}
+
+	/* Trap AMU */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU), id_aa64pfr0))
+		val &= ~(HCR_AMVOFFEN);
 
 	/* Memory Tagging: Trap and Treat as Untagged if not supported. */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE), feature_ids)) {
-		hcr_set |= HCR_TID5;
-		hcr_clear |= HCR_DCT | HCR_ATA;
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE), id_aa64pfr1)) {
+		val |= HCR_TID5;
+		val &= ~(HCR_DCT | HCR_ATA);
 	}
 
-	vcpu->arch.hcr_el2 |= hcr_set;
-	vcpu->arch.hcr_el2 &= ~hcr_clear;
+	/* Trap LOR */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_LO), id_aa64mmfr1))
+		val |= HCR_TLOR;
+
+	vcpu->arch.hcr_el2 = val;
 }
 
-/*
- * Set trap register values based on features in ID_AA64DFR0.
- */
-static void pvm_init_traps_aa64dfr0(struct kvm_vcpu *vcpu)
+static void pvm_init_traps_cptr(struct kvm_vcpu *vcpu)
 {
-	const u64 feature_ids = pvm_read_id_reg(vcpu, SYS_ID_AA64DFR0_EL1);
-	u64 mdcr_set = 0;
-	u64 mdcr_clear = 0;
-	u64 cptr_set = 0;
+	const u64 id_aa64pfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR0_EL1);
+	const u64 id_aa64pfr1 = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR1_EL1);
+	const u64 id_aa64dfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64DFR0_EL1);
+	u64 val = vcpu->arch.cptr_el2;
 
-	/* Trap/constrain PMU */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), feature_ids)) {
-		mdcr_set |= MDCR_EL2_TPM | MDCR_EL2_TPMCR;
-		mdcr_clear |= MDCR_EL2_HPME | MDCR_EL2_MTPME |
-			      MDCR_EL2_HPMN_MASK;
+	if (!has_hvhe()) {
+		val |= CPTR_NVHE_EL2_RES1;
+		val &= ~(CPTR_NVHE_EL2_RES0);
 	}
 
-	/* Trap Debug */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), feature_ids))
-		mdcr_set |= MDCR_EL2_TDRA | MDCR_EL2_TDA | MDCR_EL2_TDE;
-
-	/* Trap OS Double Lock */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DoubleLock), feature_ids))
-		mdcr_set |= MDCR_EL2_TDOSA;
+	/* Trap AMU */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU), id_aa64pfr0))
+		val |= CPTR_EL2_TAM;
 
-	/* Trap SPE */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer), feature_ids)) {
-		mdcr_set |= MDCR_EL2_TPMS;
-		mdcr_clear |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
+	/* Trap SVE */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE), id_aa64pfr0)) {
+		if (has_hvhe())
+			val &= ~(CPACR_ELx_ZEN);
+		else
+			val |= CPTR_EL2_TZ;
 	}
 
-	/* Trap Trace Filter */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_TraceFilt), feature_ids))
-		mdcr_set |= MDCR_EL2_TTRF;
+	/* No SME support in KVM. */
+	BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME), id_aa64pfr1));
+	if (has_hvhe())
+		val &= ~(CPACR_ELx_SMEN);
+	else
+		val |= CPTR_EL2_TSM;
 
 	/* Trap Trace */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_TraceVer), feature_ids)) {
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_TraceVer), id_aa64dfr0)) {
 		if (has_hvhe())
-			cptr_set |= CPACR_EL1_TTA;
+			val |= CPACR_EL1_TTA;
 		else
-			cptr_set |= CPTR_EL2_TTA;
+			val |= CPTR_EL2_TTA;
 	}
 
-	/* Trap External Trace */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_ExtTrcBuff), feature_ids))
-		mdcr_clear |= MDCR_EL2_E2TB_MASK << MDCR_EL2_E2TB_SHIFT;
-
-	vcpu->arch.mdcr_el2 |= mdcr_set;
-	vcpu->arch.mdcr_el2 &= ~mdcr_clear;
-	vcpu->arch.cptr_el2 |= cptr_set;
-}
-
-/*
- * Set trap register values based on features in ID_AA64MMFR0.
- */
-static void pvm_init_traps_aa64mmfr0(struct kvm_vcpu *vcpu)
-{
-	const u64 feature_ids = pvm_read_id_reg(vcpu, SYS_ID_AA64MMFR0_EL1);
-	u64 mdcr_set = 0;
-
-	/* Trap Debug Communications Channel registers */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_FGT), feature_ids))
-		mdcr_set |= MDCR_EL2_TDCC;
-
-	vcpu->arch.mdcr_el2 |= mdcr_set;
+	vcpu->arch.cptr_el2 = val;
 }
 
-/*
- * Set trap register values based on features in ID_AA64MMFR1.
- */
-static void pvm_init_traps_aa64mmfr1(struct kvm_vcpu *vcpu)
-{
-	const u64 feature_ids = pvm_read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1);
-	u64 hcr_set = 0;
-
-	/* Trap LOR */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_LO), feature_ids))
-		hcr_set |= HCR_TLOR;
-
-	vcpu->arch.hcr_el2 |= hcr_set;
-}
-
-/*
- * Set baseline trap register values.
- */
-static void pvm_init_trap_regs(struct kvm_vcpu *vcpu)
+static void pvm_init_traps_mdcr(struct kvm_vcpu *vcpu)
 {
-	const u64 hcr_trap_feat_regs = HCR_TID3;
-	const u64 hcr_trap_impdef = HCR_TACR | HCR_TIDCP | HCR_TID1;
-
-	/*
-	 * Always trap:
-	 * - Feature id registers: to control features exposed to guests
-	 * - Implementation-defined features
-	 */
-	vcpu->arch.hcr_el2 |= hcr_trap_feat_regs | hcr_trap_impdef;
+	const u64 id_aa64dfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64DFR0_EL1);
+	const u64 id_aa64mmfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64MMFR0_EL1);
+	u64 val = vcpu->arch.mdcr_el2;
 
-	/* Clear res0 and set res1 bits to trap potential new features. */
-	vcpu->arch.hcr_el2 &= ~(HCR_RES0);
-	vcpu->arch.mdcr_el2 &= ~(MDCR_EL2_RES0);
-	if (!has_hvhe()) {
-		vcpu->arch.cptr_el2 |= CPTR_NVHE_EL2_RES1;
-		vcpu->arch.cptr_el2 &= ~(CPTR_NVHE_EL2_RES0);
+	/* Trap/constrain PMU */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), id_aa64dfr0)) {
+		val |= MDCR_EL2_TPM | MDCR_EL2_TPMCR;
+		val &= ~(MDCR_EL2_HPME | MDCR_EL2_MTPME | MDCR_EL2_HPMN_MASK);
 	}
-}
 
-static void pkvm_vcpu_reset_hcr(struct kvm_vcpu *vcpu)
-{
-	vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
+	/* Trap Debug */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), id_aa64dfr0))
+		val |= MDCR_EL2_TDRA | MDCR_EL2_TDA;
 
-	if (has_hvhe())
-		vcpu->arch.hcr_el2 |= HCR_E2H;
+	/* Trap OS Double Lock */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DoubleLock), id_aa64dfr0))
+		val |= MDCR_EL2_TDOSA;
 
-	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) {
-		/* route synchronous external abort exceptions to EL2 */
-		vcpu->arch.hcr_el2 |= HCR_TEA;
-		/* trap error record accesses */
-		vcpu->arch.hcr_el2 |= HCR_TERR;
+	/* Trap SPE */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer), id_aa64dfr0)) {
+		val |= MDCR_EL2_TPMS;
+		val &= ~(MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT);
 	}
 
-	if (cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
-		vcpu->arch.hcr_el2 |= HCR_FWB;
+	/* Trap Trace Filter */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_TraceFilt), id_aa64dfr0))
+		val |= MDCR_EL2_TTRF;
 
-	if (cpus_have_final_cap(ARM64_HAS_EVT) &&
-	    !cpus_have_final_cap(ARM64_MISMATCHED_CACHE_TYPE))
-		vcpu->arch.hcr_el2 |= HCR_TID4;
-	else
-		vcpu->arch.hcr_el2 |= HCR_TID2;
+	/* Trap External Trace */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_ExtTrcBuff), id_aa64dfr0))
+		val |= MDCR_EL2_E2TB_MASK << MDCR_EL2_E2TB_SHIFT;
 
-	if (vcpu_has_ptrauth(vcpu))
-		vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
+	/* Trap Debug Communications Channel registers */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_FGT), id_aa64mmfr0))
+		val |= MDCR_EL2_TDCC;
+
+	vcpu->arch.mdcr_el2 = val;
 }
 
 /*
  * Initialize trap register values in protected mode.
  */
-static void pkvm_vcpu_init_traps(struct kvm_vcpu *vcpu)
+static void pkvm_vcpu_init_traps(struct pkvm_hyp_vcpu *hyp_vcpu)
 {
+	struct kvm_vcpu *vcpu = &hyp_vcpu->vcpu;
+
 	vcpu->arch.cptr_el2 = kvm_get_reset_cptr_el2(vcpu);
 	vcpu->arch.mdcr_el2 = 0;
 
 	pkvm_vcpu_reset_hcr(vcpu);
 
-	if ((!vcpu_is_protected(vcpu)))
+	if ((!pkvm_hyp_vcpu_is_protected(hyp_vcpu)))
 		return;
 
-	pvm_init_trap_regs(vcpu);
-	pvm_init_traps_aa64pfr0(vcpu);
-	pvm_init_traps_aa64pfr1(vcpu);
-	pvm_init_traps_aa64dfr0(vcpu);
-	pvm_init_traps_aa64mmfr0(vcpu);
-	pvm_init_traps_aa64mmfr1(vcpu);
+	/*
+	 * PAuth is allowed if supported by the system and the vcpu.
+	 * Properly checking for PAuth requires checking various fields in
+	 * ID_AA64ISAR1_EL1 and ID_AA64ISAR2_EL1. The way that fixed config
+	 * is controlled now in pKVM does not easily allow that. This will
+	 * change later to follow the changes upstream wrt fixed configuration
+	 * and nested virt.
+	 */
+	BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI),
+				PVM_ID_AA64ISAR1_ALLOW));
+
+	/* Protected KVM does not support AArch32 guests. */
+	BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0),
+			       PVM_ID_AA64PFR0_ALLOW) != ID_AA64PFR0_EL1_EL0_IMP);
+	BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL1),
			       PVM_ID_AA64PFR0_ALLOW) != ID_AA64PFR0_EL1_EL1_IMP);
+
+	/*
+	 * Linux guests assume support for floating-point and Advanced SIMD. Do
+	 * not change the trapping behavior for these from the KVM default.
+	 */
+	BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_FP),
+				PVM_ID_AA64PFR0_ALLOW));
+	BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AdvSIMD),
+				PVM_ID_AA64PFR0_ALLOW));
+
+	pvm_init_traps_hcr(vcpu);
+	pvm_init_traps_cptr(vcpu);
+	pvm_init_traps_mdcr(vcpu);
 }
 
 /*
@@ -448,7 +419,7 @@ static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu,
 	pkvm_vcpu_init_sve(hyp_vcpu, host_vcpu);
 	pkvm_vcpu_init_ptrauth(hyp_vcpu);
-	pkvm_vcpu_init_traps(&hyp_vcpu->vcpu);
+	pkvm_vcpu_init_traps(hyp_vcpu);
 
 done:
 	if (ret)
 		unpin_host_vcpu(host_vcpu);

From patchwork Mon Dec 2 15:47:30 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13890973
Date: Mon, 2 Dec 2024 15:47:30 +0000
In-Reply-To: <20241202154742.3611749-1-tabba@google.com>
References: <20241202154742.3611749-1-tabba@google.com>
Message-ID: <20241202154742.3611749-4-tabba@google.com>
Subject: [PATCH v4 03/14] KVM: arm64: Move checking protected vcpu features to a separate function
From: Fuad Tabba
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org, will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org, qperret@google.com, kristina.martsenko@arm.com, tabba@google.com

At the moment, checks for supported vcpu features for protected VMs
are build-time bugs. In the following patch, they will become runtime
checks based on the vcpu's feature registers. Therefore, consolidate
them into one function that returns an error if it encounters an
unsupported feature.

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/nvhe/pkvm.c | 45 ++++++++++++++++++++++++----------
 1 file changed, 32 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 1744574e79b2..fb733b36c6c1 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -178,20 +178,11 @@ static void pvm_init_traps_mdcr(struct kvm_vcpu *vcpu)
 }
 
 /*
- * Initialize trap register values in protected mode.
+ * Check that cpu features that are neither trapped nor supported are not
+ * enabled for protected VMs.
  */
-static void pkvm_vcpu_init_traps(struct pkvm_hyp_vcpu *hyp_vcpu)
+static int pkvm_check_pvm_cpu_features(struct kvm_vcpu *vcpu)
 {
-	struct kvm_vcpu *vcpu = &hyp_vcpu->vcpu;
-
-	vcpu->arch.cptr_el2 = kvm_get_reset_cptr_el2(vcpu);
-	vcpu->arch.mdcr_el2 = 0;
-
-	pkvm_vcpu_reset_hcr(vcpu);
-
-	if ((!pkvm_hyp_vcpu_is_protected(hyp_vcpu)))
-		return;
-
 	/*
 	 * PAuth is allowed if supported by the system and the vcpu.
 	 * Properly checking for PAuth requires checking various fields in
@@ -218,9 +209,34 @@ static void pkvm_vcpu_init_traps(struct pkvm_hyp_vcpu *hyp_vcpu)
 	BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AdvSIMD),
 				PVM_ID_AA64PFR0_ALLOW));
 
+	return 0;
+}
+
+/*
+ * Initialize trap register values in protected mode.
+ */
+static int pkvm_vcpu_init_traps(struct pkvm_hyp_vcpu *hyp_vcpu)
+{
+	struct kvm_vcpu *vcpu = &hyp_vcpu->vcpu;
+	int ret;
+
+	vcpu->arch.cptr_el2 = kvm_get_reset_cptr_el2(vcpu);
+	vcpu->arch.mdcr_el2 = 0;
+
+	pkvm_vcpu_reset_hcr(vcpu);
+
+	if ((!pkvm_hyp_vcpu_is_protected(hyp_vcpu)))
+		return 0;
+
+	ret = pkvm_check_pvm_cpu_features(vcpu);
+	if (ret)
+		return ret;
+
 	pvm_init_traps_hcr(vcpu);
 	pvm_init_traps_cptr(vcpu);
 	pvm_init_traps_mdcr(vcpu);
+
+	return 0;
 }
 
 /*
@@ -417,9 +433,12 @@ static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu,
 	hyp_vcpu->vcpu.arch.cflags = READ_ONCE(host_vcpu->arch.cflags);
 	hyp_vcpu->vcpu.arch.mp_state.mp_state = KVM_MP_STATE_STOPPED;
 
+	ret = pkvm_vcpu_init_traps(hyp_vcpu);
+	if (ret)
+		goto done;
+
 	pkvm_vcpu_init_sve(hyp_vcpu, host_vcpu);
 	pkvm_vcpu_init_ptrauth(hyp_vcpu);
-	pkvm_vcpu_init_traps(hyp_vcpu);
 done:
 	if (ret)
 		unpin_host_vcpu(host_vcpu);

From patchwork Mon Dec 2 15:47:31 2024
X-Patchwork-Id: 13890975
Date: Mon, 2 Dec 2024 15:47:31 +0000
In-Reply-To: <20241202154742.3611749-1-tabba@google.com>
References: <20241202154742.3611749-1-tabba@google.com>
Message-ID: <20241202154742.3611749-5-tabba@google.com>
Subject: [PATCH v4 04/14] KVM: arm64: Use KVM extension checks for allowed protected VM capabilities
From: Fuad Tabba
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org, will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org, qperret@google.com, kristina.martsenko@arm.com, tabba@google.com
Use KVM extension checks as the source for determining which
capabilities are allowed for protected VMs. KVM extension checks are
the natural place for this, since they are also the interface exposed
to users.

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_pkvm.h | 25 +++++++++++++++++++++++++
 arch/arm64/kvm/arm.c              | 29 ++---------------------------
 arch/arm64/kvm/hyp/nvhe/pkvm.c    | 26 ++++++--------------------
 3 files changed, 33 insertions(+), 47 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index cd56acd9a842..400f7cef1e81 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -20,6 +20,31 @@ int pkvm_init_host_vm(struct kvm *kvm);
 int pkvm_create_hyp_vm(struct kvm *kvm);
 void pkvm_destroy_hyp_vm(struct kvm *kvm);
 
+/*
+ * This functions as an allow-list of protected VM capabilities.
+ * Features not explicitly allowed by this function are denied.
+ */
+static inline bool kvm_pvm_ext_allowed(long ext)
+{
+	switch (ext) {
+	case KVM_CAP_IRQCHIP:
+	case KVM_CAP_ARM_PSCI:
+	case KVM_CAP_ARM_PSCI_0_2:
+	case KVM_CAP_NR_VCPUS:
+	case KVM_CAP_MAX_VCPUS:
+	case KVM_CAP_MAX_VCPU_ID:
+	case KVM_CAP_MSI_DEVID:
+	case KVM_CAP_ARM_VM_IPA_SIZE:
+	case KVM_CAP_ARM_PMU_V3:
+	case KVM_CAP_ARM_SVE:
+	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
+	case KVM_CAP_ARM_PTRAUTH_GENERIC:
+		return true;
+	default:
+		return false;
+	}
+}
+
 extern struct memblock_region kvm_nvhe_sym(hyp_memory)[];
 extern unsigned int kvm_nvhe_sym(hyp_memblock_nr);

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a102c3aebdbc..b295218cdc24 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -80,31 +80,6 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
 	return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
 }
 
-/*
- * This functions as an allow-list of protected VM capabilities.
- * Features not explicitly allowed by this function are denied.
- */
-static bool pkvm_ext_allowed(struct kvm *kvm, long ext)
-{
-	switch (ext) {
-	case KVM_CAP_IRQCHIP:
-	case KVM_CAP_ARM_PSCI:
-	case KVM_CAP_ARM_PSCI_0_2:
-	case KVM_CAP_NR_VCPUS:
-	case KVM_CAP_MAX_VCPUS:
-	case KVM_CAP_MAX_VCPU_ID:
-	case KVM_CAP_MSI_DEVID:
-	case KVM_CAP_ARM_VM_IPA_SIZE:
-	case KVM_CAP_ARM_PMU_V3:
-	case KVM_CAP_ARM_SVE:
-	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
-	case KVM_CAP_ARM_PTRAUTH_GENERIC:
-		return true;
-	default:
-		return false;
-	}
-}
-
 int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 			    struct kvm_enable_cap *cap)
 {
@@ -113,7 +88,7 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 	if (cap->flags)
 		return -EINVAL;
 
-	if (kvm_vm_is_protected(kvm) && !pkvm_ext_allowed(kvm, cap->cap))
+	if (kvm_vm_is_protected(kvm) && !kvm_pvm_ext_allowed(cap->cap))
 		return -EINVAL;
 
 	switch (cap->cap) {
@@ -311,7 +286,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 {
 	int r;
 
-	if (kvm && kvm_vm_is_protected(kvm) && !pkvm_ext_allowed(kvm, ext))
+	if (kvm && kvm_vm_is_protected(kvm) && !kvm_pvm_ext_allowed(ext))
 		return 0;
 
 	switch (ext) {

diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index fb733b36c6c1..59ff6aac514c 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -329,34 +329,20 @@ static void pkvm_init_features_from_host(struct pkvm_hyp_vm *hyp_vm, const struc
 
 	bitmap_zero(allowed_features, KVM_VCPU_MAX_FEATURES);
 
-	/*
-	 * For protected VMs, always allow:
-	 * - CPU starting in poweroff state
-	 * - PSCI v0.2
-	 */
-	set_bit(KVM_ARM_VCPU_POWER_OFF, allowed_features);
 	set_bit(KVM_ARM_VCPU_PSCI_0_2, allowed_features);
 
-	/*
-	 * Check if remaining features are allowed:
-	 * - Performance Monitoring
-	 * - Scalable Vectors
-	 * - Pointer Authentication
-	 */
-	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), PVM_ID_AA64DFR0_ALLOW))
+	if (kvm_pvm_ext_allowed(KVM_CAP_ARM_PMU_V3))
 		set_bit(KVM_ARM_VCPU_PMU_V3, allowed_features);
 
-	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE), PVM_ID_AA64PFR0_ALLOW))
-		set_bit(KVM_ARM_VCPU_SVE, allowed_features);
-
-	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API), PVM_ID_AA64ISAR1_ALLOW) &&
-	    FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA), PVM_ID_AA64ISAR1_ALLOW))
+	if (kvm_pvm_ext_allowed(KVM_CAP_ARM_PTRAUTH_ADDRESS))
 		set_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, allowed_features);
 
-	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI), PVM_ID_AA64ISAR1_ALLOW) &&
-	    FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPA), PVM_ID_AA64ISAR1_ALLOW))
+	if (kvm_pvm_ext_allowed(KVM_CAP_ARM_PTRAUTH_GENERIC))
 		set_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, allowed_features);
 
+	if (kvm_pvm_ext_allowed(KVM_CAP_ARM_SVE))
+		set_bit(KVM_ARM_VCPU_SVE, allowed_features);
+
 	bitmap_and(kvm->arch.vcpu_features, host_kvm->arch.vcpu_features,
 		   allowed_features, KVM_VCPU_MAX_FEATURES);
 }

From patchwork Mon Dec 2 15:47:32 2024
X-Patchwork-Id: 13890976
Date: Mon, 2 Dec 2024 15:47:32 +0000
In-Reply-To: <20241202154742.3611749-1-tabba@google.com>
References: <20241202154742.3611749-1-tabba@google.com>
Message-ID: <20241202154742.3611749-6-tabba@google.com>
Subject: [PATCH v4 05/14] KVM: arm64: Initialize feature id registers for protected VMs
From: Fuad Tabba
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org, will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org, qperret@google.com, kristina.martsenko@arm.com, tabba@google.com

The hypervisor maintains the state of protected VMs. Initialize the
values for feature ID registers for protected VMs, to be used when
setting traps and when advertising features to protected VMs.
Signed-off-by: Fuad Tabba
---
 .../arm64/kvm/hyp/include/nvhe/fixed_config.h |  1 +
 arch/arm64/kvm/hyp/nvhe/pkvm.c                |  4 ++
 arch/arm64/kvm/hyp/nvhe/sys_regs.c            | 42 +++++++++++++++++--
 3 files changed, 44 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h b/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
index d1e59b88ff66..69e26d1a0ebe 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
@@ -201,6 +201,7 @@ u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
 bool kvm_handle_pvm_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code);
 bool kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu, u64 *exit_code);
+void kvm_init_pvm_id_regs(struct kvm_vcpu *vcpu);
 int kvm_check_pvm_sysreg_table(void);
 
 #endif /* __ARM64_KVM_FIXED_CONFIG_H__ */

diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 59ff6aac514c..4ef03294b2b4 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -381,6 +381,7 @@ static void init_pkvm_hyp_vm(struct kvm *host_kvm, struct pkvm_hyp_vm *hyp_vm,
 	hyp_vm->kvm.created_vcpus = nr_vcpus;
 	hyp_vm->kvm.arch.mmu.vtcr = host_mmu.arch.mmu.vtcr;
 	hyp_vm->kvm.arch.pkvm.enabled = READ_ONCE(host_kvm->arch.pkvm.enabled);
+	hyp_vm->kvm.arch.flags = 0;
 	pkvm_init_features_from_host(hyp_vm, host_kvm);
 }
 
@@ -419,6 +420,9 @@ static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu,
 	hyp_vcpu->vcpu.arch.cflags = READ_ONCE(host_vcpu->arch.cflags);
 	hyp_vcpu->vcpu.arch.mp_state.mp_state = KVM_MP_STATE_STOPPED;
 
+	if (pkvm_hyp_vcpu_is_protected(hyp_vcpu))
+		kvm_init_pvm_id_regs(&hyp_vcpu->vcpu);
+
 	ret = pkvm_vcpu_init_traps(hyp_vcpu);
 	if (ret)
 		goto done;

diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index 59fb2f056177..1261da6a2861 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -204,8 +204,7 @@ static u64 get_pvm_id_aa64mmfr2(const struct kvm_vcpu *vcpu)
 	return id_aa64mmfr2_el1_sys_val & PVM_ID_AA64MMFR2_ALLOW;
 }
 
-/* Read a sanitized cpufeature ID register by its encoding */
-u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
+static u64 pvm_calc_id_reg(const struct kvm_vcpu *vcpu, u32 id)
 {
 	switch (id) {
 	case SYS_ID_AA64PFR0_EL1:
@@ -240,10 +239,25 @@ u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
 	}
 }
 
+/* Read a sanitized cpufeature ID register by its encoding */
+u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
+{
+	return pvm_calc_id_reg(vcpu, id);
+}
+
 static u64 read_id_reg(const struct kvm_vcpu *vcpu,
 		       struct sys_reg_desc const *r)
 {
-	return pvm_read_id_reg(vcpu, reg_to_encoding(r));
+	struct kvm *kvm = vcpu->kvm;
+	u32 reg = reg_to_encoding(r);
+
+	if (WARN_ON_ONCE(!test_bit(KVM_ARCH_FLAG_ID_REGS_INITIALIZED, &kvm->arch.flags)))
+		return 0;
+
+	if (reg >= sys_reg(3, 0, 0, 1, 0) && reg <= sys_reg(3, 0, 0, 7, 7))
+		return kvm->arch.id_regs[IDREG_IDX(reg)];
+
+	return 0;
 }
 
 /* Handler to RAZ/WI sysregs */
@@ -448,6 +462,28 @@ static const struct sys_reg_desc pvm_sys_reg_descs[] = {
 	/* Performance Monitoring Registers are restricted. */
 };
 
+/*
+ * Initializes feature registers for protected vms.
+ */
+void kvm_init_pvm_id_regs(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_arch *ka = &kvm->arch;
+	u32 r;
+
+	if (test_bit(KVM_ARCH_FLAG_ID_REGS_INITIALIZED, &kvm->arch.flags))
+		return;
+
+	/*
+	 * Initialize only AArch64 id registers since AArch32 isn't supported
+	 * for protected VMs.
+	 */
+	for (r = sys_reg(3, 0, 0, 4, 0); r <= sys_reg(3, 0, 0, 7, 7); r += sys_reg(0, 0, 0, 0, 1))
+		ka->id_regs[IDREG_IDX(r)] = pvm_calc_id_reg(vcpu, r);
+
+	set_bit(KVM_ARCH_FLAG_ID_REGS_INITIALIZED, &kvm->arch.flags);
+}
+
 /*
  * Checks that the sysreg table is unique and in-order.
From patchwork Mon Dec 2 15:47:33 2024
X-Patchwork-Id: 13890977
Date: Mon, 2 Dec 2024 15:47:33 +0000
In-Reply-To: <20241202154742.3611749-1-tabba@google.com>
References: <20241202154742.3611749-1-tabba@google.com>
Message-ID: <20241202154742.3611749-7-tabba@google.com>
Subject: [PATCH v4 06/14] KVM: arm64: Set protected VM traps based on its view of feature registers
From: Fuad Tabba
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org, will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org, qperret@google.com, kristina.martsenko@arm.com, tabba@google.com

Now that the VM's feature id registers are initialized with the values
of the supported features, use those values to determine which traps to
set using kvm_has_feat().
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/nvhe/pkvm.c     | 85 +++++++++++-------------------
 arch/arm64/kvm/hyp/nvhe/sys_regs.c |  7 ---
 2 files changed, 30 insertions(+), 62 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 4ef03294b2b4..3b4ea97148b9 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -52,9 +52,7 @@ static void pkvm_vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 
 static void pvm_init_traps_hcr(struct kvm_vcpu *vcpu)
 {
-	const u64 id_aa64pfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR0_EL1);
-	const u64 id_aa64pfr1 = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR1_EL1);
-	const u64 id_aa64mmfr1 = pvm_read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1);
+	struct kvm *kvm = vcpu->kvm;
 	u64 val = vcpu->arch.hcr_el2;
 
 	/* No support for AArch32. */
@@ -70,25 +68,20 @@ static void pvm_init_traps_hcr(struct kvm_vcpu *vcpu)
 	 */
 	val |= HCR_TACR | HCR_TIDCP | HCR_TID3 | HCR_TID1;
 
-	/* Trap RAS unless all current versions are supported */
-	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_RAS), id_aa64pfr0) <
-	    ID_AA64PFR0_EL1_RAS_V1P1) {
+	if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP)) {
 		val |= HCR_TERR | HCR_TEA;
 		val &= ~(HCR_FIEN);
 	}
 
-	/* Trap AMU */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU), id_aa64pfr0))
+	if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, IMP))
 		val &= ~(HCR_AMVOFFEN);
 
-	/* Memory Tagging: Trap and Treat as Untagged if not supported. */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE), id_aa64pfr1)) {
+	if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, MTE, IMP)) {
 		val |= HCR_TID5;
 		val &= ~(HCR_DCT | HCR_ATA);
 	}
 
-	/* Trap LOR */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_LO), id_aa64mmfr1))
+	if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, LO, IMP))
 		val |= HCR_TLOR;
 
 	vcpu->arch.hcr_el2 = val;
@@ -96,9 +89,7 @@ static void pvm_init_traps_hcr(struct kvm_vcpu *vcpu)
 
 static void pvm_init_traps_cptr(struct kvm_vcpu *vcpu)
 {
-	const u64 id_aa64pfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR0_EL1);
-	const u64 id_aa64pfr1 = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR1_EL1);
-	const u64 id_aa64dfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64DFR0_EL1);
+	struct kvm *kvm = vcpu->kvm;
 	u64 val = vcpu->arch.cptr_el2;
 
 	if (!has_hvhe()) {
@@ -106,12 +97,11 @@ static void pvm_init_traps_cptr(struct kvm_vcpu *vcpu)
 		val &= ~(CPTR_NVHE_EL2_RES0);
 	}
 
-	/* Trap AMU */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU), id_aa64pfr0))
+	if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, IMP))
 		val |= CPTR_EL2_TAM;
 
-	/* Trap SVE */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE), id_aa64pfr0)) {
+	/* SVE can be disabled by userspace even if supported. */
+	if (!vcpu_has_sve(vcpu)) {
 		if (has_hvhe())
 			val &= ~(CPACR_ELx_ZEN);
 		else
@@ -119,14 +109,13 @@ static void pvm_init_traps_cptr(struct kvm_vcpu *vcpu)
 	}
 
 	/* No SME support in KVM. */
-	BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME), id_aa64pfr1));
+	BUG_ON(kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP));
 	if (has_hvhe())
 		val &= ~(CPACR_ELx_SMEN);
 	else
 		val |= CPTR_EL2_TSM;
 
-	/* Trap Trace */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_TraceVer), id_aa64dfr0)) {
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceVer, IMP)) {
 		if (has_hvhe())
 			val |= CPACR_EL1_TTA;
 		else
@@ -138,40 +127,33 @@ static void pvm_init_traps_cptr(struct kvm_vcpu *vcpu)
 
 static void pvm_init_traps_mdcr(struct kvm_vcpu *vcpu)
 {
-	const u64 id_aa64dfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64DFR0_EL1);
-	const u64 id_aa64mmfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64MMFR0_EL1);
+	struct kvm *kvm = vcpu->kvm;
 	u64 val = vcpu->arch.mdcr_el2;
 
-	/* Trap/constrain PMU */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), id_aa64dfr0)) {
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMUVer, IMP)) {
 		val |= MDCR_EL2_TPM | MDCR_EL2_TPMCR;
 		val &= ~(MDCR_EL2_HPME | MDCR_EL2_MTPME | MDCR_EL2_HPMN_MASK);
 	}
 
-	/* Trap Debug */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), id_aa64dfr0))
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, DebugVer, IMP))
 		val |= MDCR_EL2_TDRA | MDCR_EL2_TDA;
 
-	/* Trap OS Double Lock */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DoubleLock), id_aa64dfr0))
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, DoubleLock, IMP))
 		val |= MDCR_EL2_TDOSA;
 
-	/* Trap SPE */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer), id_aa64dfr0)) {
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMSVer, IMP)) {
 		val |= MDCR_EL2_TPMS;
 		val &= ~(MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT);
 	}
 
-	/* Trap Trace Filter */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_TraceFilt), id_aa64dfr0))
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceFilt, IMP))
 		val |= MDCR_EL2_TTRF;
 
-	/* Trap External Trace */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_ExtTrcBuff), id_aa64dfr0))
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, ExtTrcBuff, IMP))
		val |= MDCR_EL2_E2TB_MASK <<
MDCR_EL2_E2TB_SHIFT; /* Trap Debug Communications Channel registers */ - if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_FGT), id_aa64mmfr0)) + if (!kvm_has_feat(kvm, ID_AA64MMFR0_EL1, FGT, IMP)) val |= MDCR_EL2_TDCC; vcpu->arch.mdcr_el2 = val; @@ -183,31 +165,24 @@ static void pvm_init_traps_mdcr(struct kvm_vcpu *vcpu) */ static int pkvm_check_pvm_cpu_features(struct kvm_vcpu *vcpu) { - /* - * PAuth is allowed if supported by the system and the vcpu. - * Properly checking for PAuth requires checking various fields in - * ID_AA64ISAR1_EL1 and ID_AA64ISAR2_EL1. The way that fixed config - * is controlled now in pKVM does not easily allow that. This will - * change later to follow the changes upstream wrt fixed configuration - * and nested virt. - */ - BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI), - PVM_ID_AA64ISAR1_ALLOW)); + struct kvm *kvm = vcpu->kvm; /* Protected KVM does not support AArch32 guests. */ - BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), - PVM_ID_AA64PFR0_ALLOW) != ID_AA64PFR0_EL1_EL0_IMP); - BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL1), - PVM_ID_AA64PFR0_ALLOW) != ID_AA64PFR0_EL1_EL1_IMP); + if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL0, AARCH32) || + kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL1, AARCH32)) + return -EINVAL; /* * Linux guests assume support for floating-point and Advanced SIMD. Do * not change the trapping behavior for these from the KVM default. */ - BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_FP), - PVM_ID_AA64PFR0_ALLOW)); - BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AdvSIMD), - PVM_ID_AA64PFR0_ALLOW)); + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, FP, IMP) || + !kvm_has_feat(kvm, ID_AA64PFR0_EL1, AdvSIMD, IMP)) + return -EINVAL; + + /* No SME support in KVM right now. Check to catch if it changes. 
*/ + if (kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP)) + return -EINVAL; return 0; } diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c index 1261da6a2861..39b678d2c120 100644 --- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c +++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c @@ -285,13 +285,6 @@ static bool pvm_access_id_aarch32(struct kvm_vcpu *vcpu, return false; } - /* - * No support for AArch32 guests, therefore, pKVM has no sanitized copy - * of AArch32 feature id registers. - */ - BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL1), - PVM_ID_AA64PFR0_ALLOW) > ID_AA64PFR0_EL1_EL1_IMP); - return pvm_access_raz_wi(vcpu, p, r); }
Date: Mon, 2 Dec 2024 15:47:34 +0000
In-Reply-To: <20241202154742.3611749-1-tabba@google.com>
References: <20241202154742.3611749-1-tabba@google.com>
Message-ID: <20241202154742.3611749-8-tabba@google.com>
Subject: [PATCH v4 07/14] KVM: arm64: Rework specifying restricted features for protected VMs
From: Fuad Tabba
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org, will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org, qperret@google.com, kristina.martsenko@arm.com, tabba@google.com

The existing code didn't properly distinguish between signed and
unsigned features, and was difficult to read and to maintain. Rework it using the same method used in other parts of KVM when handling vcpu features. Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_host.h | 1 + .../arm64/kvm/hyp/include/nvhe/fixed_config.h | 1 - arch/arm64/kvm/hyp/nvhe/sys_regs.c | 357 +++++++++--------- 3 files changed, 189 insertions(+), 170 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index f333b189fb43..230b0638f0c2 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -1422,6 +1422,7 @@ static inline bool __vcpu_has_feature(const struct kvm_arch *ka, int feature) return test_bit(feature, ka->vcpu_features); } +#define kvm_vcpu_has_feature(k, f) __vcpu_has_feature(&(k)->arch, (f)) #define vcpu_has_feature(v, f) __vcpu_has_feature(&(v)->kvm->arch, (f)) #define kvm_vcpu_initialized(v) vcpu_get_flag(vcpu, VCPU_INITIALIZED) diff --git a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h b/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h index 69e26d1a0ebe..37a6d2434e47 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h +++ b/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h @@ -198,7 +198,6 @@ FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3), ID_AA64ISAR2_EL1_APA3_PAuth) \ ) -u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id); bool kvm_handle_pvm_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code); bool kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu, u64 *exit_code); void kvm_init_pvm_id_regs(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c index 39b678d2c120..b6140590b569 100644 --- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c +++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c @@ -28,221 +28,240 @@ u64 id_aa64mmfr1_el1_sys_val; u64 id_aa64mmfr2_el1_sys_val; u64 id_aa64smfr0_el1_sys_val; -/* - * Inject an unknown/undefined exception to an AArch64 guest while most of its - * sysregs are live. 
- */ -static void inject_undef64(struct kvm_vcpu *vcpu) -{ - u64 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT); - - *vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR); - *vcpu_cpsr(vcpu) = read_sysreg_el2(SYS_SPSR); - - kvm_pend_exception(vcpu, EXCEPT_AA64_EL1_SYNC); - - __kvm_adjust_pc(vcpu); - - write_sysreg_el1(esr, SYS_ESR); - write_sysreg_el1(read_sysreg_el2(SYS_ELR), SYS_ELR); - write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR); - write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR); -} - -/* - * Returns the restricted features values of the feature register based on the - * limitations in restrict_fields. - * A feature id field value of 0b0000 does not impose any restrictions. - * Note: Use only for unsigned feature field values. - */ -static u64 get_restricted_features_unsigned(u64 sys_reg_val, - u64 restrict_fields) -{ - u64 value = 0UL; - u64 mask = GENMASK_ULL(ARM64_FEATURE_FIELD_BITS - 1, 0); +struct pvm_ftr_bits { + bool sign; + u8 shift; + u8 width; + u8 max_val; + bool (*vm_supported)(const struct kvm *kvm); +}; - /* - * According to the Arm Architecture Reference Manual, feature fields - * use increasing values to indicate increases in functionality. - * Iterate over the restricted feature fields and calculate the minimum - * unsigned value between the one supported by the system, and what the - * value is being restricted to. - */ - while (sys_reg_val && restrict_fields) { - value |= min(sys_reg_val & mask, restrict_fields & mask); - sys_reg_val &= ~mask; - restrict_fields &= ~mask; - mask <<= ARM64_FEATURE_FIELD_BITS; +#define __MAX_FEAT_FUNC(id, fld, max, func, sgn) \ + { \ + .sign = sgn, \ + .shift = id##_##fld##_SHIFT, \ + .width = id##_##fld##_WIDTH, \ + .max_val = id##_##fld##_##max, \ + .vm_supported = func, \ } - return value; -} - -/* - * Functions that return the value of feature id registers for protected VMs - * based on allowed features, system features, and KVM support. 
- */ - -static u64 get_pvm_id_aa64pfr0(const struct kvm_vcpu *vcpu) -{ - u64 set_mask = 0; - u64 allow_mask = PVM_ID_AA64PFR0_ALLOW; - - set_mask |= get_restricted_features_unsigned(id_aa64pfr0_el1_sys_val, - PVM_ID_AA64PFR0_ALLOW); +#define MAX_FEAT_FUNC(id, fld, max, func) \ + __MAX_FEAT_FUNC(id, fld, max, func, id##_##fld##_SIGNED) - return (id_aa64pfr0_el1_sys_val & allow_mask) | set_mask; -} - -static u64 get_pvm_id_aa64pfr1(const struct kvm_vcpu *vcpu) -{ - const struct kvm *kvm = (const struct kvm *)kern_hyp_va(vcpu->kvm); - u64 allow_mask = PVM_ID_AA64PFR1_ALLOW; +#define MAX_FEAT(id, fld, max) \ + MAX_FEAT_FUNC(id, fld, max, NULL) - if (!kvm_has_mte(kvm)) - allow_mask &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE); +#define MAX_FEAT_ENUM(id, fld, max) \ + __MAX_FEAT_FUNC(id, fld, max, NULL, false) - return id_aa64pfr1_el1_sys_val & allow_mask; -} +#define FEAT_END { .width = 0, } -static u64 get_pvm_id_aa64zfr0(const struct kvm_vcpu *vcpu) +static bool vm_has_ptrauth(const struct kvm *kvm) { - /* - * No support for Scalable Vectors, therefore, hyp has no sanitized - * copy of the feature id register. - */ - BUILD_BUG_ON(PVM_ID_AA64ZFR0_ALLOW != 0ULL); - return 0; -} - -static u64 get_pvm_id_aa64dfr0(const struct kvm_vcpu *vcpu) -{ - /* - * No support for debug, including breakpoints, and watchpoints, - * therefore, pKVM has no sanitized copy of the feature id register. - */ - BUILD_BUG_ON(PVM_ID_AA64DFR0_ALLOW != 0ULL); - return 0; -} - -static u64 get_pvm_id_aa64dfr1(const struct kvm_vcpu *vcpu) -{ - /* - * No support for debug, therefore, hyp has no sanitized copy of the - * feature id register. - */ - BUILD_BUG_ON(PVM_ID_AA64DFR1_ALLOW != 0ULL); - return 0; -} + if (!IS_ENABLED(CONFIG_ARM64_PTR_AUTH)) + return false; -static u64 get_pvm_id_aa64afr0(const struct kvm_vcpu *vcpu) -{ - /* - * No support for implementation defined features, therefore, hyp has no - * sanitized copy of the feature id register. 
- */ - BUILD_BUG_ON(PVM_ID_AA64AFR0_ALLOW != 0ULL); - return 0; + return (cpus_have_final_cap(ARM64_HAS_ADDRESS_AUTH) || + cpus_have_final_cap(ARM64_HAS_GENERIC_AUTH)) && + kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_PTRAUTH_GENERIC); } -static u64 get_pvm_id_aa64afr1(const struct kvm_vcpu *vcpu) +static bool vm_has_sve(const struct kvm *kvm) { - /* - * No support for implementation defined features, therefore, hyp has no - * sanitized copy of the feature id register. - */ - BUILD_BUG_ON(PVM_ID_AA64AFR1_ALLOW != 0ULL); - return 0; + return system_supports_sve() && kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_SVE); } -static u64 get_pvm_id_aa64isar0(const struct kvm_vcpu *vcpu) -{ - return id_aa64isar0_el1_sys_val & PVM_ID_AA64ISAR0_ALLOW; -} +/* + * Definitions for features to be allowed or restricted for protected guests. + * + * Each field in the masks represents the highest supported value for the + * feature. If a feature field is not present, it is not supported. Moreover, + * these are used to generate the guest's view of the feature registers. 
+ * + * The approach for protected VMs is to at least support features that are: + * - Needed by common Linux distributions (e.g., floating point) + * - Trivial to support, e.g., supporting the feature does not introduce or + * require tracking of additional state in KVM + * - Cannot be trapped or prevent the guest from using anyway + */ -static u64 get_pvm_id_aa64isar1(const struct kvm_vcpu *vcpu) -{ - u64 allow_mask = PVM_ID_AA64ISAR1_ALLOW; +static const struct pvm_ftr_bits pvmid_aa64pfr0[] = { + MAX_FEAT(ID_AA64PFR0_EL1, EL0, IMP), + MAX_FEAT(ID_AA64PFR0_EL1, EL1, IMP), + MAX_FEAT(ID_AA64PFR0_EL1, EL2, IMP), + MAX_FEAT(ID_AA64PFR0_EL1, EL3, IMP), + MAX_FEAT(ID_AA64PFR0_EL1, FP, FP16), + MAX_FEAT(ID_AA64PFR0_EL1, AdvSIMD, FP16), + MAX_FEAT(ID_AA64PFR0_EL1, GIC, IMP), + MAX_FEAT_FUNC(ID_AA64PFR0_EL1, SVE, IMP, vm_has_sve), + MAX_FEAT(ID_AA64PFR0_EL1, RAS, IMP), + MAX_FEAT(ID_AA64PFR0_EL1, DIT, IMP), + MAX_FEAT(ID_AA64PFR0_EL1, CSV2, IMP), + MAX_FEAT(ID_AA64PFR0_EL1, CSV3, IMP), + FEAT_END +}; - if (!vcpu_has_ptrauth(vcpu)) - allow_mask &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA) | - ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API) | - ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPA) | - ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI)); +static const struct pvm_ftr_bits pvmid_aa64pfr1[] = { + MAX_FEAT(ID_AA64PFR1_EL1, BT, IMP), + MAX_FEAT(ID_AA64PFR1_EL1, SSBS, SSBS2), + MAX_FEAT_ENUM(ID_AA64PFR1_EL1, MTE_frac, NI), + FEAT_END +}; - return id_aa64isar1_el1_sys_val & allow_mask; -} +static const struct pvm_ftr_bits pvmid_aa64mmfr0[] = { + MAX_FEAT_ENUM(ID_AA64MMFR0_EL1, PARANGE, 40), + MAX_FEAT_ENUM(ID_AA64MMFR0_EL1, ASIDBITS, 16), + MAX_FEAT(ID_AA64MMFR0_EL1, BIGEND, IMP), + MAX_FEAT(ID_AA64MMFR0_EL1, SNSMEM, IMP), + MAX_FEAT(ID_AA64MMFR0_EL1, BIGENDEL0, IMP), + MAX_FEAT(ID_AA64MMFR0_EL1, EXS, IMP), + FEAT_END +}; -static u64 get_pvm_id_aa64isar2(const struct kvm_vcpu *vcpu) -{ - u64 allow_mask = PVM_ID_AA64ISAR2_ALLOW; +static const struct pvm_ftr_bits pvmid_aa64mmfr1[] = { + 
MAX_FEAT(ID_AA64MMFR1_EL1, HAFDBS, DBM), + MAX_FEAT_ENUM(ID_AA64MMFR1_EL1, VMIDBits, 16), + MAX_FEAT(ID_AA64MMFR1_EL1, HPDS, HPDS2), + MAX_FEAT(ID_AA64MMFR1_EL1, PAN, PAN3), + MAX_FEAT(ID_AA64MMFR1_EL1, SpecSEI, IMP), + MAX_FEAT(ID_AA64MMFR1_EL1, ETS, IMP), + MAX_FEAT(ID_AA64MMFR1_EL1, CMOW, IMP), + FEAT_END +}; - if (!vcpu_has_ptrauth(vcpu)) - allow_mask &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) | - ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3)); +static const struct pvm_ftr_bits pvmid_aa64mmfr2[] = { + MAX_FEAT(ID_AA64MMFR2_EL1, CnP, IMP), + MAX_FEAT(ID_AA64MMFR2_EL1, UAO, IMP), + MAX_FEAT(ID_AA64MMFR2_EL1, IESB, IMP), + MAX_FEAT(ID_AA64MMFR2_EL1, AT, IMP), + MAX_FEAT_ENUM(ID_AA64MMFR2_EL1, IDS, 0x18), + MAX_FEAT(ID_AA64MMFR2_EL1, TTL, IMP), + MAX_FEAT(ID_AA64MMFR2_EL1, BBM, 2), + MAX_FEAT(ID_AA64MMFR2_EL1, E0PD, IMP), + FEAT_END +}; - return id_aa64isar2_el1_sys_val & allow_mask; -} +static const struct pvm_ftr_bits pvmid_aa64isar1[] = { + MAX_FEAT(ID_AA64ISAR1_EL1, DPB, DPB2), + MAX_FEAT_FUNC(ID_AA64ISAR1_EL1, APA, PAuth, vm_has_ptrauth), + MAX_FEAT_FUNC(ID_AA64ISAR1_EL1, API, PAuth, vm_has_ptrauth), + MAX_FEAT(ID_AA64ISAR1_EL1, JSCVT, IMP), + MAX_FEAT(ID_AA64ISAR1_EL1, FCMA, IMP), + MAX_FEAT(ID_AA64ISAR1_EL1, LRCPC, LRCPC3), + MAX_FEAT(ID_AA64ISAR1_EL1, GPA, IMP), + MAX_FEAT(ID_AA64ISAR1_EL1, GPI, IMP), + MAX_FEAT(ID_AA64ISAR1_EL1, FRINTTS, IMP), + MAX_FEAT(ID_AA64ISAR1_EL1, SB, IMP), + MAX_FEAT(ID_AA64ISAR1_EL1, SPECRES, COSP_RCTX), + MAX_FEAT(ID_AA64ISAR1_EL1, BF16, EBF16), + MAX_FEAT(ID_AA64ISAR1_EL1, DGH, IMP), + MAX_FEAT(ID_AA64ISAR1_EL1, I8MM, IMP), + FEAT_END +}; -static u64 get_pvm_id_aa64mmfr0(const struct kvm_vcpu *vcpu) -{ - u64 set_mask; +static const struct pvm_ftr_bits pvmid_aa64isar2[] = { + MAX_FEAT_FUNC(ID_AA64ISAR2_EL1, GPA3, IMP, vm_has_ptrauth), + MAX_FEAT_FUNC(ID_AA64ISAR2_EL1, APA3, PAuth, vm_has_ptrauth), + MAX_FEAT(ID_AA64ISAR2_EL1, ATS1A, IMP), + FEAT_END +}; - set_mask = get_restricted_features_unsigned(id_aa64mmfr0_el1_sys_val, - 
PVM_ID_AA64MMFR0_ALLOW); +/* + * None of the features in ID_AA64DFR0_EL1 nor ID_AA64MMFR4_EL1 are supported. + * However, both have Not-Implemented values that are non-zero. Define them + * so they can be used when getting the value of these registers. + */ +#define ID_AA64DFR0_EL1_NONZERO_NI \ +( \ + SYS_FIELD_PREP_ENUM(ID_AA64DFR0_EL1, DoubleLock, NI) | \ + SYS_FIELD_PREP_ENUM(ID_AA64DFR0_EL1, MTPMU, NI) \ +) - return (id_aa64mmfr0_el1_sys_val & PVM_ID_AA64MMFR0_ALLOW) | set_mask; -} +#define ID_AA64MMFR4_EL1_NONZERO_NI \ + SYS_FIELD_PREP_ENUM(ID_AA64MMFR4_EL1, E2H0, NI) -static u64 get_pvm_id_aa64mmfr1(const struct kvm_vcpu *vcpu) +/* + * Returns the value of the feature registers based on the system register + * value, the vcpu support for the relevant features, and the additional + * restrictions for protected VMs. + */ +static u64 get_restricted_features(const struct kvm_vcpu *vcpu, + u64 sys_reg_val, + const struct pvm_ftr_bits restrictions[]) { - return id_aa64mmfr1_el1_sys_val & PVM_ID_AA64MMFR1_ALLOW; -} + u64 val = 0UL; + int i; + + for (i = 0; restrictions[i].width != 0; i++) { + bool (*vm_supported)(const struct kvm *) = restrictions[i].vm_supported; + bool sign = restrictions[i].sign; + int shift = restrictions[i].shift; + int width = restrictions[i].width; + u64 min_signed = (1UL << width) - 1UL; + u64 sign_bit = 1UL << (width - 1); + u64 mask = GENMASK_ULL(width + shift - 1, shift); + u64 sys_val = (sys_reg_val & mask) >> shift; + u64 pvm_max = restrictions[i].max_val; + + if (vm_supported && !vm_supported(vcpu->kvm)) + val |= (sign ?
min_signed : 0) << shift; + else if (sign && (sys_val >= sign_bit || pvm_max >= sign_bit)) + val |= max(sys_val, pvm_max) << shift; + else + val |= min(sys_val, pvm_max) << shift; + } -static u64 get_pvm_id_aa64mmfr2(const struct kvm_vcpu *vcpu) -{ - return id_aa64mmfr2_el1_sys_val & PVM_ID_AA64MMFR2_ALLOW; + return val; } static u64 pvm_calc_id_reg(const struct kvm_vcpu *vcpu, u32 id) { switch (id) { case SYS_ID_AA64PFR0_EL1: - return get_pvm_id_aa64pfr0(vcpu); + return get_restricted_features(vcpu, id_aa64pfr0_el1_sys_val, pvmid_aa64pfr0); case SYS_ID_AA64PFR1_EL1: - return get_pvm_id_aa64pfr1(vcpu); - case SYS_ID_AA64ZFR0_EL1: - return get_pvm_id_aa64zfr0(vcpu); - case SYS_ID_AA64DFR0_EL1: - return get_pvm_id_aa64dfr0(vcpu); - case SYS_ID_AA64DFR1_EL1: - return get_pvm_id_aa64dfr1(vcpu); - case SYS_ID_AA64AFR0_EL1: - return get_pvm_id_aa64afr0(vcpu); - case SYS_ID_AA64AFR1_EL1: - return get_pvm_id_aa64afr1(vcpu); + return get_restricted_features(vcpu, id_aa64pfr1_el1_sys_val, pvmid_aa64pfr1); case SYS_ID_AA64ISAR0_EL1: - return get_pvm_id_aa64isar0(vcpu); + return id_aa64isar0_el1_sys_val; case SYS_ID_AA64ISAR1_EL1: - return get_pvm_id_aa64isar1(vcpu); + return get_restricted_features(vcpu, id_aa64isar1_el1_sys_val, pvmid_aa64isar1); case SYS_ID_AA64ISAR2_EL1: - return get_pvm_id_aa64isar2(vcpu); + return get_restricted_features(vcpu, id_aa64isar2_el1_sys_val, pvmid_aa64isar2); case SYS_ID_AA64MMFR0_EL1: - return get_pvm_id_aa64mmfr0(vcpu); + return get_restricted_features(vcpu, id_aa64mmfr0_el1_sys_val, pvmid_aa64mmfr0); case SYS_ID_AA64MMFR1_EL1: - return get_pvm_id_aa64mmfr1(vcpu); + return get_restricted_features(vcpu, id_aa64mmfr1_el1_sys_val, pvmid_aa64mmfr1); case SYS_ID_AA64MMFR2_EL1: - return get_pvm_id_aa64mmfr2(vcpu); + return get_restricted_features(vcpu, id_aa64mmfr2_el1_sys_val, pvmid_aa64mmfr2); + case SYS_ID_AA64DFR0_EL1: + return ID_AA64DFR0_EL1_NONZERO_NI; + case SYS_ID_AA64MMFR4_EL1: + return ID_AA64MMFR4_EL1_NONZERO_NI; default: /* Unhandled 
ID register, RAZ */ return 0; } } -/* Read a sanitized cpufeature ID register by its encoding */ -u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id) +/* + * Inject an unknown/undefined exception to an AArch64 guest while most of its + * sysregs are live. + */ +static void inject_undef64(struct kvm_vcpu *vcpu) { - return pvm_calc_id_reg(vcpu, id); + u64 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT); + + *vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR); + *vcpu_cpsr(vcpu) = read_sysreg_el2(SYS_SPSR); + + kvm_pend_exception(vcpu, EXCEPT_AA64_EL1_SYNC); + + __kvm_adjust_pc(vcpu); + + write_sysreg_el1(esr, SYS_ESR); + write_sysreg_el1(read_sysreg_el2(SYS_ELR), SYS_ELR); + write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR); + write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR); } static u64 read_id_reg(const struct kvm_vcpu *vcpu,
Date: Mon, 2 Dec 2024 15:47:35 +0000
In-Reply-To: <20241202154742.3611749-1-tabba@google.com>
References: <20241202154742.3611749-1-tabba@google.com>
Message-ID: <20241202154742.3611749-9-tabba@google.com>
Subject: [PATCH v4 08/14] KVM: arm64: Remove fixed_config.h header
From: Fuad Tabba
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org, will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org, qperret@google.com, kristina.martsenko@arm.com, tabba@google.com
Errors-To:
The few remaining items needed in fixed_config.h are better suited for
pkvm.h. Move them there and delete it.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 .../arm64/kvm/hyp/include/nvhe/fixed_config.h | 206 ------------------
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h        |   5 +
 arch/arm64/kvm/hyp/nvhe/pkvm.c                |   1 -
 arch/arm64/kvm/hyp/nvhe/setup.c               |   1 -
 arch/arm64/kvm/hyp/nvhe/switch.c              |   1 -
 arch/arm64/kvm/hyp/nvhe/sys_regs.c            |   2 +-
 6 files changed, 6 insertions(+), 210 deletions(-)
 delete mode 100644 arch/arm64/kvm/hyp/include/nvhe/fixed_config.h

diff --git a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h b/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
deleted file mode 100644
index 37a6d2434e47..000000000000
--- a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
+++ /dev/null
@@ -1,206 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (C) 2021 Google LLC
- * Author: Fuad Tabba <tabba@google.com>
- */
-
-#ifndef __ARM64_KVM_FIXED_CONFIG_H__
-#define __ARM64_KVM_FIXED_CONFIG_H__
-
-#include
-
-/*
- * This file contains definitions for features to be allowed or restricted for
- * guest virtual machines, depending on the mode KVM is running in and on the
- * type of guest that is running.
- *
- * Each field in the masks represents the highest supported *unsigned* value for
- * the feature, if supported by the system.
- *
- * If a feature field is not present in either, than it is not supported.
- *
- * The approach taken for protected VMs is to allow features that are:
- * - Needed by common Linux distributions (e.g., floating point)
- * - Trivial to support, e.g., supporting the feature does not introduce or
- *   require tracking of additional state in KVM
- * - Cannot be trapped or prevent the guest from using anyway
- */

-/*
- * Allow for protected VMs:
- * - Floating-point and Advanced SIMD
- * - Data Independent Timing
- * - Spectre/Meltdown Mitigation
- *
- * Restrict to the following *unsigned* features for protected VMs:
- * - AArch64 guests only (no support for AArch32 guests):
- *	AArch32 adds complexity in trap handling, emulation, condition codes,
- *	etc...
- * - RAS (v1)
- *	Supported by KVM
- */
-#define PVM_ID_AA64PFR0_ALLOW (\
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_FP) | \
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AdvSIMD) | \
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_DIT) | \
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | \
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3) | \
-	SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL0, IMP) | \
-	SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL1, IMP) | \
-	SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL2, IMP) | \
-	SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL3, IMP) | \
-	SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, RAS, IMP) \
-	)
-
-/*
- * Allow for protected VMs:
- * - Branch Target Identification
- * - Speculative Store Bypassing
- */
-#define PVM_ID_AA64PFR1_ALLOW (\
-	ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_BT) | \
-	ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SSBS) \
-	)
-
-#define PVM_ID_AA64PFR2_ALLOW 0ULL
-
-/*
- * Allow for protected VMs:
- * - Mixed-endian
- * - Distinction between Secure and Non-secure Memory
- * - Mixed-endian at EL0 only
- * - Non-context synchronizing exception entry and exit
- *
- * Restrict to the following *unsigned* features for protected VMs:
- * - 40-bit IPA
- * - 16-bit ASID
- */
-#define PVM_ID_AA64MMFR0_ALLOW (\
-	ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_BIGEND) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_SNSMEM) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_BIGENDEL0) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_EXS) | \
-	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_PARANGE), ID_AA64MMFR0_EL1_PARANGE_40) | \
-	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_ASIDBITS), ID_AA64MMFR0_EL1_ASIDBITS_16) \
-	)
-
-/*
- * Allow for protected VMs:
- * - Hardware translation table updates to Access flag and Dirty state
- * - Number of VMID bits from CPU
- * - Hierarchical Permission Disables
- * - Privileged Access Never
- * - SError interrupt exceptions from speculative reads
- * - Enhanced Translation Synchronization
- * - Control for cache maintenance permission
- */
-#define PVM_ID_AA64MMFR1_ALLOW (\
-	ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_HAFDBS) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_VMIDBits) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_HPDS) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_PAN) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_SpecSEI) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_ETS) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_CMOW) \
-	)
-
-/*
- * Allow for protected VMs:
- * - Common not Private translations
- * - User Access Override
- * - IESB bit in the SCTLR_ELx registers
- * - Unaligned single-copy atomicity and atomic functions
- * - ESR_ELx.EC value on an exception by read access to feature ID space
- * - TTL field in address operations.
- * - Break-before-make sequences when changing translation block size
- * - E0PDx mechanism
- */
-#define PVM_ID_AA64MMFR2_ALLOW (\
-	ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_CnP) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_UAO) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_IESB) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_AT) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_IDS) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_TTL) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_BBM) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_E0PD) \
-	)
-
-#define PVM_ID_AA64MMFR3_ALLOW (0ULL)
-
-/*
- * No support for Scalable Vectors for protected VMs:
- *	Requires additional support from KVM, e.g., context-switching and
- *	trapping at EL2
- */
-#define PVM_ID_AA64ZFR0_ALLOW (0ULL)
-
-/*
- * No support for debug, including breakpoints, and watchpoints for protected
- * VMs:
- *	The Arm architecture mandates support for at least the Armv8 debug
- *	architecture, which would include at least 2 hardware breakpoints and
- *	watchpoints. Providing that support to protected guests adds
- *	considerable state and complexity. Therefore, the reserved value of 0 is
- *	used for debug-related fields.
- */
-#define PVM_ID_AA64DFR0_ALLOW (0ULL)
-#define PVM_ID_AA64DFR1_ALLOW (0ULL)
-
-/*
- * No support for implementation defined features.
- */
-#define PVM_ID_AA64AFR0_ALLOW (0ULL)
-#define PVM_ID_AA64AFR1_ALLOW (0ULL)
-
-/*
- * No restrictions on instructions implemented in AArch64.
- */
-#define PVM_ID_AA64ISAR0_ALLOW (\
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_AES) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_SHA1) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_SHA2) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_CRC32) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_ATOMIC) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_RDM) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_SHA3) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_SM3) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_SM4) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_DP) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_FHM) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_TS) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_TLB) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_RNDR) \
-	)
-
-/* Restrict pointer authentication to the basic version. */
-#define PVM_ID_AA64ISAR1_ALLOW (\
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_DPB) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_JSCVT) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_FCMA) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_LRCPC) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPA) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_FRINTTS) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_SB) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_SPECRES) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_BF16) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_DGH) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_I8MM) | \
-	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA), ID_AA64ISAR1_EL1_APA_PAuth) | \
-	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API), ID_AA64ISAR1_EL1_API_PAuth) \
-	)
-
-#define PVM_ID_AA64ISAR2_ALLOW (\
-	ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_ATS1A) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_MOPS) | \
-	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3), ID_AA64ISAR2_EL1_APA3_PAuth) \
-	)
-
-bool kvm_handle_pvm_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code);
-bool kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu, u64 *exit_code);
-void kvm_init_pvm_id_regs(struct kvm_vcpu *vcpu);
-int kvm_check_pvm_sysreg_table(void);
-
-#endif /* __ARM64_KVM_FIXED_CONFIG_H__ */
diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index 24a9a8330d19..6ff7cbc33000 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -70,4 +70,9 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
 					 unsigned int vcpu_idx);
 void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu);
 
+bool kvm_handle_pvm_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code);
+bool kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu, u64 *exit_code);
+void kvm_init_pvm_id_regs(struct kvm_vcpu *vcpu);
+int kvm_check_pvm_sysreg_table(void);
+
 #endif /* __ARM64_KVM_NVHE_PKVM_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 3b4ea97148b9..ffa500b500f2 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -9,7 +9,6 @@
 #include
-#include <nvhe/fixed_config.h>
 #include
 #include
 #include
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index cbdd18cd3f98..31bd729ea45c 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -12,7 +12,6 @@
 #include
 #include
-#include <nvhe/fixed_config.h>
 #include
 #include
 #include
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index cc69106734ca..7786a83d0fa8 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -26,7 +26,6 @@
 #include
 #include
-#include <nvhe/fixed_config.h>
 #include
 
 /* Non-VHE specific context */
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index b6140590b569..2f2f1ca32b80 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -11,7 +11,7 @@
 #include
 
-#include <nvhe/fixed_config.h>
+#include <nvhe/pkvm.h>
 
 #include "../../sys_regs.h"

From patchwork Mon Dec 2 15:47:36 2024
From: Fuad Tabba <tabba@google.com>
Date: Mon, 2 Dec 2024 15:47:36 +0000
Subject: [PATCH v4 09/14] KVM: arm64: Remove redundant setting of HCR_EL2 trap bit
Message-ID: <20241202154742.3611749-10-tabba@google.com>
In-Reply-To: <20241202154742.3611749-1-tabba@google.com>

In hVHE mode, HCR_E2H should be set for both protected and
non-protected VMs. Since commit 23c9d58cb458 ("KVM: arm64: Fix
initializing traps in protected mode"), this has been fixed, and the
setting of the flag here is redundant.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/nvhe/pkvm.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index ffa500b500f2..cede527a59d4 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -57,9 +57,6 @@ static void pvm_init_traps_hcr(struct kvm_vcpu *vcpu)
 	/* No support for AArch32. */
 	val |= HCR_RW;
 
-	if (has_hvhe())
-		val |= HCR_E2H;
-
 	/*
	 * Always trap:
	 * - Feature id registers: to control features exposed to guests

From patchwork Mon Dec 2 15:47:37 2024
From: Fuad Tabba <tabba@google.com>
Date: Mon, 2 Dec 2024 15:47:37 +0000
Subject: [PATCH v4 10/14] KVM: arm64: Calculate cptr_el2 traps on activating traps
Message-ID: <20241202154742.3611749-11-tabba@google.com>
In-Reply-To: <20241202154742.3611749-1-tabba@google.com>

Similar to VHE, calculate the value of cptr_el2 from scratch when
activating traps. This removes the need to store cptr_el2 in every
vcpu structure. Moreover, some traps, such as whether the guest owns
the FP registers, need to be set on every vcpu run.
Reported-by: James Clark <james.clark@linaro.org>
Fixes: 5294afdbf45a ("KVM: arm64: Exclude FP ownership from kvm_vcpu_arch")
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_host.h |  1 -
 arch/arm64/kvm/arm.c              |  1 -
 arch/arm64/kvm/hyp/nvhe/pkvm.c    | 42 -------------------------
 arch/arm64/kvm/hyp/nvhe/switch.c  | 51 +++++++++++++++++++------------
 4 files changed, 32 insertions(+), 63 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 230b0638f0c2..69cb88c9ce3e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -708,7 +708,6 @@ struct kvm_vcpu_arch {
 	u64 hcr_el2;
 	u64 hcrx_el2;
 	u64 mdcr_el2;
-	u64 cptr_el2;
 
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index b295218cdc24..8a3d02cf0a7a 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1546,7 +1546,6 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	}
 
 	vcpu_reset_hcr(vcpu);
-	vcpu->arch.cptr_el2 = kvm_get_reset_cptr_el2(vcpu);
 
 	/*
	 * Handle the "start in power-off" case.
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index cede527a59d4..c8ab3e59f4b1 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -83,44 +83,6 @@ static void pvm_init_traps_hcr(struct kvm_vcpu *vcpu)
 	vcpu->arch.hcr_el2 = val;
 }
 
-static void pvm_init_traps_cptr(struct kvm_vcpu *vcpu)
-{
-	struct kvm *kvm = vcpu->kvm;
-	u64 val = vcpu->arch.cptr_el2;
-
-	if (!has_hvhe()) {
-		val |= CPTR_NVHE_EL2_RES1;
-		val &= ~(CPTR_NVHE_EL2_RES0);
-	}
-
-	if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, IMP))
-		val |= CPTR_EL2_TAM;
-
-	/* SVE can be disabled by userspace even if supported. */
-	if (!vcpu_has_sve(vcpu)) {
-		if (has_hvhe())
-			val &= ~(CPACR_ELx_ZEN);
-		else
-			val |= CPTR_EL2_TZ;
-	}
-
-	/* No SME support in KVM. */
-	BUG_ON(kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP));
-	if (has_hvhe())
-		val &= ~(CPACR_ELx_SMEN);
-	else
-		val |= CPTR_EL2_TSM;
-
-	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceVer, IMP)) {
-		if (has_hvhe())
-			val |= CPACR_EL1_TTA;
-		else
-			val |= CPTR_EL2_TTA;
-	}
-
-	vcpu->arch.cptr_el2 = val;
-}
-
 static void pvm_init_traps_mdcr(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -191,7 +153,6 @@ static int pkvm_vcpu_init_traps(struct pkvm_hyp_vcpu *hyp_vcpu)
 	struct kvm_vcpu *vcpu = &hyp_vcpu->vcpu;
 	int ret;
 
-	vcpu->arch.cptr_el2 = kvm_get_reset_cptr_el2(vcpu);
 	vcpu->arch.mdcr_el2 = 0;
 
 	pkvm_vcpu_reset_hcr(vcpu);
@@ -204,7 +165,6 @@ static int pkvm_vcpu_init_traps(struct pkvm_hyp_vcpu *hyp_vcpu)
 		return ret;
 
 	pvm_init_traps_hcr(vcpu);
-	pvm_init_traps_cptr(vcpu);
 	pvm_init_traps_mdcr(vcpu);
 
 	return 0;
@@ -644,8 +604,6 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
 		return ret;
 	}
 
-	hyp_vcpu->vcpu.arch.cptr_el2 = kvm_get_reset_cptr_el2(&hyp_vcpu->vcpu);
-
 	return 0;
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 7786a83d0fa8..0ebf84a9f9e2 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -35,33 +35,46 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 
 extern void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc);
 
-static void __activate_traps(struct kvm_vcpu *vcpu)
+static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
 {
-	u64 val;
+	u64 val = CPTR_EL2_TAM;	/* Same bit irrespective of E2H */
 
-	___activate_traps(vcpu, vcpu->arch.hcr_el2);
-	__activate_traps_common(vcpu);
+	if (has_hvhe()) {
+		val |= CPACR_ELx_TTA;
 
-	val = vcpu->arch.cptr_el2;
-	val |= CPTR_EL2_TAM;	/* Same bit irrespective of E2H */
-	val |= has_hvhe() ? CPACR_EL1_TTA : CPTR_EL2_TTA;
-	if (cpus_have_final_cap(ARM64_SME)) {
-		if (has_hvhe())
-			val &= ~CPACR_ELx_SMEN;
-		else
-			val |= CPTR_EL2_TSM;
-	}
+		if (guest_owns_fp_regs()) {
+			val |= CPACR_ELx_FPEN;
+			if (vcpu_has_sve(vcpu))
+				val |= CPACR_ELx_ZEN;
+		}
+	} else {
+		val |= CPTR_EL2_TTA | CPTR_NVHE_EL2_RES1;
 
-	if (!guest_owns_fp_regs()) {
-		if (has_hvhe())
-			val &= ~(CPACR_ELx_FPEN | CPACR_ELx_ZEN);
-		else
-			val |= CPTR_EL2_TFP | CPTR_EL2_TZ;
+		/*
+		 * Always trap SME since it's not supported in KVM.
+		 * TSM is RES1 if SME isn't implemented.
+		 */
+		val |= CPTR_EL2_TSM;
 
-		__activate_traps_fpsimd32(vcpu);
+		if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs())
+			val |= CPTR_EL2_TZ;
+
+		if (!guest_owns_fp_regs())
+			val |= CPTR_EL2_TFP;
 	}
 
+	if (!guest_owns_fp_regs())
+		__activate_traps_fpsimd32(vcpu);
+
 	kvm_write_cptr_el2(val);
+}
+
+static void __activate_traps(struct kvm_vcpu *vcpu)
+{
+	___activate_traps(vcpu, vcpu->arch.hcr_el2);
+	__activate_traps_common(vcpu);
+	__activate_cptr_traps(vcpu);
+
 	write_sysreg(__this_cpu_read(kvm_hyp_vector), vbar_el2);
 
 	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {

From patchwork Mon Dec 2 15:47:38 2024
Subject:Message-ID:References:Mime-Version:In-Reply-To:Date:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=C3AEmgqEm+cvkyX+Y7gfd4xBQC+JlaYoJMIK/6vkrhU=; b=nHzOTa2+tm4bZGdqVukKNfhpJW 5q9YJ3XzfNqhiuF/NmaXRsVoKvIj3MdL8WKryUFTQzzutvlRUuTHHkV9rZVWVO3O3nB9pkiIycqL3 kNYkmIWMiWaJREh6FTwzUUbmrCjuiKfwS4VyVdvARdo96dtf9t/Gn4NW905Ik4kdfKyeM2aTDFH9H OYgwUMPiRByl4Ed9vWLqdwK0WwzSYHNJWS6EwYNj5HrGwmDq95x+TzQDTDaecs9AQpEby4trgFF2j pMVeBqVM6w7AWcomHyzXH6bK7JA64DL2MZfTO5D5E+4zy1ow5oyB4hxwJcAGOco7rq0o88SxW+NF1 Yr/u504Q==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tI8vt-00000006mH4-37PD; Mon, 02 Dec 2024 16:05:57 +0000 Received: from mail-wr1-x449.google.com ([2a00:1450:4864:20::449]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tI8ee-00000006iYK-1u3r for linux-arm-kernel@lists.infradead.org; Mon, 02 Dec 2024 15:48:09 +0000 Received: by mail-wr1-x449.google.com with SMTP id ffacd0b85a97d-385d7d51ac8so1885165f8f.1 for ; Mon, 02 Dec 2024 07:48:07 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1733154486; x=1733759286; darn=lists.infradead.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=C3AEmgqEm+cvkyX+Y7gfd4xBQC+JlaYoJMIK/6vkrhU=; b=UPQSZoV7C5RbJpvPaE7lEI2g93Gkfi1R9n6pwv8GcfhNCTLYfN3OIooiOmXHHNYnYF BNwylGFo7+qeuKJbz1LHUdY9sRt11Lj1LgSzW4RvN8ozVrXKsOffQUi4ajq+mRCyYArK O+x/cjl7Jk9J3POMWVKFQueTxe131mD2Wt+Ze1FMXmLtG95RxuKmp5uZbOiFTuS08MiE Zc+HioTq9cAgvwe9k2uoQayDv6C1sgoXu+fYndtKtsKioJBzlur3s8tCe7vtnglyK514 JpnZrRmoTCDjdG1dghiNrVskieFWI7qmM/u3AEyt/MJwi5OSpfH9D3XznwDZzx+I/YVB 1m8A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1733154486; x=1733759286; 
h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=C3AEmgqEm+cvkyX+Y7gfd4xBQC+JlaYoJMIK/6vkrhU=; b=nejzPyU7PEeidAj1T2MumwlBKSvmc9WOKrfWxr3oAQJuMrU/AnwZ/9trsPjXfo3unj d0p/8f/qYSbVRFJin/yM6mcIshCUZcEqfftBNlZ9xoog8PcvhSr4rDdhJVc0YUFgJ+yX kMFXKnuWY9ph1IVQ4bgkJpLGbuRKAv5Ii708+I+f8mHOkKAgL/AQGxkHid4RbLvDYN8y jXpt9+vlEGBGH8OiNdPuRWd1kbNb7HYSOs1oMcI1ooiEbMriLmBiv1Bfo3/o18ibxKO1 BQhEA7vxaJ83kUyMBBwipjrUryVdOe0ZM5H5dLLuhpxjdGk2jYCVylzeR8VckTnDBUhO bv5Q== X-Forwarded-Encrypted: i=1; AJvYcCVZumjpvgqHwEVtcvx4CnNYFlfxRstS3gnNL6lTHxDXBPg4S5fgzTBU1WvKNhodo8W1YxdM3U1f4r3b7a71F8Js@lists.infradead.org X-Gm-Message-State: AOJu0Yy9eL0Oj5RBeW2hHP3QBY+GxgWSBaSTeOIarS2MtxjFtvk6Llhg 86xOLxHm1evogcdXPipQYf6ebJLrBDFviDTGJ8y8fm23nUcqMBastllmJRXSsrqniN822O93kQ= = X-Google-Smtp-Source: AGHT+IGhSynrcs4SqVVKs22wLLKLNJi/dXD0I1g5U0lYlcBLFc8w5xASAAEhBPJ+vcKS1qXzNPJOrCrAwQ== X-Received: from wmjy25.prod.google.com ([2002:a7b:cd99:0:b0:431:1c66:db91]) (user=tabba job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6000:4617:b0:385:e16d:f89 with SMTP id ffacd0b85a97d-385e16d1024mr11521621f8f.34.1733154486245; Mon, 02 Dec 2024 07:48:06 -0800 (PST) Date: Mon, 2 Dec 2024 15:47:38 +0000 In-Reply-To: <20241202154742.3611749-1-tabba@google.com> Mime-Version: 1.0 References: <20241202154742.3611749-1-tabba@google.com> X-Mailer: git-send-email 2.47.0.338.g60cca15819-goog Message-ID: <20241202154742.3611749-12-tabba@google.com> Subject: [PATCH v4 11/14] KVM: arm64: Refactor kvm_reset_cptr_el2() From: Fuad Tabba To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org, will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org, qperret@google.com, kristina.martsenko@arm.com, tabba@google.com X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: 
sfid-20241202_074808_488928_83FBFFF1 X-CRM114-Status: GOOD ( 11.34 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Fold kvm_get_reset_cptr_el2() into kvm_reset_cptr_el2(), since it is its only caller. Add a comment to clarify that this function is meant for the host value of cptr_el2. No functional change intended. Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_emulate.h | 10 ++-------- 1 file changed, 2 insertions(+), 8 deletions(-) diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index cf811009a33c..7b3dc52248ce 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -619,7 +619,8 @@ static __always_inline void kvm_write_cptr_el2(u64 val) write_sysreg(val, cptr_el2); } -static __always_inline u64 kvm_get_reset_cptr_el2(struct kvm_vcpu *vcpu) +/* Resets the value of cptr_el2 when returning to the host. 
*/ +static __always_inline void kvm_reset_cptr_el2(struct kvm_vcpu *vcpu) { u64 val; @@ -643,13 +644,6 @@ static __always_inline u64 kvm_get_reset_cptr_el2(struct kvm_vcpu *vcpu) val &= ~CPTR_EL2_TSM; } - return val; -} - -static __always_inline void kvm_reset_cptr_el2(struct kvm_vcpu *vcpu) -{ - u64 val = kvm_get_reset_cptr_el2(vcpu); - kvm_write_cptr_el2(val); } From patchwork Mon Dec 2 15:47:39 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 13890983 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8107ED7832F for ; Mon, 2 Dec 2024 16:07:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Type:Cc:To:From: Subject:Message-ID:References:Mime-Version:In-Reply-To:Date:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=rGNx4KryUAWkUrEyK556nEdWmNkAb7CXLB+nZikEwWs=; b=ghKIQ5N5+003gcRklP3IUkwzf2 wWVdvqU9yWLFom8yQT8o+j2lxGDqkOzYd2G2bGlM7vrmXab55o4yPhGdjTu6kB3zJUfgiAVkQLap2 T7W3PzZHP8fX21WeuVECo2dcVjLWWoDUk53pZu3XqJgBYatlI+qfFxSVRjkSq1F0ZmvWc42seUpgD En+3HXRJ+4IcIWh0kE/qqJoMz2WJ7W9O8HpGXYX1sgggmFbFIt9v3hfmyINCNQ//zhT+CCjdFm6Pl 4OyRMPIAsQrGhgihfTUu3iwRzrehbG4nDWyGz6BuLCE1MBK2Dtc1p2T8sjAz+3dtaqgZ4K4egOcem l8C4KW/Q==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tI8wr-00000006mRv-1TBt; Mon, 02 Dec 2024 16:06:57 +0000 Received: 
Date: Mon, 2 Dec 2024 15:47:39 +0000
In-Reply-To: <20241202154742.3611749-1-tabba@google.com>
References: <20241202154742.3611749-1-tabba@google.com>
Message-ID: <20241202154742.3611749-13-tabba@google.com>
Subject: [PATCH v4 12/14] KVM: arm64: Fix the value of the CPTR_EL2 RES1 bitmask for nVHE
From: Fuad Tabba <tabba@google.com>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org,
    will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
    yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org,
    qperret@google.com, kristina.martsenko@arm.com, tabba@google.com

Since the introduction of SME, bit 12 in CPTR_EL2 (nVHE) is TSM for
trapping SME, instead of RES1, as per ARM ARM DDI 0487K.a, section
D23.2.34. Fix the value of CPTR_NVHE_EL2_RES1 to reflect that, and
adjust the code that relies on it accordingly.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_arm.h     | 2 +-
 arch/arm64/include/asm/kvm_emulate.h | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 3e0f0de1d2da..24e4ac7c50f2 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -300,7 +300,7 @@
 #define CPTR_EL2_TSM	(1 << 12)
 #define CPTR_EL2_TFP	(1 << CPTR_EL2_TFP_SHIFT)
 #define CPTR_EL2_TZ	(1 << 8)
-#define CPTR_NVHE_EL2_RES1	0x000032ff /* known RES1 bits in CPTR_EL2 (nVHE) */
+#define CPTR_NVHE_EL2_RES1	(BIT(13) | BIT(9) | GENMASK(7, 0))
 #define CPTR_NVHE_EL2_RES0	(GENMASK(63, 32) |	\
 				 GENMASK(29, 21) |	\
 				 GENMASK(19, 14) |	\
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 7b3dc52248ce..6602a4c091ac 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -640,8 +640,8 @@ static __always_inline void kvm_reset_cptr_el2(struct kvm_vcpu *vcpu)
 		if (vcpu_has_sve(vcpu) && guest_owns_fp_regs())
 			val |= CPTR_EL2_TZ;
 
-		if (cpus_have_final_cap(ARM64_SME))
-			val &= ~CPTR_EL2_TSM;
+		if (!cpus_have_final_cap(ARM64_SME))
+			val |= CPTR_EL2_TSM;
 	}
 
 	kvm_write_cptr_el2(val);

From patchwork Mon Dec 2 15:47:40 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13890984
Date: Mon, 2 Dec 2024 15:47:40 +0000
In-Reply-To: <20241202154742.3611749-1-tabba@google.com>
References: <20241202154742.3611749-1-tabba@google.com>
Message-ID: <20241202154742.3611749-14-tabba@google.com>
Subject: [PATCH v4 13/14] KVM: arm64: Remove PtrAuth guest vcpu flag
From: Fuad Tabba <tabba@google.com>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org,
    will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
    yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org,
    qperret@google.com, kristina.martsenko@arm.com, tabba@google.com
The vcpu flag GUEST_HAS_PTRAUTH is always associated with the vcpu
PtrAuth features, which are defined per vm rather than per vcpu.
Remove the flag, and replace it with checks for the features instead.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_emulate.h |  5 -----
 arch/arm64/include/asm/kvm_host.h    |  7 +++----
 arch/arm64/kvm/hyp/nvhe/pkvm.c       | 13 -------------
 arch/arm64/kvm/reset.c               |  4 ----
 4 files changed, 3 insertions(+), 26 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 6602a4c091ac..406e99a452bf 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -691,9 +691,4 @@ static inline bool guest_hyp_sve_traps_enabled(const struct kvm_vcpu *vcpu)
 {
 	return __guest_hyp_cptr_xen_trap_enabled(vcpu, ZEN);
 }
-
-static inline void kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
-{
-	vcpu_set_flag(vcpu, GUEST_HAS_PTRAUTH);
-}
 #endif /* __ARM64_KVM_EMULATE_H__ */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 69cb88c9ce3e..e6be8fe6627a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -866,10 +866,8 @@ struct kvm_vcpu_arch {
 #define GUEST_HAS_SVE		__vcpu_single_flag(cflags, BIT(0))
 /* SVE config completed */
 #define VCPU_SVE_FINALIZED	__vcpu_single_flag(cflags, BIT(1))
-/* PTRAUTH exposed to guest */
-#define GUEST_HAS_PTRAUTH	__vcpu_single_flag(cflags, BIT(2))
 /* KVM_ARM_VCPU_INIT completed */
-#define VCPU_INITIALIZED	__vcpu_single_flag(cflags, BIT(3))
+#define VCPU_INITIALIZED	__vcpu_single_flag(cflags, BIT(2))
 
 /* Exception pending */
 #define PENDING_EXCEPTION	__vcpu_single_flag(iflags, BIT(0))
@@ -965,7 +963,8 @@ struct kvm_vcpu_arch {
 #define vcpu_has_ptrauth(vcpu)					\
 	((cpus_have_final_cap(ARM64_HAS_ADDRESS_AUTH) ||	\
 	  cpus_have_final_cap(ARM64_HAS_GENERIC_AUTH)) &&	\
-	 vcpu_get_flag(vcpu, GUEST_HAS_PTRAUTH))
+	 (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_ADDRESS) ||	\
+	  vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_GENERIC)))
 #else
 #define vcpu_has_ptrauth(vcpu)		false
 #endif
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index c8ab3e59f4b1..dfd031acde31 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -278,18 +278,6 @@ static void pkvm_init_features_from_host(struct pkvm_hyp_vm *hyp_vm, const struc
 			      allowed_features, KVM_VCPU_MAX_FEATURES);
 }
 
-static void pkvm_vcpu_init_ptrauth(struct pkvm_hyp_vcpu *hyp_vcpu)
-{
-	struct kvm_vcpu *vcpu = &hyp_vcpu->vcpu;
-
-	if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_ADDRESS) ||
-	    vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_GENERIC)) {
-		kvm_vcpu_enable_ptrauth(vcpu);
-	} else {
-		vcpu_clear_flag(&hyp_vcpu->vcpu, GUEST_HAS_PTRAUTH);
-	}
-}
-
 static void unpin_host_vcpu(struct kvm_vcpu *host_vcpu)
 {
 	if (host_vcpu)
@@ -359,7 +347,6 @@ static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu,
 		goto done;
 
 	pkvm_vcpu_init_sve(hyp_vcpu, host_vcpu);
-	pkvm_vcpu_init_ptrauth(hyp_vcpu);
 done:
 	if (ret)
 		unpin_host_vcpu(host_vcpu);
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 470524b31951..1cfab6a5d8a5 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -211,10 +211,6 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 			kvm_vcpu_reset_sve(vcpu);
 	}
 
-	if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_ADDRESS) ||
-	    vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_GENERIC))
-		kvm_vcpu_enable_ptrauth(vcpu);
-
 	if (vcpu_el1_is_32bit(vcpu))
 		pstate = VCPU_RESET_PSTATE_SVC;
 	else if (vcpu_has_nv(vcpu))
From patchwork Mon Dec 2 15:47:41 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13890985
Date: Mon, 2 Dec 2024 15:47:41 +0000
In-Reply-To: <20241202154742.3611749-1-tabba@google.com>
References: <20241202154742.3611749-1-tabba@google.com>
Message-ID: <20241202154742.3611749-15-tabba@google.com>
Subject: [PATCH v4 14/14] KVM: arm64: Convert the SVE guest vcpu flag to a vm flag
From: Fuad Tabba <tabba@google.com>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org,
    will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
    yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org,
    qperret@google.com, kristina.martsenko@arm.com, tabba@google.com

The vcpu flag GUEST_HAS_SVE is per-vcpu, but it is based on what is
now a per-vm feature. Make the flag per-vm.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_emulate.h | 12 +++++++++---
 arch/arm64/include/asm/kvm_host.h    | 18 ++++++++++++------
 arch/arm64/kvm/hyp/nvhe/pkvm.c       | 11 +++++++----
 arch/arm64/kvm/reset.c               |  2 +-
 4 files changed, 29 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 406e99a452bf..2d91fb88298a 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -620,7 +620,7 @@ static __always_inline void kvm_write_cptr_el2(u64 val)
 }
 
 /* Resets the value of cptr_el2 when returning to the host. */
-static __always_inline void kvm_reset_cptr_el2(struct kvm_vcpu *vcpu)
+static __always_inline void __kvm_reset_cptr_el2(struct kvm *kvm)
 {
 	u64 val;
 
@@ -631,14 +631,14 @@ static __always_inline void kvm_reset_cptr_el2(struct kvm_vcpu *vcpu)
 	} else if (has_hvhe()) {
 		val = CPACR_ELx_FPEN;
 
-		if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs())
+		if (!kvm_has_sve(kvm) || !guest_owns_fp_regs())
 			val |= CPACR_ELx_ZEN;
 
 		if (cpus_have_final_cap(ARM64_SME))
 			val |= CPACR_ELx_SMEN;
 	} else {
 		val = CPTR_NVHE_EL2_RES1;
 
-		if (vcpu_has_sve(vcpu) && guest_owns_fp_regs())
+		if (kvm_has_sve(kvm) && guest_owns_fp_regs())
 			val |= CPTR_EL2_TZ;
 
 		if (!cpus_have_final_cap(ARM64_SME))
 			val |= CPTR_EL2_TSM;
@@ -647,6 +647,12 @@ static __always_inline void kvm_reset_cptr_el2(struct kvm_vcpu *vcpu)
 	kvm_write_cptr_el2(val);
 }
 
+#ifdef __KVM_NVHE_HYPERVISOR__
+#define kvm_reset_cptr_el2(v)	__kvm_reset_cptr_el2(kern_hyp_va((v)->kvm))
+#else
+#define kvm_reset_cptr_el2(v)	__kvm_reset_cptr_el2((v)->kvm)
+#endif
+
 /*
  * Returns a 'sanitised' view of CPTR_EL2, translating from nVHE to the VHE
  * format if E2H isn't set.
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e6be8fe6627a..c834b6768247 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -331,6 +331,8 @@ struct kvm_arch {
 #define KVM_ARCH_FLAG_ID_REGS_INITIALIZED		7
 	/* Fine-Grained UNDEF initialised */
 #define KVM_ARCH_FLAG_FGU_INITIALIZED			8
+	/* SVE exposed to guest */
+#define KVM_ARCH_FLAG_GUEST_HAS_SVE			9
 	unsigned long flags;
 
 	/* VM-wide vCPU feature set */
@@ -862,12 +864,10 @@ struct kvm_vcpu_arch {
 #define vcpu_set_flag(v, ...)	__vcpu_set_flag((v), __VA_ARGS__)
 #define vcpu_clear_flag(v, ...)	__vcpu_clear_flag((v), __VA_ARGS__)
 
-/* SVE exposed to guest */
-#define GUEST_HAS_SVE		__vcpu_single_flag(cflags, BIT(0))
+/* KVM_ARM_VCPU_INIT completed */
+#define VCPU_INITIALIZED	__vcpu_single_flag(cflags, BIT(0))
 /* SVE config completed */
 #define VCPU_SVE_FINALIZED	__vcpu_single_flag(cflags, BIT(1))
-/* KVM_ARM_VCPU_INIT completed */
-#define VCPU_INITIALIZED	__vcpu_single_flag(cflags, BIT(2))
 
 /* Exception pending */
 #define PENDING_EXCEPTION	__vcpu_single_flag(iflags, BIT(0))
@@ -956,8 +956,14 @@ struct kvm_vcpu_arch {
 				 KVM_GUESTDBG_USE_HW | \
 				 KVM_GUESTDBG_SINGLESTEP)
 
-#define vcpu_has_sve(vcpu)	(system_supports_sve() && \
-				 vcpu_get_flag(vcpu, GUEST_HAS_SVE))
+#define kvm_has_sve(kvm)	(system_supports_sve() && \
+				 test_bit(KVM_ARCH_FLAG_GUEST_HAS_SVE, &(kvm)->arch.flags))
+
+#ifdef __KVM_NVHE_HYPERVISOR__
+#define vcpu_has_sve(vcpu)	kvm_has_sve(kern_hyp_va((vcpu)->kvm))
+#else
+#define vcpu_has_sve(vcpu)	kvm_has_sve((vcpu)->kvm)
+#endif
 
 #ifdef CONFIG_ARM64_PTR_AUTH
 #define vcpu_has_ptrauth(vcpu)					\
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index dfd031acde31..8a80e494f20c 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -248,10 +248,13 @@ void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 static void pkvm_init_features_from_host(struct pkvm_hyp_vm *hyp_vm, const struct kvm *host_kvm)
 {
 	struct kvm *kvm = &hyp_vm->kvm;
+	unsigned long host_arch_flags = READ_ONCE(host_kvm->arch.flags);
 	DECLARE_BITMAP(allowed_features, KVM_VCPU_MAX_FEATURES);
 
 	/* No restrictions for non-protected VMs. */
 	if (!kvm_vm_is_protected(kvm)) {
+		hyp_vm->kvm.arch.flags = host_arch_flags;
+
 		bitmap_copy(kvm->arch.vcpu_features,
 			    host_kvm->arch.vcpu_features,
 			    KVM_VCPU_MAX_FEATURES);
@@ -271,8 +274,10 @@ static void pkvm_init_features_from_host(struct pkvm_hyp_vm *hyp_vm, const struc
 	if (kvm_pvm_ext_allowed(KVM_CAP_ARM_PTRAUTH_GENERIC))
 		set_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, allowed_features);
 
-	if (kvm_pvm_ext_allowed(KVM_CAP_ARM_SVE))
+	if (kvm_pvm_ext_allowed(KVM_CAP_ARM_SVE)) {
 		set_bit(KVM_ARM_VCPU_SVE, allowed_features);
+		kvm->arch.flags |= host_arch_flags & BIT(KVM_ARCH_FLAG_GUEST_HAS_SVE);
+	}
 
 	bitmap_and(kvm->arch.vcpu_features, host_kvm->arch.vcpu_features,
 		   allowed_features, KVM_VCPU_MAX_FEATURES);
@@ -308,10 +313,8 @@ static void pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *hyp_vcpu, struct kvm_vcpu *
 {
 	struct kvm_vcpu *vcpu = &hyp_vcpu->vcpu;
 
-	if (!vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) {
-		vcpu_clear_flag(vcpu, GUEST_HAS_SVE);
+	if (!vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE))
 		vcpu_clear_flag(vcpu, VCPU_SVE_FINALIZED);
-	}
 }
 
 static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu,
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 1cfab6a5d8a5..803e11b0dc8f 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -85,7 +85,7 @@ static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu)
 	 * KVM_REG_ARM64_SVE_VLS. Allocation is deferred until
 	 * kvm_arm_vcpu_finalize(), which freezes the configuration.
 	 */
-	vcpu_set_flag(vcpu, GUEST_HAS_SVE);
+	set_bit(KVM_ARCH_FLAG_GUEST_HAS_SVE, &vcpu->kvm->arch.flags);
 }
 
 /*