From patchwork Fri Nov 22 11:06:11 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13883081
Date: Fri, 22 Nov 2024 11:06:11 +0000
In-Reply-To: <20241122110622.3010118-1-tabba@google.com>
References: <20241122110622.3010118-1-tabba@google.com>
Message-ID: <20241122110622.3010118-2-tabba@google.com>
Subject: [PATCH v2 01/12] KVM: arm64: Consolidate allowed and restricted VM feature checks
From: Fuad Tabba
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org, will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org, qperret@google.com, tabba@google.com

The definitions for features that are allowed, and for features that are allowed with restrictions, for protected guests are both based on feature registers, but they were defined and checked separately, even though they are handled in the same way. This could result in missing checks for certain features, e.g., pointer authentication, causing spurious traps for allowed features. Consolidate the definitions into one, and use that new definition to construct the guest view of the feature registers, for consistency.

Fixes: 6c30bfb18d0b ("KVM: arm64: Add handlers for protected VM System Registers")
Reported-by: Mostafa Saleh
Signed-off-by: Fuad Tabba
---

Note: This patch ends up being a no-op, since none of the changes in it survive the series. It is included because it makes the rest of the series flow more smoothly.
---
 .../arm64/kvm/hyp/include/nvhe/fixed_config.h | 55 +++++++------------
 arch/arm64/kvm/hyp/nvhe/pkvm.c                |  8 +--
 arch/arm64/kvm/hyp/nvhe/sys_regs.c            |  6 +-
 3 files changed, 26 insertions(+), 43 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h b/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
index f957890c7e38..d1e59b88ff66 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
@@ -14,11 +14,8 @@
  * guest virtual machines, depending on the mode KVM is running in and on the
  * type of guest that is running.
  *
- * The ALLOW masks represent a bitmask of feature fields that are allowed
- * without any restrictions as long as they are supported by the system.
- *
- * The RESTRICT_UNSIGNED masks, if present, represent unsigned fields for
- * features that are restricted to support at most the specified feature.
+ * Each field in the masks represents the highest supported *unsigned* value for
+ * the feature, if supported by the system.
  *
  * If a feature field is not present in either, than it is not supported.
  *
@@ -34,16 +31,7 @@
  * - Floating-point and Advanced SIMD
  * - Data Independent Timing
  * - Spectre/Meltdown Mitigation
- */
-#define PVM_ID_AA64PFR0_ALLOW (\
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_FP) | \
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AdvSIMD) | \
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_DIT) | \
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | \
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3) \
-	)
-
-/*
+ *
  * Restrict to the following *unsigned* features for protected VMs:
  * - AArch64 guests only (no support for AArch32 guests):
  *	AArch32 adds complexity in trap handling, emulation, condition codes,
@@ -51,7 +39,12 @@
  * - RAS (v1)
  *	Supported by KVM
  */
-#define PVM_ID_AA64PFR0_RESTRICT_UNSIGNED (\
+#define PVM_ID_AA64PFR0_ALLOW (\
+	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_FP) | \
+	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AdvSIMD) | \
+	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_DIT) | \
+	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | \
+	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3) | \
 	SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL0, IMP) | \
 	SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL1, IMP) | \
 	SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL2, IMP) | \
@@ -77,20 +70,16 @@
  * - Distinction between Secure and Non-secure Memory
  * - Mixed-endian at EL0 only
  * - Non-context synchronizing exception entry and exit
+ *
+ * Restrict to the following *unsigned* features for protected VMs:
+ * - 40-bit IPA
+ * - 16-bit ASID
  */
 #define PVM_ID_AA64MMFR0_ALLOW (\
 	ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_BIGEND) | \
 	ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_SNSMEM) | \
 	ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_BIGENDEL0) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_EXS) \
-	)
-
-/*
- * Restrict to the following *unsigned* features for protected VMs:
- * - 40-bit IPA
- * - 16-bit ASID
- */
-#define PVM_ID_AA64MMFR0_RESTRICT_UNSIGNED (\
+	ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_EXS) | \
 	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_PARANGE), ID_AA64MMFR0_EL1_PARANGE_40) | \
 	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_ASIDBITS), ID_AA64MMFR0_EL1_ASIDBITS_16) \
 	)
@@ -185,15 +174,6 @@
 	)
 
 /* Restrict pointer authentication to the basic version. */
-#define PVM_ID_AA64ISAR1_RESTRICT_UNSIGNED (\
-	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA), ID_AA64ISAR1_EL1_APA_PAuth) | \
-	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API), ID_AA64ISAR1_EL1_API_PAuth) \
-	)
-
-#define PVM_ID_AA64ISAR2_RESTRICT_UNSIGNED (\
-	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3), ID_AA64ISAR2_EL1_APA3_PAuth) \
-	)
-
 #define PVM_ID_AA64ISAR1_ALLOW (\
 	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_DPB) | \
 	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_JSCVT) | \
@@ -206,13 +186,16 @@
 	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_SPECRES) | \
 	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_BF16) | \
 	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_DGH) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_I8MM) \
+	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_I8MM) | \
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA), ID_AA64ISAR1_EL1_APA_PAuth) | \
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API), ID_AA64ISAR1_EL1_API_PAuth) \
 	)
 
 #define PVM_ID_AA64ISAR2_ALLOW (\
 	ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_ATS1A)| \
 	ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_MOPS) \
+	ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_MOPS) | \
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3), ID_AA64ISAR2_EL1_APA3_PAuth) \
 	)
 
 u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 01616c39a810..76a70fee7647 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -36,9 +36,9 @@ static void pvm_init_traps_aa64pfr0(struct kvm_vcpu *vcpu)
 
 	/* Protected KVM does not support AArch32 guests. */
 	BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0),
-		PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) != ID_AA64PFR0_EL1_EL0_IMP);
+		PVM_ID_AA64PFR0_ALLOW) != ID_AA64PFR0_EL1_EL0_IMP);
 	BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL1),
-		PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) != ID_AA64PFR0_EL1_EL1_IMP);
+		PVM_ID_AA64PFR0_ALLOW) != ID_AA64PFR0_EL1_EL1_IMP);
 
 	/*
 	 * Linux guests assume support for floating-point and Advanced SIMD. Do
@@ -362,8 +362,8 @@ static void pkvm_init_features_from_host(struct pkvm_hyp_vm *hyp_vm, const struc
 	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE), PVM_ID_AA64PFR0_ALLOW))
 		set_bit(KVM_ARM_VCPU_SVE, allowed_features);
 
-	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API), PVM_ID_AA64ISAR1_RESTRICT_UNSIGNED) &&
-	    FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA), PVM_ID_AA64ISAR1_RESTRICT_UNSIGNED))
+	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API), PVM_ID_AA64ISAR1_ALLOW) &&
+	    FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA), PVM_ID_AA64ISAR1_ALLOW))
 		set_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, allowed_features);
 
 	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI), PVM_ID_AA64ISAR1_ALLOW) &&
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index 2860548d4250..59fb2f056177 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -89,7 +89,7 @@ static u64 get_pvm_id_aa64pfr0(const struct kvm_vcpu *vcpu)
 	u64 allow_mask = PVM_ID_AA64PFR0_ALLOW;
 
 	set_mask |= get_restricted_features_unsigned(id_aa64pfr0_el1_sys_val,
-		PVM_ID_AA64PFR0_RESTRICT_UNSIGNED);
+		PVM_ID_AA64PFR0_ALLOW);
 
 	return (id_aa64pfr0_el1_sys_val & allow_mask) | set_mask;
 }
@@ -189,7 +189,7 @@ static u64 get_pvm_id_aa64mmfr0(const struct kvm_vcpu *vcpu)
 	u64 set_mask;
 
 	set_mask = get_restricted_features_unsigned(id_aa64mmfr0_el1_sys_val,
-		PVM_ID_AA64MMFR0_RESTRICT_UNSIGNED);
+		PVM_ID_AA64MMFR0_ALLOW);
 
 	return (id_aa64mmfr0_el1_sys_val & PVM_ID_AA64MMFR0_ALLOW) | set_mask;
 }
@@ -276,7 +276,7 @@ static bool pvm_access_id_aarch32(struct kvm_vcpu *vcpu,
 	 * of AArch32 feature id registers.
	 */
 	BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL1),
-		     PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) > ID_AA64PFR0_EL1_EL1_IMP);
+		     PVM_ID_AA64PFR0_ALLOW) > ID_AA64PFR0_EL1_EL1_IMP);
 
 	return pvm_access_raz_wi(vcpu, p, r);
 }

From patchwork Fri Nov 22 11:06:12 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13883083
Date: Fri, 22 Nov 2024 11:06:12 +0000
In-Reply-To: <20241122110622.3010118-1-tabba@google.com>
References: <20241122110622.3010118-1-tabba@google.com>
Message-ID: <20241122110622.3010118-3-tabba@google.com>
Subject: [PATCH v2 02/12] KVM: arm64: Group setting traps for protected VMs by control register
From: Fuad Tabba
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org, will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org, qperret@google.com, tabba@google.com

Group the setting of protected VM traps by control register rather than by feature id register, since some trap values (e.g., PAuth) depend on more than one feature id register.
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/nvhe/pkvm.c | 317 +++++++++++++++------------------
 1 file changed, 144 insertions(+), 173 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 76a70fee7647..1744574e79b2 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -23,233 +23,204 @@
 unsigned int kvm_arm_vmid_bits;
 unsigned int kvm_host_sve_max_vl;
 
-/*
- * Set trap register values based on features in ID_AA64PFR0.
- */
-static void pvm_init_traps_aa64pfr0(struct kvm_vcpu *vcpu)
+static void pkvm_vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 {
-	const u64 feature_ids = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR0_EL1);
-	u64 hcr_set = HCR_RW;
-	u64 hcr_clear = 0;
-	u64 cptr_set = 0;
-	u64 cptr_clear = 0;
-
-	/* Protected KVM does not support AArch32 guests. */
-	BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0),
-		PVM_ID_AA64PFR0_ALLOW) != ID_AA64PFR0_EL1_EL0_IMP);
-	BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL1),
-		PVM_ID_AA64PFR0_ALLOW) != ID_AA64PFR0_EL1_EL1_IMP);
-
-	/*
-	 * Linux guests assume support for floating-point and Advanced SIMD. Do
-	 * not change the trapping behavior for these from the KVM default.
-	 */
-	BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_FP),
-				PVM_ID_AA64PFR0_ALLOW));
-	BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AdvSIMD),
-				PVM_ID_AA64PFR0_ALLOW));
+	vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
 
 	if (has_hvhe())
-		hcr_set |= HCR_E2H;
+		vcpu->arch.hcr_el2 |= HCR_E2H;
 
-	/* Trap RAS unless all current versions are supported */
-	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_RAS), feature_ids) <
-	    ID_AA64PFR0_EL1_RAS_V1P1) {
-		hcr_set |= HCR_TERR | HCR_TEA;
-		hcr_clear |= HCR_FIEN;
+	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) {
+		/* route synchronous external abort exceptions to EL2 */
+		vcpu->arch.hcr_el2 |= HCR_TEA;
+		/* trap error record accesses */
+		vcpu->arch.hcr_el2 |= HCR_TERR;
 	}
 
-	/* Trap AMU */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU), feature_ids)) {
-		hcr_clear |= HCR_AMVOFFEN;
-		cptr_set |= CPTR_EL2_TAM;
-	}
+	if (cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
+		vcpu->arch.hcr_el2 |= HCR_FWB;
 
-	/* Trap SVE */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE), feature_ids)) {
-		if (has_hvhe())
-			cptr_clear |= CPACR_ELx_ZEN;
-		else
-			cptr_set |= CPTR_EL2_TZ;
-	}
+	if (cpus_have_final_cap(ARM64_HAS_EVT) &&
+	    !cpus_have_final_cap(ARM64_MISMATCHED_CACHE_TYPE))
+		vcpu->arch.hcr_el2 |= HCR_TID4;
+	else
+		vcpu->arch.hcr_el2 |= HCR_TID2;
 
-	vcpu->arch.hcr_el2 |= hcr_set;
-	vcpu->arch.hcr_el2 &= ~hcr_clear;
-	vcpu->arch.cptr_el2 |= cptr_set;
-	vcpu->arch.cptr_el2 &= ~cptr_clear;
+	if (vcpu_has_ptrauth(vcpu))
+		vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
 }
 
-/*
- * Set trap register values based on features in ID_AA64PFR1.
- */
-static void pvm_init_traps_aa64pfr1(struct kvm_vcpu *vcpu)
+static void pvm_init_traps_hcr(struct kvm_vcpu *vcpu)
 {
-	const u64 feature_ids = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR1_EL1);
-	u64 hcr_set = 0;
-	u64 hcr_clear = 0;
+	const u64 id_aa64pfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR0_EL1);
+	const u64 id_aa64pfr1 = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR1_EL1);
+	const u64 id_aa64mmfr1 = pvm_read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1);
+	u64 val = vcpu->arch.hcr_el2;
+
+	/* No support for AArch32. */
+	val |= HCR_RW;
+
+	if (has_hvhe())
+		val |= HCR_E2H;
+
+	/*
+	 * Always trap:
+	 * - Feature id registers: to control features exposed to guests
+	 * - Implementation-defined features
+	 */
+	val |= HCR_TACR | HCR_TIDCP | HCR_TID3 | HCR_TID1;
+
+	/* Trap RAS unless all current versions are supported */
+	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_RAS), id_aa64pfr0) <
+	    ID_AA64PFR0_EL1_RAS_V1P1) {
+		val |= HCR_TERR | HCR_TEA;
+		val &= ~(HCR_FIEN);
+	}
+
+	/* Trap AMU */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU), id_aa64pfr0))
+		val &= ~(HCR_AMVOFFEN);
 
 	/* Memory Tagging: Trap and Treat as Untagged if not supported. */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE), feature_ids)) {
-		hcr_set |= HCR_TID5;
-		hcr_clear |= HCR_DCT | HCR_ATA;
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE), id_aa64pfr1)) {
+		val |= HCR_TID5;
+		val &= ~(HCR_DCT | HCR_ATA);
 	}
 
-	vcpu->arch.hcr_el2 |= hcr_set;
-	vcpu->arch.hcr_el2 &= ~hcr_clear;
+	/* Trap LOR */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_LO), id_aa64mmfr1))
+		val |= HCR_TLOR;
+
+	vcpu->arch.hcr_el2 = val;
 }
 
-/*
- * Set trap register values based on features in ID_AA64DFR0.
- */
-static void pvm_init_traps_aa64dfr0(struct kvm_vcpu *vcpu)
+static void pvm_init_traps_cptr(struct kvm_vcpu *vcpu)
 {
-	const u64 feature_ids = pvm_read_id_reg(vcpu, SYS_ID_AA64DFR0_EL1);
-	u64 mdcr_set = 0;
-	u64 mdcr_clear = 0;
-	u64 cptr_set = 0;
+	const u64 id_aa64pfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR0_EL1);
+	const u64 id_aa64pfr1 = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR1_EL1);
+	const u64 id_aa64dfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64DFR0_EL1);
+	u64 val = vcpu->arch.cptr_el2;
 
-	/* Trap/constrain PMU */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), feature_ids)) {
-		mdcr_set |= MDCR_EL2_TPM | MDCR_EL2_TPMCR;
-		mdcr_clear |= MDCR_EL2_HPME | MDCR_EL2_MTPME |
-			      MDCR_EL2_HPMN_MASK;
+	if (!has_hvhe()) {
+		val |= CPTR_NVHE_EL2_RES1;
+		val &= ~(CPTR_NVHE_EL2_RES0);
 	}
 
-	/* Trap Debug */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), feature_ids))
-		mdcr_set |= MDCR_EL2_TDRA | MDCR_EL2_TDA | MDCR_EL2_TDE;
-
-	/* Trap OS Double Lock */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DoubleLock), feature_ids))
-		mdcr_set |= MDCR_EL2_TDOSA;
+	/* Trap AMU */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU), id_aa64pfr0))
+		val |= CPTR_EL2_TAM;
 
-	/* Trap SPE */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer), feature_ids)) {
-		mdcr_set |= MDCR_EL2_TPMS;
-		mdcr_clear |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
+	/* Trap SVE */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE), id_aa64pfr0)) {
+		if (has_hvhe())
+			val &= ~(CPACR_ELx_ZEN);
+		else
+			val |= CPTR_EL2_TZ;
 	}
 
-	/* Trap Trace Filter */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_TraceFilt), feature_ids))
-		mdcr_set |= MDCR_EL2_TTRF;
+	/* No SME support in KVM. */
+	BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME), id_aa64pfr1));
+	if (has_hvhe())
+		val &= ~(CPACR_ELx_SMEN);
+	else
+		val |= CPTR_EL2_TSM;
 
 	/* Trap Trace */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_TraceVer), feature_ids)) {
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_TraceVer), id_aa64dfr0)) {
 		if (has_hvhe())
-			cptr_set |= CPACR_EL1_TTA;
+			val |= CPACR_EL1_TTA;
 		else
-			cptr_set |= CPTR_EL2_TTA;
+			val |= CPTR_EL2_TTA;
 	}
 
-	/* Trap External Trace */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_ExtTrcBuff), feature_ids))
-		mdcr_clear |= MDCR_EL2_E2TB_MASK << MDCR_EL2_E2TB_SHIFT;
-
-	vcpu->arch.mdcr_el2 |= mdcr_set;
-	vcpu->arch.mdcr_el2 &= ~mdcr_clear;
-	vcpu->arch.cptr_el2 |= cptr_set;
-}
-
-/*
- * Set trap register values based on features in ID_AA64MMFR0.
- */
-static void pvm_init_traps_aa64mmfr0(struct kvm_vcpu *vcpu)
-{
-	const u64 feature_ids = pvm_read_id_reg(vcpu, SYS_ID_AA64MMFR0_EL1);
-	u64 mdcr_set = 0;
-
-	/* Trap Debug Communications Channel registers */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_FGT), feature_ids))
-		mdcr_set |= MDCR_EL2_TDCC;
-
-	vcpu->arch.mdcr_el2 |= mdcr_set;
+	vcpu->arch.cptr_el2 = val;
 }
 
-/*
- * Set trap register values based on features in ID_AA64MMFR1.
- */
-static void pvm_init_traps_aa64mmfr1(struct kvm_vcpu *vcpu)
-{
-	const u64 feature_ids = pvm_read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1);
-	u64 hcr_set = 0;
-
-	/* Trap LOR */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_LO), feature_ids))
-		hcr_set |= HCR_TLOR;
-
-	vcpu->arch.hcr_el2 |= hcr_set;
-}
-
-/*
- * Set baseline trap register values.
- */
-static void pvm_init_trap_regs(struct kvm_vcpu *vcpu)
+static void pvm_init_traps_mdcr(struct kvm_vcpu *vcpu)
 {
-	const u64 hcr_trap_feat_regs = HCR_TID3;
-	const u64 hcr_trap_impdef = HCR_TACR | HCR_TIDCP | HCR_TID1;
-
-	/*
-	 * Always trap:
-	 * - Feature id registers: to control features exposed to guests
-	 * - Implementation-defined features
-	 */
-	vcpu->arch.hcr_el2 |= hcr_trap_feat_regs | hcr_trap_impdef;
+	const u64 id_aa64dfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64DFR0_EL1);
+	const u64 id_aa64mmfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64MMFR0_EL1);
+	u64 val = vcpu->arch.mdcr_el2;
 
-	/* Clear res0 and set res1 bits to trap potential new features. */
-	vcpu->arch.hcr_el2 &= ~(HCR_RES0);
-	vcpu->arch.mdcr_el2 &= ~(MDCR_EL2_RES0);
-	if (!has_hvhe()) {
-		vcpu->arch.cptr_el2 |= CPTR_NVHE_EL2_RES1;
-		vcpu->arch.cptr_el2 &= ~(CPTR_NVHE_EL2_RES0);
+	/* Trap/constrain PMU */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), id_aa64dfr0)) {
+		val |= MDCR_EL2_TPM | MDCR_EL2_TPMCR;
+		val &= ~(MDCR_EL2_HPME | MDCR_EL2_MTPME | MDCR_EL2_HPMN_MASK);
 	}
-}
 
-static void pkvm_vcpu_reset_hcr(struct kvm_vcpu *vcpu)
-{
-	vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
+	/* Trap Debug */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), id_aa64dfr0))
+		val |= MDCR_EL2_TDRA | MDCR_EL2_TDA;
 
-	if (has_hvhe())
-		vcpu->arch.hcr_el2 |= HCR_E2H;
+	/* Trap OS Double Lock */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DoubleLock), id_aa64dfr0))
+		val |= MDCR_EL2_TDOSA;
 
-	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) {
-		/* route synchronous external abort exceptions to EL2 */
-		vcpu->arch.hcr_el2 |= HCR_TEA;
-		/* trap error record accesses */
-		vcpu->arch.hcr_el2 |= HCR_TERR;
+	/* Trap SPE */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer), id_aa64dfr0)) {
+		val |= MDCR_EL2_TPMS;
+		val &= ~(MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT);
 	}
 
-	if (cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
-		vcpu->arch.hcr_el2 |= HCR_FWB;
+	/* Trap Trace Filter */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_TraceFilt), id_aa64dfr0))
+		val |= MDCR_EL2_TTRF;
 
-	if (cpus_have_final_cap(ARM64_HAS_EVT) &&
-	    !cpus_have_final_cap(ARM64_MISMATCHED_CACHE_TYPE))
-		vcpu->arch.hcr_el2 |= HCR_TID4;
-	else
-		vcpu->arch.hcr_el2 |= HCR_TID2;
+	/* Trap External Trace */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_ExtTrcBuff), id_aa64dfr0))
+		val |= MDCR_EL2_E2TB_MASK << MDCR_EL2_E2TB_SHIFT;
 
-	if (vcpu_has_ptrauth(vcpu))
-		vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
+	/* Trap Debug Communications Channel registers */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_FGT), id_aa64mmfr0))
+		val |= MDCR_EL2_TDCC;
+
+	vcpu->arch.mdcr_el2 = val;
 }
 
 /*
  * Initialize trap register values in protected mode.
  */
-static void pkvm_vcpu_init_traps(struct kvm_vcpu *vcpu)
+static void pkvm_vcpu_init_traps(struct pkvm_hyp_vcpu *hyp_vcpu)
 {
+	struct kvm_vcpu *vcpu = &hyp_vcpu->vcpu;
+
 	vcpu->arch.cptr_el2 = kvm_get_reset_cptr_el2(vcpu);
 	vcpu->arch.mdcr_el2 = 0;
 
 	pkvm_vcpu_reset_hcr(vcpu);
 
-	if ((!vcpu_is_protected(vcpu)))
+	if ((!pkvm_hyp_vcpu_is_protected(hyp_vcpu)))
 		return;
 
-	pvm_init_trap_regs(vcpu);
-	pvm_init_traps_aa64pfr0(vcpu);
-	pvm_init_traps_aa64pfr1(vcpu);
-	pvm_init_traps_aa64dfr0(vcpu);
-	pvm_init_traps_aa64mmfr0(vcpu);
-	pvm_init_traps_aa64mmfr1(vcpu);
+	/*
+	 * PAuth is allowed if supported by the system and the vcpu.
+	 * Properly checking for PAuth requires checking various fields in
+	 * ID_AA64ISAR1_EL1 and ID_AA64ISAR2_EL1. The way that fixed config
+	 * is controlled now in pKVM does not easily allow that. This will
+	 * change later to follow the changes upstream wrt fixed configuration
+	 * and nested virt.
+	 */
+	BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI),
+				PVM_ID_AA64ISAR1_ALLOW));
+
+	/* Protected KVM does not support AArch32 guests. */
+	BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0),
+		PVM_ID_AA64PFR0_ALLOW) != ID_AA64PFR0_EL1_EL0_IMP);
+	BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL1),
+		PVM_ID_AA64PFR0_ALLOW) != ID_AA64PFR0_EL1_EL1_IMP);
+
+	/*
+	 * Linux guests assume support for floating-point and Advanced SIMD. Do
+	 * not change the trapping behavior for these from the KVM default.
+	 */
+	BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_FP),
+				PVM_ID_AA64PFR0_ALLOW));
+	BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AdvSIMD),
+				PVM_ID_AA64PFR0_ALLOW));
+
+	pvm_init_traps_hcr(vcpu);
+	pvm_init_traps_cptr(vcpu);
+	pvm_init_traps_mdcr(vcpu);
 }
 
 /*
@@ -448,7 +419,7 @@ static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu,
 	pkvm_vcpu_init_sve(hyp_vcpu, host_vcpu);
 	pkvm_vcpu_init_ptrauth(hyp_vcpu);
 
-	pkvm_vcpu_init_traps(&hyp_vcpu->vcpu);
+	pkvm_vcpu_init_traps(hyp_vcpu);
 
 done:
 	if (ret)
 		unpin_host_vcpu(host_vcpu);

From patchwork Fri Nov 22 11:06:13 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13883084
R3w5zUTlVrXRoeKpSgY5KRsJZS8RDa6J3LXtYSylJtIQAcRBEgUbAPUWwBG9HZKGxVTT GBCxaHNx5AWyWWVSYJaAfPDWd2YVAU8Fd0gtJH37GOLBE0nvRJgBwyo6FkmQvwXUM8Z7 QD+yZhhOnhlbX1LRIy+6Vzf21DK805vs52LmqJCytG6ZTIlmHIr70idPLuYHAhE40bMK Pg0c5QWr8+x3Sjd9A1o2+4OaoyuY9rqgY2JBSy9xC0Fv4yVjbBGcubofoBfqTLY86Z+B NiCA== X-Forwarded-Encrypted: i=1; AJvYcCUg6RlBrfupDOn/WT6gqLMCo6jYlWmPOE1K0zL06ZZdnNIOrTLGkwqATIsXXN/NfZGFIua/05dLJA1U9imu/yq1@lists.infradead.org X-Gm-Message-State: AOJu0YwSgjc1QDhgxWA3gjFleO2maa/X2dmpGOTx7e3bpV2zlMLMy+iM mLqHLYThVuMX3gs5UriH1qR20OCsnfIrt7SWt9RboYhIzlCVkUUxgpU+7rilPnxqSEb58ff4yQ= = X-Google-Smtp-Source: AGHT+IEGR5iaos9a96LoeCuq8AX/zU97pwwRDaEVQhWXXwWDVUaSc8yCOJEzvYtzi12QNeFZMM4NmTsDZg== X-Received: from fuad.c.googlers.com ([fda3:e722:ac3:cc00:28:9cb1:c0a8:1613]) (user=tabba job=sendgmr) by 2002:a05:600c:4594:b0:42c:acd5:c641 with SMTP id 5b1f17b1804b1-433cdb1836dmr141865e9.2.1732273591798; Fri, 22 Nov 2024 03:06:31 -0800 (PST) Date: Fri, 22 Nov 2024 11:06:13 +0000 In-Reply-To: <20241122110622.3010118-1-tabba@google.com> Mime-Version: 1.0 References: <20241122110622.3010118-1-tabba@google.com> X-Mailer: git-send-email 2.47.0.371.ga323438b13-goog Message-ID: <20241122110622.3010118-4-tabba@google.com> Subject: [PATCH v2 03/12] KVM: arm64: Move checking protected vcpu features to a separate function From: Fuad Tabba To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org, will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org, qperret@google.com, tabba@google.com X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20241122_030633_900707_55D23C90 X-CRM114-Status: GOOD ( 14.52 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: 
linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org At the moment, checks for supported vcpu features for protected VMs are build-time bugs. In the following patch, they will become runtime checks based on the vcpu's features registers. Therefore, consolidate them into one function that would return an error if it encounters an unsupported feature. Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/nvhe/pkvm.c | 45 ++++++++++++++++++++++++---------- 1 file changed, 32 insertions(+), 13 deletions(-) diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 1744574e79b2..fb733b36c6c1 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -178,20 +178,11 @@ static void pvm_init_traps_mdcr(struct kvm_vcpu *vcpu) } /* - * Initialize trap register values in protected mode. + * Check that cpu features that are neither trapped nor supported are not + * enabled for protected VMs. */ -static void pkvm_vcpu_init_traps(struct pkvm_hyp_vcpu *hyp_vcpu) +static int pkvm_check_pvm_cpu_features(struct kvm_vcpu *vcpu) { - struct kvm_vcpu *vcpu = &hyp_vcpu->vcpu; - - vcpu->arch.cptr_el2 = kvm_get_reset_cptr_el2(vcpu); - vcpu->arch.mdcr_el2 = 0; - - pkvm_vcpu_reset_hcr(vcpu); - - if ((!pkvm_hyp_vcpu_is_protected(hyp_vcpu))) - return; - /* * PAuth is allowed if supported by the system and the vcpu. * Properly checking for PAuth requires checking various fields in @@ -218,9 +209,34 @@ static void pkvm_vcpu_init_traps(struct pkvm_hyp_vcpu *hyp_vcpu) BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AdvSIMD), PVM_ID_AA64PFR0_ALLOW)); + return 0; +} + +/* + * Initialize trap register values in protected mode. 
+ */
+static int pkvm_vcpu_init_traps(struct pkvm_hyp_vcpu *hyp_vcpu)
+{
+	struct kvm_vcpu *vcpu = &hyp_vcpu->vcpu;
+	int ret;
+
+	vcpu->arch.cptr_el2 = kvm_get_reset_cptr_el2(vcpu);
+	vcpu->arch.mdcr_el2 = 0;
+
+	pkvm_vcpu_reset_hcr(vcpu);
+
+	if ((!pkvm_hyp_vcpu_is_protected(hyp_vcpu)))
+		return 0;
+
+	ret = pkvm_check_pvm_cpu_features(vcpu);
+	if (ret)
+		return ret;
+
 	pvm_init_traps_hcr(vcpu);
 	pvm_init_traps_cptr(vcpu);
 	pvm_init_traps_mdcr(vcpu);
+
+	return 0;
 }
 
 /*
@@ -417,9 +433,12 @@ static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu,
 	hyp_vcpu->vcpu.arch.cflags = READ_ONCE(host_vcpu->arch.cflags);
 	hyp_vcpu->vcpu.arch.mp_state.mp_state = KVM_MP_STATE_STOPPED;
 
+	ret = pkvm_vcpu_init_traps(hyp_vcpu);
+	if (ret)
+		goto done;
+
 	pkvm_vcpu_init_sve(hyp_vcpu, host_vcpu);
 	pkvm_vcpu_init_ptrauth(hyp_vcpu);
-	pkvm_vcpu_init_traps(hyp_vcpu);
 
 done:
 	if (ret)
 		unpin_host_vcpu(host_vcpu);

From patchwork Fri Nov 22 11:06:14 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13883085
Date: Fri, 22 Nov 2024 11:06:14 +0000
In-Reply-To: <20241122110622.3010118-1-tabba@google.com>
References: <20241122110622.3010118-1-tabba@google.com>
Message-ID: <20241122110622.3010118-5-tabba@google.com>
Subject: [PATCH v2 04/12] KVM: arm64: Use KVM extension checks for allowed
 protected VM capabilities
From: Fuad Tabba
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org,
 will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org,
 qperret@google.com, tabba@google.com
Use KVM extension checks as the source for determining which
capabilities are allowed for protected VMs. KVM extension checks are
the natural place for this, since they are also the interface exposed
to users.

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_pkvm.h | 25 +++++++++++++++++++++++++
 arch/arm64/kvm/arm.c              | 29 ++---------------------------
 arch/arm64/kvm/hyp/nvhe/pkvm.c    | 26 ++++++--------------------
 3 files changed, 33 insertions(+), 47 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index cd56acd9a842..400f7cef1e81 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -20,6 +20,31 @@ int pkvm_init_host_vm(struct kvm *kvm);
 int pkvm_create_hyp_vm(struct kvm *kvm);
 void pkvm_destroy_hyp_vm(struct kvm *kvm);
 
+/*
+ * This functions as an allow-list of protected VM capabilities.
+ * Features not explicitly allowed by this function are denied.
+ */
+static inline bool kvm_pvm_ext_allowed(long ext)
+{
+	switch (ext) {
+	case KVM_CAP_IRQCHIP:
+	case KVM_CAP_ARM_PSCI:
+	case KVM_CAP_ARM_PSCI_0_2:
+	case KVM_CAP_NR_VCPUS:
+	case KVM_CAP_MAX_VCPUS:
+	case KVM_CAP_MAX_VCPU_ID:
+	case KVM_CAP_MSI_DEVID:
+	case KVM_CAP_ARM_VM_IPA_SIZE:
+	case KVM_CAP_ARM_PMU_V3:
+	case KVM_CAP_ARM_SVE:
+	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
+	case KVM_CAP_ARM_PTRAUTH_GENERIC:
+		return true;
+	default:
+		return false;
+	}
+}
+
 extern struct memblock_region kvm_nvhe_sym(hyp_memory)[];
 extern unsigned int kvm_nvhe_sym(hyp_memblock_nr);
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a102c3aebdbc..b295218cdc24 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -80,31 +80,6 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
 	return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
 }
 
-/*
- * This functions as an allow-list of protected VM capabilities.
- * Features not explicitly allowed by this function are denied.
- */
-static bool pkvm_ext_allowed(struct kvm *kvm, long ext)
-{
-	switch (ext) {
-	case KVM_CAP_IRQCHIP:
-	case KVM_CAP_ARM_PSCI:
-	case KVM_CAP_ARM_PSCI_0_2:
-	case KVM_CAP_NR_VCPUS:
-	case KVM_CAP_MAX_VCPUS:
-	case KVM_CAP_MAX_VCPU_ID:
-	case KVM_CAP_MSI_DEVID:
-	case KVM_CAP_ARM_VM_IPA_SIZE:
-	case KVM_CAP_ARM_PMU_V3:
-	case KVM_CAP_ARM_SVE:
-	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
-	case KVM_CAP_ARM_PTRAUTH_GENERIC:
-		return true;
-	default:
-		return false;
-	}
-}
-
 int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 			    struct kvm_enable_cap *cap)
 {
@@ -113,7 +88,7 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 	if (cap->flags)
 		return -EINVAL;
 
-	if (kvm_vm_is_protected(kvm) && !pkvm_ext_allowed(kvm, cap->cap))
+	if (kvm_vm_is_protected(kvm) && !kvm_pvm_ext_allowed(cap->cap))
 		return -EINVAL;
 
 	switch (cap->cap) {
@@ -311,7 +286,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 {
 	int r;
 
-	if (kvm && kvm_vm_is_protected(kvm) && !pkvm_ext_allowed(kvm, ext))
+	if (kvm && kvm_vm_is_protected(kvm) && !kvm_pvm_ext_allowed(ext))
 		return 0;
 
 	switch (ext) {
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index fb733b36c6c1..59ff6aac514c 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -329,34 +329,20 @@ static void pkvm_init_features_from_host(struct pkvm_hyp_vm *hyp_vm, const struc
 
 	bitmap_zero(allowed_features, KVM_VCPU_MAX_FEATURES);
 
-	/*
-	 * For protected VMs, always allow:
-	 *	- CPU starting in poweroff state
-	 *	- PSCI v0.2
-	 */
-	set_bit(KVM_ARM_VCPU_POWER_OFF, allowed_features);
 	set_bit(KVM_ARM_VCPU_PSCI_0_2, allowed_features);
 
-	/*
-	 * Check if remaining features are allowed:
-	 *	- Performance Monitoring
-	 *	- Scalable Vectors
-	 *	- Pointer Authentication
-	 */
-	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), PVM_ID_AA64DFR0_ALLOW))
+	if (kvm_pvm_ext_allowed(KVM_CAP_ARM_PMU_V3))
 		set_bit(KVM_ARM_VCPU_PMU_V3, allowed_features);
 
-	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE), PVM_ID_AA64PFR0_ALLOW))
-		set_bit(KVM_ARM_VCPU_SVE, allowed_features);
-
-	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API), PVM_ID_AA64ISAR1_ALLOW) &&
-	    FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA), PVM_ID_AA64ISAR1_ALLOW))
+	if (kvm_pvm_ext_allowed(KVM_CAP_ARM_PTRAUTH_ADDRESS))
 		set_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, allowed_features);
 
-	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI), PVM_ID_AA64ISAR1_ALLOW) &&
-	    FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPA), PVM_ID_AA64ISAR1_ALLOW))
+	if (kvm_pvm_ext_allowed(KVM_CAP_ARM_PTRAUTH_GENERIC))
 		set_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, allowed_features);
 
+	if (kvm_pvm_ext_allowed(KVM_CAP_ARM_SVE))
+		set_bit(KVM_ARM_VCPU_SVE, allowed_features);
+
 	bitmap_and(kvm->arch.vcpu_features, host_kvm->arch.vcpu_features,
 		   allowed_features, KVM_VCPU_MAX_FEATURES);
 }

From patchwork Fri Nov 22 11:06:15 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13883086
Date: Fri, 22 Nov 2024 11:06:15 +0000
In-Reply-To: <20241122110622.3010118-1-tabba@google.com>
References: <20241122110622.3010118-1-tabba@google.com>
Message-ID: <20241122110622.3010118-6-tabba@google.com>
Subject: [PATCH v2 05/12] KVM: arm64: Initialize feature id registers for
 protected VMs
From: Fuad Tabba
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org,
 will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org,
 qperret@google.com, tabba@google.com

The
hypervisor maintains the state of protected VMs. Initialize the values
for feature ID registers for protected VMs, to be used when setting
traps and when advertising features to protected VMs.

Signed-off-by: Fuad Tabba
---
 .../arm64/kvm/hyp/include/nvhe/fixed_config.h |  1 +
 arch/arm64/kvm/hyp/nvhe/pkvm.c                |  4 ++
 arch/arm64/kvm/hyp/nvhe/sys_regs.c            | 54 +++++++++++++++++--
 3 files changed, 56 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h b/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
index d1e59b88ff66..69e26d1a0ebe 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
@@ -201,6 +201,7 @@ u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
 bool kvm_handle_pvm_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code);
 bool kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu, u64 *exit_code);
+void kvm_init_pvm_id_regs(struct kvm_vcpu *vcpu);
 int kvm_check_pvm_sysreg_table(void);
 
 #endif /* __ARM64_KVM_FIXED_CONFIG_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 59ff6aac514c..4ef03294b2b4 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -381,6 +381,7 @@ static void init_pkvm_hyp_vm(struct kvm *host_kvm, struct pkvm_hyp_vm *hyp_vm,
 	hyp_vm->kvm.created_vcpus = nr_vcpus;
 	hyp_vm->kvm.arch.mmu.vtcr = host_mmu.arch.mmu.vtcr;
 	hyp_vm->kvm.arch.pkvm.enabled = READ_ONCE(host_kvm->arch.pkvm.enabled);
+	hyp_vm->kvm.arch.flags = 0;
 	pkvm_init_features_from_host(hyp_vm, host_kvm);
 }
 
@@ -419,6 +420,9 @@ static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu,
 	hyp_vcpu->vcpu.arch.cflags = READ_ONCE(host_vcpu->arch.cflags);
 	hyp_vcpu->vcpu.arch.mp_state.mp_state = KVM_MP_STATE_STOPPED;
 
+	if (pkvm_hyp_vcpu_is_protected(hyp_vcpu))
+		kvm_init_pvm_id_regs(&hyp_vcpu->vcpu);
+
 	ret = pkvm_vcpu_init_traps(hyp_vcpu);
 	if (ret)
 		goto done;
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index 59fb2f056177..7008e9641f41 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -204,8 +204,7 @@ static u64 get_pvm_id_aa64mmfr2(const struct kvm_vcpu *vcpu)
 	return id_aa64mmfr2_el1_sys_val & PVM_ID_AA64MMFR2_ALLOW;
 }
 
-/* Read a sanitized cpufeature ID register by its encoding */
-u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
+static u64 pvm_calc_id_reg(const struct kvm_vcpu *vcpu, u32 id)
 {
 	switch (id) {
 	case SYS_ID_AA64PFR0_EL1:
@@ -240,10 +239,25 @@ u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
 	}
 }
 
+/* Read a sanitized cpufeature ID register by its encoding */
+u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
+{
+	return pvm_calc_id_reg(vcpu, id);
+}
+
 static u64 read_id_reg(const struct kvm_vcpu *vcpu,
 		       struct sys_reg_desc const *r)
 {
-	return pvm_read_id_reg(vcpu, reg_to_encoding(r));
+	struct kvm *kvm = vcpu->kvm;
+	u32 reg = reg_to_encoding(r);
+
+	if (WARN_ON_ONCE(!test_bit(KVM_ARCH_FLAG_ID_REGS_INITIALIZED, &kvm->arch.flags)))
+		return 0;
+
+	if (reg >= sys_reg(3, 0, 0, 4, 0) && reg <= sys_reg(3, 0, 0, 7, 7))
+		return kvm->arch.id_regs[IDREG_IDX(reg)];
+
+	return 0;
 }
 
 /* Handler to RAZ/WI sysregs */
@@ -448,6 +462,40 @@ static const struct sys_reg_desc pvm_sys_reg_descs[] = {
 	/* Performance Monitoring Registers are restricted. */
 };
 
+/*
+ * Initializes feature registers for protected vms.
+ */
+void kvm_init_pvm_id_regs(struct kvm_vcpu *vcpu)
+{
+	const u32 pvm_feat_id_regs[] = {
+		SYS_ID_AA64PFR0_EL1,
+		SYS_ID_AA64PFR1_EL1,
+		SYS_ID_AA64ISAR0_EL1,
+		SYS_ID_AA64ISAR1_EL1,
+		SYS_ID_AA64ISAR2_EL1,
+		SYS_ID_AA64ZFR0_EL1,
+		SYS_ID_AA64MMFR0_EL1,
+		SYS_ID_AA64MMFR1_EL1,
+		SYS_ID_AA64MMFR2_EL1,
+		SYS_ID_AA64MMFR4_EL1,
+		SYS_ID_AA64DFR0_EL1,
+	};
+	struct kvm *kvm = vcpu->kvm;
+	unsigned long i;
+
+	if (WARN_ON_ONCE(test_bit(KVM_ARCH_FLAG_ID_REGS_INITIALIZED, &kvm->arch.flags)))
+		return;
+
+	for (i = 0; i < ARRAY_SIZE(pvm_feat_id_regs); i++) {
+		struct kvm_arch *ka = &kvm->arch;
+		u32 reg = pvm_feat_id_regs[i];
+
+		ka->id_regs[IDREG_IDX(reg)] = pvm_calc_id_reg(vcpu, reg);
+	}
+
+	set_bit(KVM_ARCH_FLAG_ID_REGS_INITIALIZED, &kvm->arch.flags);
+}
+
 /*
  * Checks that the sysreg table is unique and in-order.
  *

From patchwork Fri Nov 22 11:06:16 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13883087
Date: Fri, 22 Nov 2024 11:06:16 +0000
In-Reply-To: <20241122110622.3010118-1-tabba@google.com>
References: <20241122110622.3010118-1-tabba@google.com>
Message-ID: <20241122110622.3010118-7-tabba@google.com>
Subject: [PATCH v2 06/12] KVM: arm64: Set protected VM traps based on its view
 of feature registers
From: Fuad Tabba
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org,
 will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org,
 qperret@google.com, tabba@google.com
linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Now that the VM's feature id registers are initialized with the values of the supported features, use those values to determine which traps to set using kvm_has_feature(). Signed-off-by: Fuad Tabba --- arch/arm64/kvm/hyp/nvhe/pkvm.c | 85 +++++++++++------------------- arch/arm64/kvm/hyp/nvhe/sys_regs.c | 7 --- 2 files changed, 30 insertions(+), 62 deletions(-) diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 4ef03294b2b4..3b4ea97148b9 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -52,9 +52,7 @@ static void pkvm_vcpu_reset_hcr(struct kvm_vcpu *vcpu) static void pvm_init_traps_hcr(struct kvm_vcpu *vcpu) { - const u64 id_aa64pfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR0_EL1); - const u64 id_aa64pfr1 = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR1_EL1); - const u64 id_aa64mmfr1 = pvm_read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1); + struct kvm *kvm = vcpu->kvm; u64 val = vcpu->arch.hcr_el2; /* No support for AArch32. */ @@ -70,25 +68,20 @@ static void pvm_init_traps_hcr(struct kvm_vcpu *vcpu) */ val |= HCR_TACR | HCR_TIDCP | HCR_TID3 | HCR_TID1; - /* Trap RAS unless all current versions are supported */ - if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_RAS), id_aa64pfr0) < - ID_AA64PFR0_EL1_RAS_V1P1) { + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP)) { val |= HCR_TERR | HCR_TEA; val &= ~(HCR_FIEN); } - /* Trap AMU */ - if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU), id_aa64pfr0)) + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, IMP)) val &= ~(HCR_AMVOFFEN); - /* Memory Tagging: Trap and Treat as Untagged if not supported. 
 	 */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE), id_aa64pfr1)) {
+	if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, MTE, IMP)) {
 		val |= HCR_TID5;
 		val &= ~(HCR_DCT | HCR_ATA);
 	}
 
-	/* Trap LOR */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_LO), id_aa64mmfr1))
+	if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, LO, IMP))
 		val |= HCR_TLOR;
 
 	vcpu->arch.hcr_el2 = val;
@@ -96,9 +89,7 @@ static void pvm_init_traps_hcr(struct kvm_vcpu *vcpu)
 
 static void pvm_init_traps_cptr(struct kvm_vcpu *vcpu)
 {
-	const u64 id_aa64pfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR0_EL1);
-	const u64 id_aa64pfr1 = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR1_EL1);
-	const u64 id_aa64dfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64DFR0_EL1);
+	struct kvm *kvm = vcpu->kvm;
 	u64 val = vcpu->arch.cptr_el2;
 
 	if (!has_hvhe()) {
@@ -106,12 +97,11 @@ static void pvm_init_traps_cptr(struct kvm_vcpu *vcpu)
 		val &= ~(CPTR_NVHE_EL2_RES0);
 	}
 
-	/* Trap AMU */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU), id_aa64pfr0))
+	if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, IMP))
 		val |= CPTR_EL2_TAM;
 
-	/* Trap SVE */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE), id_aa64pfr0)) {
+	/* SVE can be disabled by userspace even if supported. */
+	if (!vcpu_has_sve(vcpu)) {
 		if (has_hvhe())
 			val &= ~(CPACR_ELx_ZEN);
 		else
@@ -119,14 +109,13 @@ static void pvm_init_traps_cptr(struct kvm_vcpu *vcpu)
 	}
 
 	/* No SME support in KVM.
 	 */
-	BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME), id_aa64pfr1));
+	BUG_ON(kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP));
 
 	if (has_hvhe())
 		val &= ~(CPACR_ELx_SMEN);
 	else
 		val |= CPTR_EL2_TSM;
 
-	/* Trap Trace */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_TraceVer), id_aa64dfr0)) {
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceVer, IMP)) {
 		if (has_hvhe())
 			val |= CPACR_EL1_TTA;
 		else
@@ -138,40 +127,33 @@ static void pvm_init_traps_cptr(struct kvm_vcpu *vcpu)
 
 static void pvm_init_traps_mdcr(struct kvm_vcpu *vcpu)
 {
-	const u64 id_aa64dfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64DFR0_EL1);
-	const u64 id_aa64mmfr0 = pvm_read_id_reg(vcpu, SYS_ID_AA64MMFR0_EL1);
+	struct kvm *kvm = vcpu->kvm;
 	u64 val = vcpu->arch.mdcr_el2;
 
-	/* Trap/constrain PMU */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), id_aa64dfr0)) {
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMUVer, IMP)) {
 		val |= MDCR_EL2_TPM | MDCR_EL2_TPMCR;
 		val &= ~(MDCR_EL2_HPME | MDCR_EL2_MTPME | MDCR_EL2_HPMN_MASK);
 	}
 
-	/* Trap Debug */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), id_aa64dfr0))
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, DebugVer, IMP))
 		val |= MDCR_EL2_TDRA | MDCR_EL2_TDA;
 
-	/* Trap OS Double Lock */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DoubleLock), id_aa64dfr0))
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, DoubleLock, IMP))
 		val |= MDCR_EL2_TDOSA;
 
-	/* Trap SPE */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer), id_aa64dfr0)) {
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMSVer, IMP)) {
 		val |= MDCR_EL2_TPMS;
 		val &= ~(MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT);
 	}
 
-	/* Trap Trace Filter */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_TraceFilt), id_aa64dfr0))
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceFilt, IMP))
 		val |= MDCR_EL2_TTRF;
 
-	/* Trap External Trace */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_ExtTrcBuff), id_aa64dfr0))
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, ExtTrcBuff, IMP))
 		val |= MDCR_EL2_E2TB_MASK << MDCR_EL2_E2TB_SHIFT;
 
 	/* Trap Debug Communications Channel registers */
-	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_FGT), id_aa64mmfr0))
+	if (!kvm_has_feat(kvm, ID_AA64MMFR0_EL1, FGT, IMP))
 		val |= MDCR_EL2_TDCC;
 
 	vcpu->arch.mdcr_el2 = val;
@@ -183,31 +165,24 @@ static void pvm_init_traps_mdcr(struct kvm_vcpu *vcpu)
  */
 static int pkvm_check_pvm_cpu_features(struct kvm_vcpu *vcpu)
 {
-	/*
-	 * PAuth is allowed if supported by the system and the vcpu.
-	 * Properly checking for PAuth requires checking various fields in
-	 * ID_AA64ISAR1_EL1 and ID_AA64ISAR2_EL1. The way that fixed config
-	 * is controlled now in pKVM does not easily allow that. This will
-	 * change later to follow the changes upstream wrt fixed configuration
-	 * and nested virt.
-	 */
-	BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI),
-				PVM_ID_AA64ISAR1_ALLOW));
+	struct kvm *kvm = vcpu->kvm;
 
 	/* Protected KVM does not support AArch32 guests. */
-	BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0),
-		PVM_ID_AA64PFR0_ALLOW) != ID_AA64PFR0_EL1_EL0_IMP);
-	BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL1),
-		PVM_ID_AA64PFR0_ALLOW) != ID_AA64PFR0_EL1_EL1_IMP);
+	if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL0, AARCH32) ||
+	    kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL1, AARCH32))
+		return -EINVAL;
 
 	/*
	 * Linux guests assume support for floating-point and Advanced SIMD. Do
	 * not change the trapping behavior for these from the KVM default.
	 */
-	BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_FP),
-				PVM_ID_AA64PFR0_ALLOW));
-	BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AdvSIMD),
-				PVM_ID_AA64PFR0_ALLOW));
+	if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, FP, IMP) ||
+	    !kvm_has_feat(kvm, ID_AA64PFR0_EL1, AdvSIMD, IMP))
+		return -EINVAL;
+
+	/* No SME support in KVM right now. Check to catch if it changes.
+	 */
+	if (kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP))
+		return -EINVAL;
 
 	return 0;
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index 7008e9641f41..3aa76b018f70 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -285,13 +285,6 @@ static bool pvm_access_id_aarch32(struct kvm_vcpu *vcpu,
 		return false;
 	}
 
-	/*
-	 * No support for AArch32 guests, therefore, pKVM has no sanitized copy
-	 * of AArch32 feature id registers.
-	 */
-	BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL1),
-		     PVM_ID_AA64PFR0_ALLOW) > ID_AA64PFR0_EL1_EL1_IMP);
-
 	return pvm_access_raz_wi(vcpu, p, r);
 }

From patchwork Fri Nov 22 11:06:17 2024
Date: Fri, 22 Nov 2024 11:06:17 +0000
In-Reply-To: <20241122110622.3010118-1-tabba@google.com>
References: <20241122110622.3010118-1-tabba@google.com>
Message-ID: <20241122110622.3010118-8-tabba@google.com>
Subject: [PATCH v2 07/12] KVM: arm64: Rework specifying restricted features for protected VMs
From: Fuad Tabba
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org, will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org, qperret@google.com, tabba@google.com

The existing code didn't properly distinguish between signed and unsigned
features, and was difficult to read and to maintain.
Rework it using the same method used in other parts of KVM when handling
vcpu features.

Signed-off-by: Fuad Tabba
---
 .../arm64/kvm/hyp/include/nvhe/fixed_config.h |   1 -
 arch/arm64/kvm/hyp/nvhe/sys_regs.c            | 356 +++++++++---------
 2 files changed, 186 insertions(+), 171 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h b/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
index 69e26d1a0ebe..37a6d2434e47 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
@@ -198,7 +198,6 @@
 	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3), ID_AA64ISAR2_EL1_APA3_PAuth) \
 	)
 
-u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
 bool kvm_handle_pvm_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code);
 bool kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu, u64 *exit_code);
 void kvm_init_pvm_id_regs(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index 3aa76b018f70..cea3b099dc56 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -28,221 +28,237 @@ u64 id_aa64mmfr1_el1_sys_val;
 u64 id_aa64mmfr2_el1_sys_val;
 u64 id_aa64smfr0_el1_sys_val;
 
-/*
- * Inject an unknown/undefined exception to an AArch64 guest while most of its
- * sysregs are live.
- */
-static void inject_undef64(struct kvm_vcpu *vcpu)
-{
-	u64 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
-
-	*vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
-	*vcpu_cpsr(vcpu) = read_sysreg_el2(SYS_SPSR);
-
-	kvm_pend_exception(vcpu, EXCEPT_AA64_EL1_SYNC);
-
-	__kvm_adjust_pc(vcpu);
-
-	write_sysreg_el1(esr, SYS_ESR);
-	write_sysreg_el1(read_sysreg_el2(SYS_ELR), SYS_ELR);
-	write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR);
-	write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR);
-}
+struct pvm_feature {
+	int shift;
+	int width;
+	u64 mask;
+	u64 max_supported;
+	bool is_signed;
+	bool (*vcpu_supported)(const struct kvm_vcpu *vcpu);
+};
 
-/*
- * Returns the restricted features values of the feature register based on the
- * limitations in restrict_fields.
- * A feature id field value of 0b0000 does not impose any restrictions.
- * Note: Use only for unsigned feature field values.
- */
-static u64 get_restricted_features_unsigned(u64 sys_reg_val,
-					    u64 restrict_fields)
-{
-	u64 value = 0UL;
-	u64 mask = GENMASK_ULL(ARM64_FEATURE_FIELD_BITS - 1, 0);
-
-	/*
-	 * According to the Arm Architecture Reference Manual, feature fields
-	 * use increasing values to indicate increases in functionality.
-	 * Iterate over the restricted feature fields and calculate the minimum
-	 * unsigned value between the one supported by the system, and what the
-	 * value is being restricted to.
-	 */
-	while (sys_reg_val && restrict_fields) {
-		value |= min(sys_reg_val & mask, restrict_fields & mask);
-		sys_reg_val &= ~mask;
-		restrict_fields &= ~mask;
-		mask <<= ARM64_FEATURE_FIELD_BITS;
+#define __MAX_FEAT_FUNC(id, fld, max, func, sign)			\
+	{								\
+		.shift = id##_##fld##_SHIFT,				\
+		.width = id##_##fld##_WIDTH,				\
+		.mask = id##_##fld##_MASK,				\
+		.max_supported = id##_##fld##_##max,			\
+		.is_signed = sign,					\
+		.vcpu_supported = func,					\
 	}
 
-	return value;
-}
-
-/*
- * Functions that return the value of feature id registers for protected VMs
- * based on allowed features, system features, and KVM support.
- */
-
-static u64 get_pvm_id_aa64pfr0(const struct kvm_vcpu *vcpu)
-{
-	u64 set_mask = 0;
-	u64 allow_mask = PVM_ID_AA64PFR0_ALLOW;
-
-	set_mask |= get_restricted_features_unsigned(id_aa64pfr0_el1_sys_val,
-		PVM_ID_AA64PFR0_ALLOW);
-
-	return (id_aa64pfr0_el1_sys_val & allow_mask) | set_mask;
-}
-
-static u64 get_pvm_id_aa64pfr1(const struct kvm_vcpu *vcpu)
-{
-	const struct kvm *kvm = (const struct kvm *)kern_hyp_va(vcpu->kvm);
-	u64 allow_mask = PVM_ID_AA64PFR1_ALLOW;
-
-	if (!kvm_has_mte(kvm))
-		allow_mask &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE);
+#define MAX_FEAT_FUNC(id, fld, max, func)				\
+	__MAX_FEAT_FUNC(id, fld, max, func, id##_##fld##_SIGNED)
 
-	return id_aa64pfr1_el1_sys_val & allow_mask;
-}
+#define MAX_FEAT(id, fld, max)						\
+	MAX_FEAT_FUNC(id, fld, max, NULL)
 
-static u64 get_pvm_id_aa64zfr0(const struct kvm_vcpu *vcpu)
-{
-	/*
-	 * No support for Scalable Vectors, therefore, hyp has no sanitized
-	 * copy of the feature id register.
-	 */
-	BUILD_BUG_ON(PVM_ID_AA64ZFR0_ALLOW != 0ULL);
-	return 0;
-}
+#define MAX_FEAT_ENUM(id, fld, max)					\
+	__MAX_FEAT_FUNC(id, fld, max, NULL, false)
 
-static u64 get_pvm_id_aa64dfr0(const struct kvm_vcpu *vcpu)
-{
-	/*
-	 * No support for debug, including breakpoints, and watchpoints,
-	 * therefore, pKVM has no sanitized copy of the feature id register.
-	 */
-	BUILD_BUG_ON(PVM_ID_AA64DFR0_ALLOW != 0ULL);
-	return 0;
-}
+#define FEAT_END { .shift = -1, }
 
-static u64 get_pvm_id_aa64dfr1(const struct kvm_vcpu *vcpu)
+static bool _vcpu_has_ptrauth(const struct kvm_vcpu *vcpu)
 {
-	/*
-	 * No support for debug, therefore, hyp has no sanitized copy of the
-	 * feature id register.
-	 */
-	BUILD_BUG_ON(PVM_ID_AA64DFR1_ALLOW != 0ULL);
-	return 0;
+	return vcpu_has_ptrauth(vcpu);
 }
 
-static u64 get_pvm_id_aa64afr0(const struct kvm_vcpu *vcpu)
+static bool _vcpu_has_sve(const struct kvm_vcpu *vcpu)
 {
-	/*
-	 * No support for implementation defined features, therefore, hyp has no
-	 * sanitized copy of the feature id register.
-	 */
-	BUILD_BUG_ON(PVM_ID_AA64AFR0_ALLOW != 0ULL);
-	return 0;
+	return vcpu_has_sve(vcpu);
 }
 
-static u64 get_pvm_id_aa64afr1(const struct kvm_vcpu *vcpu)
-{
-	/*
-	 * No support for implementation defined features, therefore, hyp has no
-	 * sanitized copy of the feature id register.
-	 */
-	BUILD_BUG_ON(PVM_ID_AA64AFR1_ALLOW != 0ULL);
-	return 0;
-}
-
-static u64 get_pvm_id_aa64isar0(const struct kvm_vcpu *vcpu)
-{
-	return id_aa64isar0_el1_sys_val & PVM_ID_AA64ISAR0_ALLOW;
-}
+/*
+ * Definitions for features to be allowed or restricted for protected guests.
+ *
+ * Each field in the masks represents the highest supported value for the
+ * feature. If a feature field is not present, it is not supported. Moreover,
+ * these are used to generate the guest's view of the feature registers.
+ *
+ * The approach for protected VMs is to at least support features that are:
+ * - Needed by common Linux distributions (e.g., floating point)
+ * - Trivial to support, e.g., supporting the feature does not introduce or
+ *   require tracking of additional state in KVM
+ * - Cannot be trapped or prevent the guest from using anyway
+ */
 
-static u64 get_pvm_id_aa64isar1(const struct kvm_vcpu *vcpu)
-{
-	u64 allow_mask = PVM_ID_AA64ISAR1_ALLOW;
+static const struct pvm_feature pvmid_aa64pfr0[] = {
+	MAX_FEAT(ID_AA64PFR0_EL1, EL0, IMP),
+	MAX_FEAT(ID_AA64PFR0_EL1, EL1, IMP),
+	MAX_FEAT(ID_AA64PFR0_EL1, EL2, IMP),
+	MAX_FEAT(ID_AA64PFR0_EL1, EL3, IMP),
+	MAX_FEAT(ID_AA64PFR0_EL1, FP, FP16),
+	MAX_FEAT(ID_AA64PFR0_EL1, AdvSIMD, FP16),
+	MAX_FEAT(ID_AA64PFR0_EL1, GIC, IMP),
+	MAX_FEAT_FUNC(ID_AA64PFR0_EL1, SVE, IMP, _vcpu_has_sve),
+	MAX_FEAT(ID_AA64PFR0_EL1, RAS, IMP),
+	MAX_FEAT(ID_AA64PFR0_EL1, DIT, IMP),
+	MAX_FEAT(ID_AA64PFR0_EL1, CSV2, IMP),
+	MAX_FEAT(ID_AA64PFR0_EL1, CSV3, IMP),
+	FEAT_END
+};
 
-	if (!vcpu_has_ptrauth(vcpu))
-		allow_mask &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA) |
-				ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API) |
-				ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPA) |
-				ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI));
+static const struct pvm_feature pvmid_aa64pfr1[] = {
+	MAX_FEAT(ID_AA64PFR1_EL1, BT, IMP),
+	MAX_FEAT(ID_AA64PFR1_EL1, SSBS, SSBS2),
+	MAX_FEAT_ENUM(ID_AA64PFR1_EL1, MTE_frac, NI),
+	FEAT_END
+};
 
-	return id_aa64isar1_el1_sys_val & allow_mask;
-}
+static const struct pvm_feature pvmid_aa64mmfr0[] = {
+	MAX_FEAT_ENUM(ID_AA64MMFR0_EL1, PARANGE, 40),
+	MAX_FEAT_ENUM(ID_AA64MMFR0_EL1, ASIDBITS, 16),
+	MAX_FEAT(ID_AA64MMFR0_EL1, BIGEND, IMP),
+	MAX_FEAT(ID_AA64MMFR0_EL1, SNSMEM, IMP),
+	MAX_FEAT(ID_AA64MMFR0_EL1, BIGENDEL0, IMP),
+	MAX_FEAT(ID_AA64MMFR0_EL1, EXS, IMP),
+	FEAT_END
+};
 
-static u64 get_pvm_id_aa64isar2(const struct kvm_vcpu *vcpu)
-{
-	u64 allow_mask = PVM_ID_AA64ISAR2_ALLOW;
+static const struct pvm_feature pvmid_aa64mmfr1[] = {
+	MAX_FEAT(ID_AA64MMFR1_EL1, HAFDBS, DBM),
+	MAX_FEAT_ENUM(ID_AA64MMFR1_EL1, VMIDBits, 16),
+	MAX_FEAT(ID_AA64MMFR1_EL1, HPDS, HPDS2),
+	MAX_FEAT(ID_AA64MMFR1_EL1, PAN, PAN3),
+	MAX_FEAT(ID_AA64MMFR1_EL1, SpecSEI, IMP),
+	MAX_FEAT(ID_AA64MMFR1_EL1, ETS, IMP),
+	MAX_FEAT(ID_AA64MMFR1_EL1, CMOW, IMP),
+	FEAT_END
+};
 
-	if (!vcpu_has_ptrauth(vcpu))
-		allow_mask &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) |
-				ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3));
+static const struct pvm_feature pvmid_aa64mmfr2[] = {
+	MAX_FEAT(ID_AA64MMFR2_EL1, CnP, IMP),
+	MAX_FEAT(ID_AA64MMFR2_EL1, UAO, IMP),
+	MAX_FEAT(ID_AA64MMFR2_EL1, IESB, IMP),
+	MAX_FEAT(ID_AA64MMFR2_EL1, AT, IMP),
+	MAX_FEAT_ENUM(ID_AA64MMFR2_EL1, IDS, 0x18),
+	MAX_FEAT(ID_AA64MMFR2_EL1, TTL, IMP),
+	MAX_FEAT(ID_AA64MMFR2_EL1, BBM, 2),
+	MAX_FEAT(ID_AA64MMFR2_EL1, E0PD, IMP),
+	FEAT_END
+};
 
-	return id_aa64isar2_el1_sys_val & allow_mask;
-}
+static const struct pvm_feature pvmid_aa64isar1[] = {
+	MAX_FEAT(ID_AA64ISAR1_EL1, DPB, DPB2),
+	MAX_FEAT_FUNC(ID_AA64ISAR1_EL1, APA, PAuth, _vcpu_has_ptrauth),
+	MAX_FEAT_FUNC(ID_AA64ISAR1_EL1, API, PAuth, _vcpu_has_ptrauth),
+	MAX_FEAT(ID_AA64ISAR1_EL1, JSCVT, IMP),
+	MAX_FEAT(ID_AA64ISAR1_EL1, FCMA, IMP),
+	MAX_FEAT(ID_AA64ISAR1_EL1, LRCPC, LRCPC3),
+	MAX_FEAT(ID_AA64ISAR1_EL1, GPA, IMP),
+	MAX_FEAT(ID_AA64ISAR1_EL1, GPI, IMP),
+	MAX_FEAT(ID_AA64ISAR1_EL1, FRINTTS, IMP),
+	MAX_FEAT(ID_AA64ISAR1_EL1, SB, IMP),
+	MAX_FEAT(ID_AA64ISAR1_EL1, SPECRES, COSP_RCTX),
+	MAX_FEAT(ID_AA64ISAR1_EL1, BF16, EBF16),
+	MAX_FEAT(ID_AA64ISAR1_EL1, DGH, IMP),
+	MAX_FEAT(ID_AA64ISAR1_EL1, I8MM, IMP),
+	FEAT_END
+};
 
-static u64 get_pvm_id_aa64mmfr0(const struct kvm_vcpu *vcpu)
-{
-	u64 set_mask;
+static const struct pvm_feature pvmid_aa64isar2[] = {
+	MAX_FEAT_FUNC(ID_AA64ISAR2_EL1, GPA3, IMP, _vcpu_has_ptrauth),
+	MAX_FEAT_FUNC(ID_AA64ISAR2_EL1, APA3, PAuth, _vcpu_has_ptrauth),
+	MAX_FEAT(ID_AA64ISAR2_EL1, ATS1A, IMP),
+	FEAT_END
+};
 
-	set_mask = get_restricted_features_unsigned(id_aa64mmfr0_el1_sys_val,
-		PVM_ID_AA64MMFR0_ALLOW);
+/*
+ * None of the features in ID_AA64DFR0_EL1 nor ID_AA64MMFR4_EL1 are supported.
+ * However, both have Not-Implemented values that are non-zero. Define them
+ * so they can be used when getting the value of these registers.
+ */
+#define ID_AA64DFR0_EL1_NONZERO_NI					\
+(									\
+	SYS_FIELD_PREP_ENUM(ID_AA64DFR0_EL1, DoubleLock, NI) |		\
+	SYS_FIELD_PREP_ENUM(ID_AA64DFR0_EL1, MTPMU, NI)			\
+)
 
-	return (id_aa64mmfr0_el1_sys_val & PVM_ID_AA64MMFR0_ALLOW) | set_mask;
-}
+#define ID_AA64MMFR4_EL1_NONZERO_NI					\
+	SYS_FIELD_PREP_ENUM(ID_AA64MMFR4_EL1, E2H0, NI)
 
-static u64 get_pvm_id_aa64mmfr1(const struct kvm_vcpu *vcpu)
+/*
+ * Returns the value of the feature registers based on the system register
+ * value, the vcpu support for the relevant features, and the additional
+ * restrictions for protected VMs.
+ */
+static u64 get_restricted_features(const struct kvm_vcpu *vcpu,
+				   u64 sys_reg_val,
+				   const struct pvm_feature restrictions[])
 {
-	return id_aa64mmfr1_el1_sys_val & PVM_ID_AA64MMFR1_ALLOW;
-}
+	u64 val = 0UL;
+	int i;
+
+	for (i = 0; restrictions[i].shift != -1; i++) {
+		bool (*vcpu_supported)(const struct kvm_vcpu *) = restrictions[i].vcpu_supported;
+		bool is_signed = restrictions[i].is_signed;
+		int shift = restrictions[i].shift;
+		int width = restrictions[i].width;
+		u64 min_signed = (1UL << width) - 1UL;
+		u64 sign_bit = 1UL << (width - 1);
+		u64 mask = restrictions[i].mask;
+		u64 sys_val = (sys_reg_val & mask) >> shift;
+		u64 pvm_max = restrictions[i].max_supported;
+
+		if (vcpu_supported && !vcpu_supported(vcpu))
+			val |= (is_signed ? min_signed : 0) << shift;
+		else if (is_signed && (sys_val >= sign_bit || pvm_max >= sign_bit))
+			val |= max(sys_val, pvm_max) << shift;
+		else
+			val |= min(sys_val, pvm_max) << shift;
+	}
 
-static u64 get_pvm_id_aa64mmfr2(const struct kvm_vcpu *vcpu)
-{
-	return id_aa64mmfr2_el1_sys_val & PVM_ID_AA64MMFR2_ALLOW;
+	return val;
 }
 
 static u64 pvm_calc_id_reg(const struct kvm_vcpu *vcpu, u32 id)
 {
 	switch (id) {
 	case SYS_ID_AA64PFR0_EL1:
-		return get_pvm_id_aa64pfr0(vcpu);
+		return get_restricted_features(vcpu, id_aa64pfr0_el1_sys_val, pvmid_aa64pfr0);
 	case SYS_ID_AA64PFR1_EL1:
-		return get_pvm_id_aa64pfr1(vcpu);
-	case SYS_ID_AA64ZFR0_EL1:
-		return get_pvm_id_aa64zfr0(vcpu);
-	case SYS_ID_AA64DFR0_EL1:
-		return get_pvm_id_aa64dfr0(vcpu);
-	case SYS_ID_AA64DFR1_EL1:
-		return get_pvm_id_aa64dfr1(vcpu);
-	case SYS_ID_AA64AFR0_EL1:
-		return get_pvm_id_aa64afr0(vcpu);
-	case SYS_ID_AA64AFR1_EL1:
-		return get_pvm_id_aa64afr1(vcpu);
+		return get_restricted_features(vcpu, id_aa64pfr1_el1_sys_val, pvmid_aa64pfr1);
 	case SYS_ID_AA64ISAR0_EL1:
-		return get_pvm_id_aa64isar0(vcpu);
+		return id_aa64isar0_el1_sys_val;
 	case SYS_ID_AA64ISAR1_EL1:
-		return get_pvm_id_aa64isar1(vcpu);
+		return get_restricted_features(vcpu, id_aa64isar1_el1_sys_val, pvmid_aa64isar1);
 	case SYS_ID_AA64ISAR2_EL1:
-		return get_pvm_id_aa64isar2(vcpu);
+		return get_restricted_features(vcpu, id_aa64isar2_el1_sys_val, pvmid_aa64isar2);
 	case SYS_ID_AA64MMFR0_EL1:
-		return get_pvm_id_aa64mmfr0(vcpu);
+		return get_restricted_features(vcpu, id_aa64mmfr0_el1_sys_val, pvmid_aa64mmfr0);
 	case SYS_ID_AA64MMFR1_EL1:
-		return get_pvm_id_aa64mmfr1(vcpu);
+		return get_restricted_features(vcpu, id_aa64mmfr1_el1_sys_val, pvmid_aa64mmfr1);
 	case SYS_ID_AA64MMFR2_EL1:
-		return get_pvm_id_aa64mmfr2(vcpu);
+		return get_restricted_features(vcpu, id_aa64mmfr2_el1_sys_val, pvmid_aa64mmfr2);
+	case SYS_ID_AA64DFR0_EL1:
+		return ID_AA64DFR0_EL1_NONZERO_NI;
+	case SYS_ID_AA64MMFR4_EL1:
+		return ID_AA64MMFR4_EL1_NONZERO_NI;
 	default:
 		/* Unhandled ID register, RAZ */
 		return 0;
 	}
 }
 
-/* Read a sanitized cpufeature ID register by its encoding */
-u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
+/*
+ * Inject an unknown/undefined exception to an AArch64 guest while most of its
+ * sysregs are live.
+ */
+static void inject_undef64(struct kvm_vcpu *vcpu)
 {
-	return pvm_calc_id_reg(vcpu, id);
+	u64 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
+
+	*vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
+	*vcpu_cpsr(vcpu) = read_sysreg_el2(SYS_SPSR);
+
+	kvm_pend_exception(vcpu, EXCEPT_AA64_EL1_SYNC);
+
+	__kvm_adjust_pc(vcpu);
+
+	write_sysreg_el1(esr, SYS_ESR);
+	write_sysreg_el1(read_sysreg_el2(SYS_ELR), SYS_ELR);
+	write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR);
+	write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR);
 }
 
 static u64 read_id_reg(const struct kvm_vcpu *vcpu,

From patchwork Fri Nov 22 11:06:18 2024
Date: Fri, 22 Nov 2024 11:06:18 +0000
In-Reply-To: <20241122110622.3010118-1-tabba@google.com>
References: <20241122110622.3010118-1-tabba@google.com>
Message-ID: <20241122110622.3010118-9-tabba@google.com>
Subject: [PATCH v2 08/12] KVM: arm64: Remove fixed_config.h header
From: Fuad Tabba
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org, will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org, qperret@google.com, tabba@google.com

The few remaining items needed in fixed_config.h are better suited for
pkvm.h. Move them there and delete it.

No functional change intended.
Signed-off-by: Fuad Tabba
---
 .../arm64/kvm/hyp/include/nvhe/fixed_config.h | 206 ------------------
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h        |   5 +
 arch/arm64/kvm/hyp/nvhe/pkvm.c                |   1 -
 arch/arm64/kvm/hyp/nvhe/setup.c               |   1 -
 arch/arm64/kvm/hyp/nvhe/switch.c              |   1 -
 arch/arm64/kvm/hyp/nvhe/sys_regs.c            |   2 +-
 6 files changed, 6 insertions(+), 210 deletions(-)
 delete mode 100644 arch/arm64/kvm/hyp/include/nvhe/fixed_config.h

diff --git a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h b/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
deleted file mode 100644
index 37a6d2434e47..000000000000
--- a/arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
+++ /dev/null
@@ -1,206 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (C) 2021 Google LLC
- * Author: Fuad Tabba
- */
-
-#ifndef __ARM64_KVM_FIXED_CONFIG_H__
-#define __ARM64_KVM_FIXED_CONFIG_H__
-
-#include
-
-/*
- * This file contains definitions for features to be allowed or restricted for
- * guest virtual machines, depending on the mode KVM is running in and on the
- * type of guest that is running.
- *
- * Each field in the masks represents the highest supported *unsigned* value for
- * the feature, if supported by the system.
- *
- * If a feature field is not present in either, than it is not supported.
- *
- * The approach taken for protected VMs is to allow features that are:
- * - Needed by common Linux distributions (e.g., floating point)
- * - Trivial to support, e.g., supporting the feature does not introduce or
- *   require tracking of additional state in KVM
- * - Cannot be trapped or prevent the guest from using anyway
- */
-
-/*
- * Allow for protected VMs:
- * - Floating-point and Advanced SIMD
- * - Data Independent Timing
- * - Spectre/Meltdown Mitigation
- *
- * Restrict to the following *unsigned* features for protected VMs:
- * - AArch64 guests only (no support for AArch32 guests):
- *	AArch32 adds complexity in trap handling, emulation, condition codes,
- *	etc...
- * - RAS (v1)
- *	Supported by KVM
- */
-#define PVM_ID_AA64PFR0_ALLOW (\
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_FP) | \
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AdvSIMD) | \
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_DIT) | \
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | \
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3) | \
-	SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL0, IMP) | \
-	SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL1, IMP) | \
-	SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL2, IMP) | \
-	SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL3, IMP) | \
-	SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, RAS, IMP) \
-	)
-
-/*
- * Allow for protected VMs:
- * - Branch Target Identification
- * - Speculative Store Bypassing
- */
-#define PVM_ID_AA64PFR1_ALLOW (\
-	ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_BT) | \
-	ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SSBS) \
-	)
-
-#define PVM_ID_AA64PFR2_ALLOW 0ULL
-
-/*
- * Allow for protected VMs:
- * - Mixed-endian
- * - Distinction between Secure and Non-secure Memory
- * - Mixed-endian at EL0 only
- * - Non-context synchronizing exception entry and exit
- *
- * Restrict to the following *unsigned* features for protected VMs:
- * - 40-bit IPA
- * - 16-bit ASID
- */
-#define PVM_ID_AA64MMFR0_ALLOW (\
-	ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_BIGEND) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_SNSMEM) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_BIGENDEL0) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_EXS) | \
-	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_PARANGE), ID_AA64MMFR0_EL1_PARANGE_40) | \
-	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_ASIDBITS), ID_AA64MMFR0_EL1_ASIDBITS_16) \
-	)
-
-/*
- * Allow for protected VMs:
- * - Hardware translation table updates to Access flag and Dirty state
- * - Number of VMID bits from CPU
- * - Hierarchical Permission Disables
- * - Privileged Access Never
- * - SError interrupt exceptions from speculative reads
- * - Enhanced Translation Synchronization
- * - Control for cache maintenance permission
- */
-#define PVM_ID_AA64MMFR1_ALLOW (\
-	ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_HAFDBS) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_VMIDBits) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_HPDS) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_PAN) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_SpecSEI) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_ETS) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_CMOW) \
-	)
-
-/*
- * Allow for protected VMs:
- * - Common not Private translations
- * - User Access Override
- * - IESB bit in the SCTLR_ELx registers
- * - Unaligned single-copy atomicity and atomic functions
- * - ESR_ELx.EC value on an exception by read access to feature ID space
- * - TTL field in address operations.
- * - Break-before-make sequences when changing translation block size
- * - E0PDx mechanism
- */
-#define PVM_ID_AA64MMFR2_ALLOW (\
-	ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_CnP) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_UAO) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_IESB) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_AT) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_IDS) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_TTL) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_BBM) | \
-	ARM64_FEATURE_MASK(ID_AA64MMFR2_EL1_E0PD) \
-	)
-
-#define PVM_ID_AA64MMFR3_ALLOW (0ULL)
-
-/*
- * No support for Scalable Vectors for protected VMs:
- *	Requires additional support from KVM, e.g., context-switching and
- *	trapping at EL2
- */
-#define PVM_ID_AA64ZFR0_ALLOW (0ULL)
-
-/*
- * No support for debug, including breakpoints, and watchpoints for protected
- * VMs:
- *	The Arm architecture mandates support for at least the Armv8 debug
- *	architecture, which would include at least 2 hardware breakpoints and
- *	watchpoints. Providing that support to protected guests adds
- *	considerable state and complexity. Therefore, the reserved value of 0 is
- *	used for debug-related fields.
- */
-#define PVM_ID_AA64DFR0_ALLOW (0ULL)
-#define PVM_ID_AA64DFR1_ALLOW (0ULL)
-
-/*
- * No support for implementation defined features.
- */
-#define PVM_ID_AA64AFR0_ALLOW (0ULL)
-#define PVM_ID_AA64AFR1_ALLOW (0ULL)
-
-/*
- * No restrictions on instructions implemented in AArch64.
- */
-#define PVM_ID_AA64ISAR0_ALLOW (\
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_AES) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_SHA1) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_SHA2) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_CRC32) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_ATOMIC) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_RDM) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_SHA3) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_SM3) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_SM4) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_DP) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_FHM) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_TS) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_TLB) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_RNDR) \
-	)
-
-/* Restrict pointer authentication to the basic version. */
-#define PVM_ID_AA64ISAR1_ALLOW (\
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_DPB) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_JSCVT) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_FCMA) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_LRCPC) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPA) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_FRINTTS) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_SB) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_SPECRES) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_BF16) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_DGH) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_I8MM) | \
-	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA), ID_AA64ISAR1_EL1_APA_PAuth) | \
-	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API), ID_AA64ISAR1_EL1_API_PAuth) \
-	)
-
-#define PVM_ID_AA64ISAR2_ALLOW (\
-	ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_ATS1A)| \
-	ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3) | \
-	ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_MOPS) | \
-	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3), ID_AA64ISAR2_EL1_APA3_PAuth) \
-	)
-
-bool kvm_handle_pvm_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code);
-bool kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu, u64 *exit_code);
-void kvm_init_pvm_id_regs(struct kvm_vcpu *vcpu);
-int kvm_check_pvm_sysreg_table(void);
-
-#endif /* __ARM64_KVM_FIXED_CONFIG_H__ */
diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index 24a9a8330d19..6ff7cbc33000 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -70,4 +70,9 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
 					 unsigned int vcpu_idx);
 void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu);
 
+bool kvm_handle_pvm_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code);
+bool kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu, u64 *exit_code);
+void kvm_init_pvm_id_regs(struct kvm_vcpu *vcpu);
+int kvm_check_pvm_sysreg_table(void);
+
 #endif /* __ARM64_KVM_NVHE_PKVM_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 3b4ea97148b9..ffa500b500f2 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -9,7 +9,6 @@
 
 #include
 
-#include
 #include
 #include
 #include
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index cbdd18cd3f98..31bd729ea45c 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -12,7 +12,6 @@
 #include
 #include
 
-#include
 #include
 #include
 #include
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index cc69106734ca..7786a83d0fa8 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -26,7 +26,6 @@
 #include
 #include
 
-#include
 #include
 
 /* Non-VHE specific context */
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index cea3b099dc56..c6809390c6e6 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -11,7 +11,7 @@
 
 #include
 
-#include
+#include
 #include
"../../sys_regs.h" From patchwork Fri Nov 22 11:06:19 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 13883091 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 0CEDAD75E25 for ; Fri, 22 Nov 2024 11:16:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Type:Cc:To:From: Subject:Message-ID:References:Mime-Version:In-Reply-To:Date:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=6ktoHeGKmXYIdsUvMRAz3vVzbrdPr/EI44OUns8qrUY=; b=ESC22m4Q6umheOmgpuW1sASuYS /FpXQWPTCUbeOK6f9Q2xNmleGOlyKbFE/u6mDlFXirKG5JDUFwfML0p4G08tI0F+95qDuO93soH6x Fk8CYLWjehx3K6hH4Niq47TK1ecws1RT2TFSaxfZVTqtm+U0QMqfhFUrFhIIfwrOa9upY2/A62XK7 OAWxBjsISBSrAuIAFKypUFyEWfI81qEJD1RVN3LRiGuopSaFaip+9TBKpFDXqSh8860uLAK+jGYN8 5TaG+bbYFREeU/f4J6HR4Tu+0D7aaRPLlDuA19k2xRQ2MMW/mf0GXm2Lqiq5Bmf7ID8j6Y0QjE4uD 4Lvz8bKw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tERdn-00000002Kww-408q; Fri, 22 Nov 2024 11:15:59 +0000 Received: from mail-wr1-x44a.google.com ([2a00:1450:4864:20::44a]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tERUu-00000002JUY-0AFG for linux-arm-kernel@lists.infradead.org; Fri, 22 Nov 2024 11:06:49 +0000 Received: by mail-wr1-x44a.google.com with SMTP id ffacd0b85a97d-38235e99a53so1117317f8f.0 for ; Fri, 22 Nov 2024 
03:06:47 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1732273606; x=1732878406; darn=lists.infradead.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=6ktoHeGKmXYIdsUvMRAz3vVzbrdPr/EI44OUns8qrUY=; b=DIiZYNI3eLuJGTBGigF6d4LOXq0HqtiOqSwJijPgOdZOZnn32KIuK0wePhfGvasl07 tcdLthtY1U0xQmK/zUKaglghIKFdTIriuREYTxz7L/0EeR6nWe9inQLOuVpaV7WMvXum YywKD6s3qYRTc3GAyP4CJBn0IfKR3DeyBMLNrI6LaKgvPSDCD9KXloNmjads0OPrvTjD 6S4V/TArCYICRrvcSstkLgpaST0d0T+DN2rfIwtx2LuRHBxn6WVA64y89AO5LmejlSHg eH2qpz5uyV4YcsBn8JgEeD0z6TgESx0nKFaJH78J/SgFB3uLjXxbkysQ/3QOVRS/rFAM 380Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1732273606; x=1732878406; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=6ktoHeGKmXYIdsUvMRAz3vVzbrdPr/EI44OUns8qrUY=; b=qz1PDPGzaxjPlntbGTMvQqJPkw3s7M4AX1wDjkvHzLG+vVTYFH/hn5xJJSG9veh61d wy2uudS6YIjl8eX+D3idWdbAk6pVyUxmXGK1KxSyz7SoGloGPzV2lxYl4C7qjGhjb1Po lqmChM2UH/aceKpXU945PnvbVlI5i2GPMHMrTfAE97icYcvlv4lycjZuv7ccLacG+UoG 0J/BA2yk4A1A0/HRgoc+P4SUKFMXGCYnzLPhI4sTx+GOYMJ5WKLGIZPtAsvvPXOApnzl tY32anaa1TXCGN3bNVTSYZw0QRc2PJJKkBSevPRj9+boeQ+W1A37i72G9ewiVqFu3TVK 4myQ== X-Forwarded-Encrypted: i=1; AJvYcCXmzg+aXonVVVI/M/sglHtBd1wyG458zfvAGWtWsd9Ov/5G5Y+imyIGYC06IvI888dYcD/xio1d8FjUEUU3QSCx@lists.infradead.org X-Gm-Message-State: AOJu0YxaTj0SV0Fhhw/+m7Vjyj5bizzMYOl87JUn/MKyt2L+bBUzK56w Z97SD/KMyZOXTjxrEt2LKDtC0A91vvd+jye1FugMG1ilmLkwv4/veW/b4i156TxiXl++vQ8k/Q= = X-Google-Smtp-Source: AGHT+IHxlIibkga1iw7v2vPgOGsJTBx+OxZIvFAGu3eNw4H3FAZffNRHym6mWyxiSwY3urBtHDIw7EKv/w== X-Received: from fuad.c.googlers.com ([fda3:e722:ac3:cc00:28:9cb1:c0a8:1613]) (user=tabba job=sendgmr) by 2002:a5d:5712:0:b0:382:3c47:f377 with SMTP id ffacd0b85a97d-3826060f567mr890f8f.3.1732273605841; Fri, 22 Nov 2024 03:06:45 -0800 (PST) Date: Fri, 22 Nov 2024 
11:06:19 +0000
In-Reply-To: <20241122110622.3010118-1-tabba@google.com>
Mime-Version: 1.0
References: <20241122110622.3010118-1-tabba@google.com>
X-Mailer: git-send-email 2.47.0.371.ga323438b13-goog
Message-ID: <20241122110622.3010118-10-tabba@google.com>
Subject: [PATCH v2 09/12] KVM: arm64: Remove redundant setting of HCR_EL2 trap bit
From: Fuad Tabba
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org,
 will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org,
 qperret@google.com, tabba@google.com

In hVHE mode, HCR_E2H should be set for both protected and
non-protected VMs. Since commit 23c9d58cb458 ("KVM: arm64: Fix
initializing traps in protected mode"), this is already the case for
all VMs, so setting the flag again here is redundant. Remove it.

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/nvhe/pkvm.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index ffa500b500f2..cede527a59d4 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -57,9 +57,6 @@ static void pvm_init_traps_hcr(struct kvm_vcpu *vcpu)
	/* No support for AArch32. */
	val |= HCR_RW;
 
-	if (has_hvhe())
-		val |= HCR_E2H;
-
	/*
	 * Always trap:
	 * - Feature id registers: to control features exposed to guests

From patchwork Fri Nov 22 11:06:20 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13883092
Date: Fri, 22 Nov 2024 11:06:20 +0000
In-Reply-To: <20241122110622.3010118-1-tabba@google.com>
Mime-Version: 1.0
References: <20241122110622.3010118-1-tabba@google.com>
X-Mailer: git-send-email 2.47.0.371.ga323438b13-goog
Message-ID: <20241122110622.3010118-11-tabba@google.com>
Subject: [PATCH v2 10/12] KVM: arm64: Calculate cptr_el2 traps on activating traps
From: Fuad Tabba
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org,
 will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org,
 qperret@google.com, tabba@google.com

Similar to VHE, calculate the value of cptr_el2 from scratch when
activating traps. This removes the need to store cptr_el2 in every
vcpu structure. Moreover, some trap bits, such as those that depend
on whether the guest owns the fp registers, need to be recomputed on
every vcpu run.
Reported-by: James Clark
Fixes: 5294afdbf45a ("KVM: arm64: Exclude FP ownership from kvm_vcpu_arch")
Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_host.h |  1 -
 arch/arm64/kvm/arm.c              |  1 -
 arch/arm64/kvm/hyp/nvhe/pkvm.c    | 42 -------------------------
 arch/arm64/kvm/hyp/nvhe/switch.c  | 51 +++++++++++++++++++------------
 4 files changed, 32 insertions(+), 63 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f333b189fb43..99660d040dda 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -708,7 +708,6 @@ struct kvm_vcpu_arch {
	u64 hcr_el2;
	u64 hcrx_el2;
	u64 mdcr_el2;
-	u64 cptr_el2;
 
	/* Exception Information */
	struct kvm_vcpu_fault_info fault;
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index b295218cdc24..8a3d02cf0a7a 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1546,7 +1546,6 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
	}
 
	vcpu_reset_hcr(vcpu);
-	vcpu->arch.cptr_el2 = kvm_get_reset_cptr_el2(vcpu);
 
	/*
	 * Handle the "start in power-off" case.
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index cede527a59d4..c8ab3e59f4b1 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -83,44 +83,6 @@ static void pvm_init_traps_hcr(struct kvm_vcpu *vcpu)
	vcpu->arch.hcr_el2 = val;
 }
 
-static void pvm_init_traps_cptr(struct kvm_vcpu *vcpu)
-{
-	struct kvm *kvm = vcpu->kvm;
-	u64 val = vcpu->arch.cptr_el2;
-
-	if (!has_hvhe()) {
-		val |= CPTR_NVHE_EL2_RES1;
-		val &= ~(CPTR_NVHE_EL2_RES0);
-	}
-
-	if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, IMP))
-		val |= CPTR_EL2_TAM;
-
-	/* SVE can be disabled by userspace even if supported. */
-	if (!vcpu_has_sve(vcpu)) {
-		if (has_hvhe())
-			val &= ~(CPACR_ELx_ZEN);
-		else
-			val |= CPTR_EL2_TZ;
-	}
-
-	/* No SME support in KVM. */
-	BUG_ON(kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP));
-	if (has_hvhe())
-		val &= ~(CPACR_ELx_SMEN);
-	else
-		val |= CPTR_EL2_TSM;
-
-	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceVer, IMP)) {
-		if (has_hvhe())
-			val |= CPACR_EL1_TTA;
-		else
-			val |= CPTR_EL2_TTA;
-	}
-
-	vcpu->arch.cptr_el2 = val;
-}
-
 static void pvm_init_traps_mdcr(struct kvm_vcpu *vcpu)
 {
	struct kvm *kvm = vcpu->kvm;
@@ -191,7 +153,6 @@ static int pkvm_vcpu_init_traps(struct pkvm_hyp_vcpu *hyp_vcpu)
	struct kvm_vcpu *vcpu = &hyp_vcpu->vcpu;
	int ret;
 
-	vcpu->arch.cptr_el2 = kvm_get_reset_cptr_el2(vcpu);
	vcpu->arch.mdcr_el2 = 0;
 
	pkvm_vcpu_reset_hcr(vcpu);
@@ -204,7 +165,6 @@ static int pkvm_vcpu_init_traps(struct pkvm_hyp_vcpu *hyp_vcpu)
		return ret;
 
	pvm_init_traps_hcr(vcpu);
-	pvm_init_traps_cptr(vcpu);
	pvm_init_traps_mdcr(vcpu);
 
	return 0;
@@ -644,8 +604,6 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
		return ret;
	}
 
-	hyp_vcpu->vcpu.arch.cptr_el2 = kvm_get_reset_cptr_el2(&hyp_vcpu->vcpu);
-
	return 0;
 }
 
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 7786a83d0fa8..0ebf84a9f9e2 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -35,33 +35,46 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 
 extern void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc);
 
-static void __activate_traps(struct kvm_vcpu *vcpu)
+static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
 {
-	u64 val;
+	u64 val = CPTR_EL2_TAM;	/* Same bit irrespective of E2H */
 
-	___activate_traps(vcpu, vcpu->arch.hcr_el2);
-	__activate_traps_common(vcpu);
+	if (has_hvhe()) {
+		val |= CPACR_ELx_TTA;
 
-	val = vcpu->arch.cptr_el2;
-	val |= CPTR_EL2_TAM;	/* Same bit irrespective of E2H */
-	val |= has_hvhe() ? CPACR_EL1_TTA : CPTR_EL2_TTA;
-	if (cpus_have_final_cap(ARM64_SME)) {
-		if (has_hvhe())
-			val &= ~CPACR_ELx_SMEN;
-		else
-			val |= CPTR_EL2_TSM;
-	}
+		if (guest_owns_fp_regs()) {
+			val |= CPACR_ELx_FPEN;
+			if (vcpu_has_sve(vcpu))
+				val |= CPACR_ELx_ZEN;
+		}
+	} else {
+		val |= CPTR_EL2_TTA | CPTR_NVHE_EL2_RES1;
 
-	if (!guest_owns_fp_regs()) {
-		if (has_hvhe())
-			val &= ~(CPACR_ELx_FPEN | CPACR_ELx_ZEN);
-		else
-			val |= CPTR_EL2_TFP | CPTR_EL2_TZ;
+		/*
+		 * Always trap SME since it's not supported in KVM.
+		 * TSM is RES1 if SME isn't implemented.
+		 */
+		val |= CPTR_EL2_TSM;
 
-		__activate_traps_fpsimd32(vcpu);
+		if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs())
+			val |= CPTR_EL2_TZ;
+
+		if (!guest_owns_fp_regs())
+			val |= CPTR_EL2_TFP;
	}
 
+	if (!guest_owns_fp_regs())
+		__activate_traps_fpsimd32(vcpu);
+
	kvm_write_cptr_el2(val);
+}
+
+static void __activate_traps(struct kvm_vcpu *vcpu)
+{
+	___activate_traps(vcpu, vcpu->arch.hcr_el2);
+	__activate_traps_common(vcpu);
+	__activate_cptr_traps(vcpu);
+
	write_sysreg(__this_cpu_read(kvm_hyp_vector), vbar_el2);
 
	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {

From patchwork Fri Nov 22 11:06:21 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13883093
Date: Fri, 22 Nov 2024 11:06:21 +0000
In-Reply-To: <20241122110622.3010118-1-tabba@google.com>
Mime-Version: 1.0
References: <20241122110622.3010118-1-tabba@google.com>
X-Mailer: git-send-email 2.47.0.371.ga323438b13-goog
Message-ID: <20241122110622.3010118-12-tabba@google.com>
Subject: [PATCH v2 11/12] KVM: arm64: Refactor kvm_reset_cptr_el2()
From: Fuad Tabba
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org,
 will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org,
 qperret@google.com, tabba@google.com
Fold kvm_get_reset_cptr_el2() into kvm_reset_cptr_el2(), since it is
its only caller. Add a comment to clarify that this function is meant
for the host value of cptr_el2.

No functional change intended.

Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_emulate.h | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index cf811009a33c..7b3dc52248ce 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -619,7 +619,8 @@ static __always_inline void kvm_write_cptr_el2(u64 val)
		write_sysreg(val, cptr_el2);
 }
 
-static __always_inline u64 kvm_get_reset_cptr_el2(struct kvm_vcpu *vcpu)
+/* Resets the value of cptr_el2 when returning to the host. */
+static __always_inline void kvm_reset_cptr_el2(struct kvm_vcpu *vcpu)
 {
	u64 val;
 
@@ -643,13 +644,6 @@ static __always_inline u64 kvm_get_reset_cptr_el2(struct kvm_vcpu *vcpu)
		val &= ~CPTR_EL2_TSM;
	}
 
-	return val;
-}
-
-static __always_inline void kvm_reset_cptr_el2(struct kvm_vcpu *vcpu)
-{
-	u64 val = kvm_get_reset_cptr_el2(vcpu);
-
	kvm_write_cptr_el2(val);
 }

From patchwork Fri Nov 22 11:06:22 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13883100
Date: Fri, 22 Nov 2024 11:06:22 +0000
In-Reply-To: <20241122110622.3010118-1-tabba@google.com>
Mime-Version: 1.0
References: <20241122110622.3010118-1-tabba@google.com>
X-Mailer: git-send-email 2.47.0.371.ga323438b13-goog
Message-ID: <20241122110622.3010118-13-tabba@google.com>
Subject: [PATCH v2 12/12] KVM: arm64: Fix the value of the CPTR_EL2 RES1 bitmask for nVHE
From: Fuad Tabba
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: maz@kernel.org, oliver.upton@linux.dev, james.clark@linaro.org,
 will@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
 yuzenghui@huawei.com, catalin.marinas@arm.com, broonie@kernel.org,
 qperret@google.com, tabba@google.com

Since the introduction of SME, bit 12 in CPTR_EL2 (nVHE) is TSM for
trapping SME, instead of RES1, as per ARM ARM DDI 0487K.a, section
D23.2.34. Fix the value of CPTR_NVHE_EL2_RES1 to reflect that, and
adjust the code that relies on it accordingly.
Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_arm.h     | 2 +-
 arch/arm64/include/asm/kvm_emulate.h | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 3e0f0de1d2da..24e4ac7c50f2 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -300,7 +300,7 @@
 #define CPTR_EL2_TSM	(1 << 12)
 #define CPTR_EL2_TFP	(1 << CPTR_EL2_TFP_SHIFT)
 #define CPTR_EL2_TZ	(1 << 8)
-#define CPTR_NVHE_EL2_RES1	0x000032ff /* known RES1 bits in CPTR_EL2 (nVHE) */
+#define CPTR_NVHE_EL2_RES1	(BIT(13) | BIT(9) | GENMASK(7, 0))
 #define CPTR_NVHE_EL2_RES0	(GENMASK(63, 32) |	\
				 GENMASK(29, 21) |	\
				 GENMASK(19, 14) |	\
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 7b3dc52248ce..6602a4c091ac 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -640,8 +640,8 @@ static __always_inline void kvm_reset_cptr_el2(struct kvm_vcpu *vcpu)
		if (vcpu_has_sve(vcpu) && guest_owns_fp_regs())
			val |= CPTR_EL2_TZ;
 
-		if (cpus_have_final_cap(ARM64_SME))
-			val &= ~CPTR_EL2_TSM;
+		if (!cpus_have_final_cap(ARM64_SME))
+			val |= CPTR_EL2_TSM;
	}
 
	kvm_write_cptr_el2(val);