From patchwork Wed Dec 14 19:40:46 2022
X-Patchwork-Submitter: Michael Roth
X-Patchwork-Id: 13073521
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Michael Roth
To: 
CC: Brijesh Singh
Subject: [PATCH RFC v7 54/64] KVM: SVM: Support SEV-SNP AP Creation NAE event
Date: Wed, 14 Dec 2022 13:40:46 -0600
Message-ID: <20221214194056.161492-55-michael.roth@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20221214194056.161492-1-michael.roth@amd.com>
References: <20221214194056.161492-1-michael.roth@amd.com>
MIME-Version: 1.0
X-Mailing-List: linux-crypto@vger.kernel.org

From: Tom Lendacky

Add support for the SEV-SNP AP Creation NAE event. This allows SEV-SNP
guests to alter the register state of the APs on their own, giving the
guest a way of simulating INIT-SIPI.

A new event, KVM_REQ_UPDATE_PROTECTED_GUEST_STATE, is created and used
to avoid updating the VMSA pointer while the vCPU is running.

For CREATE:
  The guest supplies the GPA of the VMSA to be used for the vCPU with
  the specified APIC ID. The GPA is saved in the svm struct of the
  target vCPU, the KVM_REQ_UPDATE_PROTECTED_GUEST_STATE event is added
  to the vCPU and then the vCPU is kicked.

For CREATE_ON_INIT:
  The guest supplies the GPA of the VMSA to be used for the vCPU with
  the specified APIC ID the next time an INIT is performed. The GPA is
  saved in the svm struct of the target vCPU.

For DESTROY:
  The guest indicates it wishes to stop the vCPU. The GPA is cleared
  from the svm struct, the KVM_REQ_UPDATE_PROTECTED_GUEST_STATE event
  is added to the vCPU and then the vCPU is kicked.

The KVM_REQ_UPDATE_PROTECTED_GUEST_STATE event handler will be invoked
as a result of the event or as a result of an INIT. The handler sets
the vCPU to the KVM_MP_STATE_UNINITIALIZED state, so that any errors
will leave the vCPU as not runnable. Any previous VMSA pages that were
installed as part of an SEV-SNP AP Creation NAE event are un-pinned.
If a new VMSA is to be installed, the VMSA guest page is pinned and
set as the VMSA in the vCPU VMCB and the vCPU state is set to
KVM_MP_STATE_RUNNABLE. If a new VMSA is not to be installed, the VMSA
is cleared in the vCPU VMCB and the vCPU state is left as
KVM_MP_STATE_UNINITIALIZED to prevent it from being run.
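For reference, the guest side of this NAE event (not part of this patch)
programs the GHCB with the same field layout that sev_snp_ap_creation()
unpacks below: the request type in the low 32 bits of SW_EXITINFO1, the
target APIC ID in the high 32 bits, the GPA of the new VMSA page in
SW_EXITINFO2, and the VMSA's SEV features in RAX. The following is only
an illustrative sketch, loosely modeled on the Linux guest's SNP
AP-startup path; the GHCB helpers (__sev_get_ghcb(), ghcb_set_*(),
VMGEXIT(), etc.), the VMSA allocation/RMPADJUST step, and the function
name snp_wakeup_ap() are guest-side details and assumptions outside the
scope of this patch:

/*
 * Illustrative guest-side sketch only (assumes the Linux guest's
 * SEV-ES GHCB helpers); not part of this patch.
 */
static int snp_wakeup_ap(u32 apic_id, struct sev_es_save_area *vmsa)
{
	struct ghcb_state state;
	unsigned long flags;
	struct ghcb *ghcb;
	int ret = 0;

	/* The VMSA page must already have been RMPADJUSTed to a VMSA page. */
	local_irq_save(flags);
	ghcb = __sev_get_ghcb(&state);
	vc_ghcb_invalidate(ghcb);

	/* SEV features must match the BSP's (injection modes may not change). */
	ghcb_set_rax(ghcb, vmsa->sev_features);
	ghcb_set_sw_exit_code(ghcb, SVM_VMGEXIT_AP_CREATION);
	ghcb_set_sw_exit_info_1(ghcb,
				((u64)apic_id << 32) | SVM_VMGEXIT_AP_CREATE);
	ghcb_set_sw_exit_info_2(ghcb, __pa(vmsa));

	/* VMGEXIT to the hypervisor; lands in sev_snp_ap_creation(). */
	sev_es_wr_ghcb_msr(__pa(ghcb));
	VMGEXIT();

	/* On failure the hypervisor sets SW_EXITINFO1[31:0] to 1 (see below). */
	if (!ghcb_sw_exit_info_1_is_valid(ghcb) ||
	    lower_32_bits(ghcb->save.sw_exit_info_1))
		ret = -EINVAL;

	__sev_put_ghcb(&state);
	local_irq_restore(flags);

	return ret;
}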
Signed-off-by: Tom Lendacky
Signed-off-by: Brijesh Singh
Signed-off-by: Ashish Kalra
[mdr: add handling for restrictedmem]
Signed-off-by: Michael Roth
---
 arch/x86/include/asm/kvm_host.h |   1 +
 arch/x86/include/asm/svm.h      |   7 +-
 arch/x86/kvm/svm/sev.c          | 245 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.c          |   3 +
 arch/x86/kvm/svm/svm.h          |   7 +
 arch/x86/kvm/x86.c              |   9 ++
 6 files changed, 271 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 15b9c678b281..5958cd93e5e6 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -115,6 +115,7 @@
 #define KVM_REQ_HV_TLB_FLUSH \
 	KVM_ARCH_REQ_FLAGS(32, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_MEMORY_MCE		KVM_ARCH_REQ(33)
+#define KVM_REQ_UPDATE_PROTECTED_GUEST_STATE	KVM_ARCH_REQ(34)
 
 #define CR0_RESERVED_BITS \
 	(~(unsigned long)(X86_CR0_PE | X86_CR0_MP | X86_CR0_EM | X86_CR0_TS \
diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index c18d78d5e505..e76ad26ba64f 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -278,7 +278,12 @@ enum avic_ipi_failure_cause {
 #define AVIC_HPA_MASK	~((0xFFFULL << 52) | 0xFFF)
 #define VMCB_AVIC_APIC_BAR_MASK		0xFFFFFFFFFF000ULL
 
-#define SVM_SEV_FEAT_SNP_ACTIVE		BIT(0)
+#define SVM_SEV_FEAT_SNP_ACTIVE			BIT(0)
+#define SVM_SEV_FEAT_RESTRICTED_INJECTION	BIT(3)
+#define SVM_SEV_FEAT_ALTERNATE_INJECTION	BIT(4)
+#define SVM_SEV_FEAT_INT_INJ_MODES		\
+	(SVM_SEV_FEAT_RESTRICTED_INJECTION |	\
+	 SVM_SEV_FEAT_ALTERNATE_INJECTION)
 
 struct vmcb_seg {
 	u16 selector;
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 36c312143d12..2f4c9f2bcf76 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -771,6 +771,7 @@ static int sev_launch_update_data(struct kvm *kvm,
 
 static int sev_es_sync_vmsa(struct vcpu_svm *svm)
 {
+	struct kvm_sev_info *sev = &to_kvm_svm(svm->vcpu.kvm)->sev_info;
 	struct sev_es_save_area *save = svm->sev_es.vmsa;
 
 	/* Check some debug related fields before encrypting the VMSA */
@@ -816,6 +817,12 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
 	if (sev_snp_guest(svm->vcpu.kvm))
 		save->sev_features |= SVM_SEV_FEAT_SNP_ACTIVE;
 
+	/*
+	 * Save the VMSA synced SEV features. For now, they are the same for
+	 * all vCPUs, so just save each time.
+	 */
+	sev->sev_features = save->sev_features;
+
 	pr_debug("Virtual Machine Save Area (VMSA):\n");
 	print_hex_dump_debug("", DUMP_PREFIX_NONE, 16, 1, save, sizeof(*save), false);
 
@@ -3182,6 +3189,10 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm, u64 *exit_code)
 		if (!ghcb_sw_scratch_is_valid(ghcb))
 			goto vmgexit_err;
 		break;
+	case SVM_VMGEXIT_AP_CREATION:
+		if (!ghcb_rax_is_valid(ghcb))
+			goto vmgexit_err;
+		break;
 	case SVM_VMGEXIT_NMI_COMPLETE:
 	case SVM_VMGEXIT_AP_HLT_LOOP:
 	case SVM_VMGEXIT_AP_JUMP_TABLE:
@@ -3555,6 +3566,226 @@ static void snp_handle_ext_guest_request(struct vcpu_svm *svm, gpa_t req_gpa, gp
 	svm_set_ghcb_sw_exit_info_2(vcpu, rc);
 }
 
+static kvm_pfn_t gfn_to_pfn_restricted(struct kvm *kvm, gfn_t gfn)
+{
+	struct kvm_memory_slot *slot;
+	kvm_pfn_t pfn;
+	int order = 0;
+
+	slot = gfn_to_memslot(kvm, gfn);
+	if (!kvm_slot_can_be_private(slot)) {
+		pr_err("SEV: Failure retrieving restricted memslot for GFN 0x%llx, flags 0x%x, userspace_addr: 0x%lx\n",
+		       gfn, slot->flags, slot->userspace_addr);
+		return INVALID_PAGE;
+	}
+
+	if (!kvm_mem_is_private(kvm, gfn)) {
+		pr_err("SEV: Failure retrieving restricted PFN for GFN 0x%llx\n", gfn);
+		return INVALID_PAGE;
+	}
+
+	if (kvm_restricted_mem_get_pfn(slot, gfn, &pfn, &order)) {
+		pr_err("SEV: Failure retrieving restricted PFN for GFN 0x%llx\n", gfn);
+		return INVALID_PAGE;
+	}
+
+	put_page(pfn_to_page(pfn));
+
+	return pfn;
+}
+
+static int __sev_snp_update_protected_guest_state(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	kvm_pfn_t pfn;
+	hpa_t cur_pa;
+
+	WARN_ON(!mutex_is_locked(&svm->sev_es.snp_vmsa_mutex));
+
+	/* Save off the current VMSA PA for later checks */
+	cur_pa = svm->sev_es.vmsa_pa;
+
+	/* Mark the vCPU as offline and not runnable */
+	vcpu->arch.pv.pv_unhalted = false;
+	vcpu->arch.mp_state = KVM_MP_STATE_STOPPED;
+
+	/* Clear use of the VMSA */
+	svm->sev_es.vmsa_pa = INVALID_PAGE;
+	svm->vmcb->control.vmsa_pa = INVALID_PAGE;
+
+	if (cur_pa != __pa(svm->sev_es.vmsa) && VALID_PAGE(cur_pa)) {
+		/*
+		 * The svm->sev_es.vmsa_pa field holds the hypervisor physical
+		 * address of the about to be replaced VMSA which will no longer
+		 * be used or referenced, so un-pin it. However, restricted
+		 * pages (e.g. via AP creation) should be left to the
+		 * restrictedmem backend to deal with, so don't release the
+		 * page in that case.
+		 */
+		if (!VALID_PAGE(gfn_to_pfn_restricted(vcpu->kvm,
+						      gpa_to_gfn(svm->sev_es.snp_vmsa_gpa))))
+			kvm_release_pfn_dirty(__phys_to_pfn(cur_pa));
+	}
+
+	if (VALID_PAGE(svm->sev_es.snp_vmsa_gpa)) {
+		/*
+		 * The VMSA is referenced by the hypervisor physical address,
+		 * so retrieve the PFN and ensure it is restricted memory.
+		 */
+		pfn = gfn_to_pfn_restricted(vcpu->kvm, gpa_to_gfn(svm->sev_es.snp_vmsa_gpa));
+		if (!VALID_PAGE(pfn))
+			return pfn;
+
+		/* Use the new VMSA */
+		svm->sev_es.vmsa_pa = pfn_to_hpa(pfn);
+		svm->vmcb->control.vmsa_pa = svm->sev_es.vmsa_pa;
+
+		/* Mark the vCPU as runnable */
+		vcpu->arch.pv.pv_unhalted = false;
+		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
+
+		svm->sev_es.snp_vmsa_gpa = INVALID_PAGE;
+	}
+
+	/*
+	 * When replacing the VMSA during SEV-SNP AP creation,
+	 * mark the VMCB dirty so that full state is always reloaded.
+	 */
+	vmcb_mark_all_dirty(svm->vmcb);
+
+	return 0;
+}
+
+/*
+ * Invoked as part of svm_vcpu_reset() processing of an init event.
+ */
+void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	int ret;
+
+	if (!sev_snp_guest(vcpu->kvm))
+		return;
+
+	mutex_lock(&svm->sev_es.snp_vmsa_mutex);
+
+	if (!svm->sev_es.snp_ap_create)
+		goto unlock;
+
+	svm->sev_es.snp_ap_create = false;
+
+	ret = __sev_snp_update_protected_guest_state(vcpu);
+	if (ret)
+		vcpu_unimpl(vcpu, "snp: AP state update on init failed\n");
+
+unlock:
+	mutex_unlock(&svm->sev_es.snp_vmsa_mutex);
+}
+
+static int sev_snp_ap_creation(struct vcpu_svm *svm)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(svm->vcpu.kvm)->sev_info;
+	struct kvm_vcpu *vcpu = &svm->vcpu;
+	struct kvm_vcpu *target_vcpu;
+	struct vcpu_svm *target_svm;
+	unsigned int request;
+	unsigned int apic_id;
+	bool kick;
+	int ret;
+
+	request = lower_32_bits(svm->vmcb->control.exit_info_1);
+	apic_id = upper_32_bits(svm->vmcb->control.exit_info_1);
+
+	/* Validate the APIC ID */
+	target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, apic_id);
+	if (!target_vcpu) {
+		vcpu_unimpl(vcpu, "vmgexit: invalid AP APIC ID [%#x] from guest\n",
+			    apic_id);
+		return -EINVAL;
+	}
+
+	ret = 0;
+
+	target_svm = to_svm(target_vcpu);
+
+	/*
+	 * The target vCPU is valid, so the vCPU will be kicked unless the
+	 * request is for CREATE_ON_INIT. For any errors at this stage, the
+	 * kick will place the vCPU in an non-runnable state.
+	 */
+	kick = true;
+
+	mutex_lock(&target_svm->sev_es.snp_vmsa_mutex);
+
+	target_svm->sev_es.snp_vmsa_gpa = INVALID_PAGE;
+	target_svm->sev_es.snp_ap_create = true;
+
+	/* Interrupt injection mode shouldn't change for AP creation */
+	if (request < SVM_VMGEXIT_AP_DESTROY) {
+		u64 sev_features;
+
+		sev_features = vcpu->arch.regs[VCPU_REGS_RAX];
+		sev_features ^= sev->sev_features;
+		if (sev_features & SVM_SEV_FEAT_INT_INJ_MODES) {
+			vcpu_unimpl(vcpu, "vmgexit: invalid AP injection mode [%#lx] from guest\n",
+				    vcpu->arch.regs[VCPU_REGS_RAX]);
+			ret = -EINVAL;
+			goto out;
+		}
+	}
+
+	switch (request) {
+	case SVM_VMGEXIT_AP_CREATE_ON_INIT:
+		kick = false;
+		fallthrough;
+	case SVM_VMGEXIT_AP_CREATE:
+		if (!page_address_valid(vcpu, svm->vmcb->control.exit_info_2)) {
+			vcpu_unimpl(vcpu, "vmgexit: invalid AP VMSA address [%#llx] from guest\n",
+				    svm->vmcb->control.exit_info_2);
+			ret = -EINVAL;
+			goto out;
+		}
+
+		/*
+		 * Malicious guest can RMPADJUST a large page into VMSA which
+		 * will hit the SNP erratum where the CPU will incorrectly signal
+		 * an RMP violation #PF if a hugepage collides with the RMP entry
+		 * of VMSA page, reject the AP CREATE request if VMSA address from
+		 * guest is 2M aligned.
+		 */
+		if (IS_ALIGNED(svm->vmcb->control.exit_info_2, PMD_SIZE)) {
+			vcpu_unimpl(vcpu,
+				    "vmgexit: AP VMSA address [%llx] from guest is unsafe as it is 2M aligned\n",
+				    svm->vmcb->control.exit_info_2);
+			ret = -EINVAL;
+			goto out;
+		}
+
+		target_svm->sev_es.snp_vmsa_gpa = svm->vmcb->control.exit_info_2;
+		break;
+	case SVM_VMGEXIT_AP_DESTROY:
+		break;
+	default:
+		vcpu_unimpl(vcpu, "vmgexit: invalid AP creation request [%#x] from guest\n",
+			    request);
+		ret = -EINVAL;
+		break;
+	}
+
+out:
+	if (kick) {
+		if (target_vcpu->arch.mp_state == KVM_MP_STATE_UNINITIALIZED)
+			target_vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
+
+		kvm_make_request(KVM_REQ_UPDATE_PROTECTED_GUEST_STATE, target_vcpu);
+		kvm_vcpu_kick(target_vcpu);
+	}
+
+	mutex_unlock(&target_svm->sev_es.snp_vmsa_mutex);
+
+	return ret;
+}
+
 static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 {
 	struct vmcb_control_area *control = &svm->vmcb->control;
@@ -3802,6 +4033,18 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
 		ret = 1;
 		break;
 	}
+	case SVM_VMGEXIT_AP_CREATION:
+		ret = sev_snp_ap_creation(svm);
+		if (ret) {
+			svm_set_ghcb_sw_exit_info_1(vcpu, 1);
+			svm_set_ghcb_sw_exit_info_2(vcpu,
+						    X86_TRAP_GP |
+						    SVM_EVTINJ_TYPE_EXEPT |
+						    SVM_EVTINJ_VALID);
+		}
+
+		ret = 1;
+		break;
 	case SVM_VMGEXIT_UNSUPPORTED_EVENT:
 		vcpu_unimpl(vcpu,
 			    "vmgexit: unsupported event - exit_info_1=%#llx, exit_info_2=%#llx\n",
@@ -3906,6 +4149,8 @@ void sev_es_vcpu_reset(struct vcpu_svm *svm)
 	set_ghcb_msr(svm, GHCB_MSR_SEV_INFO(GHCB_VERSION_MAX,
 					    GHCB_VERSION_MIN,
 					    sev_enc_bit));
+
+	mutex_init(&svm->sev_es.snp_vmsa_mutex);
 }
 
 void sev_es_prepare_switch_to_guest(struct sev_es_save_area *hostsa)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index e9317d27a01d..7f8c480dfa5e 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1337,6 +1337,9 @@ static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	svm->spec_ctrl = 0;
 	svm->virt_spec_ctrl = 0;
 
+	if (init_event)
+		sev_snp_init_protected_guest_state(vcpu);
+
 	init_vmcb(vcpu);
 
 	if (!init_event)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 284902e22dce..5e7cb0260dc3 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -103,6 +103,8 @@ struct kvm_sev_info {
 	spinlock_t psc_lock;
 	void *snp_certs_data;
 	struct mutex guest_req_lock;
+
+	u64 sev_features; /* Features set at VMSA creation */
 };
 
 struct kvm_svm {
@@ -224,6 +226,10 @@ struct vcpu_sev_es_state {
 	u64 ghcb_sw_exit_info_2;
 
 	u64 ghcb_registered_gpa;
+
+	struct mutex snp_vmsa_mutex;
+	gpa_t snp_vmsa_gpa;
+	bool snp_ap_create;
 };
 
 struct vcpu_svm {
@@ -734,6 +740,7 @@ void sev_rmp_page_level_adjust(struct kvm *kvm, gfn_t gfn, int *level);
 int sev_post_map_gfn(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn);
 void sev_post_unmap_gfn(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn);
 void handle_rmp_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code);
+void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu);
 int sev_fault_is_private(struct kvm *kvm, gpa_t gpa, u64 error_code, bool *private_fault);
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 08dd1ef7e136..a08601277497 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10387,6 +10387,12 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 			r = 0;
 			goto out;
 		}
+
+		if (kvm_check_request(KVM_REQ_UPDATE_PROTECTED_GUEST_STATE, vcpu)) {
+			kvm_vcpu_reset(vcpu, true);
+			if (vcpu->arch.mp_state != KVM_MP_STATE_RUNNABLE)
+				goto out;
+		}
 	}
 
 	if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win ||
@@ -12667,6 +12673,9 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 		return true;
 #endif
 
+	if (kvm_test_request(KVM_REQ_UPDATE_PROTECTED_GUEST_STATE, vcpu))
+		return true;
+
 	if (kvm_arch_interrupt_allowed(vcpu) &&
 	    (kvm_cpu_has_interrupt(vcpu) ||
 	     kvm_guest_apic_has_interrupt(vcpu)))