Message ID: 20240418194133.1452059-10-michael.roth@amd.com
State:      New, archived
Series:     Add AMD Secure Nested Paging (SEV-SNP) Hypervisor Support
On Thu, Apr 18, 2024 at 9:42 PM Michael Roth <michael.roth@amd.com> wrote:
> +/* As defined by SEV-SNP Firmware ABI, under "Guest Policy". */
> +#define SNP_POLICY_MASK_API_MAJOR      GENMASK_ULL(15, 8)
> +#define SNP_POLICY_MASK_API_MINOR      GENMASK_ULL(7, 0)
> +
> +#define SNP_POLICY_MASK_VALID          (SNP_POLICY_MASK_SMT           | \
> +                                        SNP_POLICY_MASK_RSVD_MBO      | \
> +                                        SNP_POLICY_MASK_DEBUG         | \
> +                                        SNP_POLICY_MASK_SINGLE_SOCKET | \
> +                                        SNP_POLICY_MASK_API_MAJOR     | \
> +                                        SNP_POLICY_MASK_API_MINOR)
> +
> +/* KVM's SNP support is compatible with 1.51 of the SEV-SNP Firmware ABI. */
> +#define SNP_POLICY_API_MAJOR           1
> +#define SNP_POLICY_API_MINOR           51
>
> +static inline bool sev_version_greater_or_equal(u8 major, u8 minor)
> +{
> +        if (major < SNP_POLICY_API_MAJOR)
> +                return true;

Should it perhaps refuse version 0.x? With something like a

  #define SNP_POLICY_API_MAJOR_MIN 1

to make it a bit more future proof (and testable).

> +        major = (params.policy & SNP_POLICY_MASK_API_MAJOR);

This should be ">> 8". Do the QEMU patches not set the API version? :)

Paolo
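For reference, the extraction Paolo is pointing at can also be expressed with the bitfield helpers that pair with GENMASK_ULL(); the sketch below is illustrative only (the helper name is not part of the posted patch):

  #include <linux/bitfield.h>

  /* Sketch only: FIELD_GET() shifts the masked field down for you, which
   * avoids the missing ">> 8" noted above. */
  static void snp_policy_api_version(u64 policy, u8 *major, u8 *minor)
  {
          *major = FIELD_GET(SNP_POLICY_MASK_API_MAJOR, policy);
          *minor = FIELD_GET(SNP_POLICY_MASK_API_MINOR, policy);
  }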
On Fri, Apr 19, 2024 at 01:52:24PM +0200, Paolo Bonzini wrote:
> On Thu, Apr 18, 2024 at 9:42 PM Michael Roth <michael.roth@amd.com> wrote:
> > +/* As defined by SEV-SNP Firmware ABI, under "Guest Policy". */
> > +#define SNP_POLICY_MASK_API_MAJOR      GENMASK_ULL(15, 8)
> > +#define SNP_POLICY_MASK_API_MINOR      GENMASK_ULL(7, 0)
> > +
> > +#define SNP_POLICY_MASK_VALID          (SNP_POLICY_MASK_SMT           | \
> > +                                        SNP_POLICY_MASK_RSVD_MBO      | \
> > +                                        SNP_POLICY_MASK_DEBUG         | \
> > +                                        SNP_POLICY_MASK_SINGLE_SOCKET | \
> > +                                        SNP_POLICY_MASK_API_MAJOR     | \
> > +                                        SNP_POLICY_MASK_API_MINOR)
> > +
> > +/* KVM's SNP support is compatible with 1.51 of the SEV-SNP Firmware ABI. */
> > +#define SNP_POLICY_API_MAJOR           1
> > +#define SNP_POLICY_API_MINOR           51
> >
> > +static inline bool sev_version_greater_or_equal(u8 major, u8 minor)
> > +{
> > +        if (major < SNP_POLICY_API_MAJOR)
> > +                return true;
>
> Should it perhaps refuse version 0.x? With something like a
>
>   #define SNP_POLICY_API_MAJOR_MIN 1
>
> to make it a bit more future proof (and testable).
>
> > +        major = (params.policy & SNP_POLICY_MASK_API_MAJOR);
>
> This should be ">> 8". Do the QEMU patches not set the API version? :)

Argh... it does if you set it via the -object sev-snp-guest,policy=0x... option. I tested with reserved ranges and other flags, but not with non-zero major/minor API fields. =/

But I'm having second thoughts about trying to enforce the API version via KVM_SEV_SNP_LAUNCH_START. In practice, the only sensible way to interpret it is as "the minimum firmware version that userspace is comfortable running a particular guest on". And that enforcement is already handled as part of the SNP_LAUNCH_START firmware command in the SNP Firmware ABI: if the policy specifies a higher minimum version than the firmware that is currently running, the command fails with an error that QEMU reports as:

  sev_snp_launch_start: SNP_LAUNCH_START ret=-5 fw_error=7 'Policy is not allowed'

On the firmware driver side (drivers/crypto/ccp/sev-dev.c), we already enforce 1.51 as the minimum supported SNP firmware, so that already covers the SNP_POLICY_API_MAJOR_MIN case as well; i.e. the test surface KVM needs to concern itself with is effectively 1.51+ already. In that sense, how restrictive a minimum firmware version the user requires is entirely up to the user, and has no bearing on what firmware versions KVM needs to concern itself with. Whether or not KVM fully *exposes* a particular user-visible feature of a newer firmware/ABI version is then a separate question, to be handled via the normal KVM capabilities/attributes mechanisms.

So my current leaning is to send a v14 that backs out the major/minor policy enforcement and let firmware handle that aspect. (and also address your other comments). But let me know if you think that should be handled differently.

Thanks!

-Mike

>
> Paolo
>
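To make the firmware-side semantics concrete: the minimum acceptable firmware version is simply encoded in the low 16 bits of the guest policy, so a userspace VMM relying on that firmware check only needs something like the hypothetical helper below (the helper and its name are illustrative, not taken from QEMU or the kernel):

  #include <stdint.h>

  /* Hypothetical userspace helper: fold a minimum firmware version into the
   * SNP guest policy (bits 15:8 = major, bits 7:0 = minor, per the "Guest
   * Policy" section of the SEV-SNP Firmware ABI). If the running firmware is
   * older than this, SNP_LAUNCH_START fails with the policy error shown above. */
  static inline uint64_t snp_policy_with_min_fw(uint64_t policy,
                                                uint8_t major, uint8_t minor)
  {
          return (policy & ~0xffffULL) | ((uint64_t)major << 8) | minor;
  }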
On Fri, Apr 19, 2024 at 4:19 PM Michael Roth <michael.roth@amd.com> wrote:
> So my current leaning is to send a v14 that backs out the major/minor
> policy enforcement and let firmware handle that aspect. (and also
> address your other comments).

Sounds good to me! In the meanwhile I guess you can also update to have
PFERR_PRIVATE_ACCESS.

Paolo
On Thu, Apr 18, 2024, Michael Roth wrote:
> +static inline bool sev_version_greater_or_equal(u8 major, u8 minor)
> +{
> +        if (major < SNP_POLICY_API_MAJOR)
> +                return true;
> +
> +        if (major == SNP_POLICY_API_MAJOR && minor <= SNP_POLICY_API_MINOR)
> +                return true;
> +
> +        return false;
> +}
> +
> +static int snp_launch_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
> +{
> +        struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +        struct sev_data_snp_launch_start start = {0};
> +        struct kvm_sev_snp_launch_start params;
> +        u8 major, minor;
> +        int rc;
> +
> +        if (!sev_snp_guest(kvm))
> +                return -ENOTTY;
> +
> +        if (copy_from_user(&params, u64_to_user_ptr(argp->data), sizeof(params)))
> +                return -EFAULT;
> +
> +        /* Don't allow userspace to allocate memory for more than 1 SNP context. */
> +        if (sev->snp_context) {
> +                pr_debug("SEV-SNP context already exists. Refusing to allocate an additional one.\n");

What's the plan with all these printks?  There are far too many in this series.
Some might be useful, but many of them have no business landing upstream.

> +                return -EINVAL;
> +        }
> +
> +        sev->snp_context = snp_context_create(kvm, argp);
> +        if (!sev->snp_context)
> +                return -ENOTTY;
> +
> +        if (params.policy & ~SNP_POLICY_MASK_VALID) {
> +                pr_debug("SEV-SNP hypervisor does not support requested policy %llx (supported %llx).\n",

What does "SEV-SNP hypervisor" even mean?

> +                         params.policy, SNP_POLICY_MASK_VALID);
> +                return -EINVAL;
> +        }
> +
> +        if (!(params.policy & SNP_POLICY_MASK_RSVD_MBO)) {
> +                pr_debug("SEV-SNP hypervisor does not support requested policy %llx (must be set %llx).\n",
> +                         params.policy, SNP_POLICY_MASK_RSVD_MBO);
> +                return -EINVAL;
> +        }
> +
> +        if (params.policy & SNP_POLICY_MASK_SINGLE_SOCKET) {
> +                pr_debug("SEV-SNP hypervisor does not support limiting guests to a single socket.\n");
> +                return -EINVAL;
> +        }
> +
> +        if (!(params.policy & SNP_POLICY_MASK_SMT)) {
> +                pr_debug("SEV-SNP hypervisor does not support limiting guests to a single SMT thread.\n");
> +                return -EINVAL;
> +        }
> +
> +        major = (params.policy & SNP_POLICY_MASK_API_MAJOR);
> +        minor = (params.policy & SNP_POLICY_MASK_API_MINOR);
> +        if (!sev_version_greater_or_equal(major, minor)) {

Why does this need a somewhat weirdly named helper?  Isn't this just:

        if (major < SNP_POLICY_API_MAJOR ||
            (major == SNP_POLICY_API_MAJOR && minor < SNP_POLICY_API_MINOR))

> +                pr_debug("SEV-SNP hypervisor does not support requested version %d.%d (have %d,%d).\n",
> +                         major, minor, SNP_POLICY_API_MAJOR, SNP_POLICY_API_MINOR);
> +                return -EINVAL;
> +        }
> +
> +        start.gctx_paddr = __psp_pa(sev->snp_context);
> +        start.policy = params.policy;
> +        memcpy(start.gosvw, params.gosvw, sizeof(params.gosvw));
> +        rc = __sev_issue_cmd(argp->sev_fd, SEV_CMD_SNP_LAUNCH_START, &start, &argp->error);
> +        if (rc) {
> +                pr_debug("SEV_CMD_SNP_LAUNCH_START firmware command failed, rc %d\n", rc);
> +                goto e_free_context;
> +        }
> +
> +        sev->fd = argp->sev_fd;
> +        rc = snp_bind_asid(kvm, &argp->error);
> +        if (rc) {
> +                pr_debug("Failed to bind ASID to SEV-SNP context, rc %d\n", rc);
> +                goto e_free_context;
> +        }
> +
> +        return 0;
> +
> +e_free_context:
> +        snp_decommission_context(kvm);
> +
> +        return rc;
> +}
> +
>  int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
>  {
>          struct kvm_sev_cmd sev_cmd;
> @@ -1999,6 +2154,15 @@ int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
>                  goto out;
>          }
>
> +        /*
> +         * Once KVM_SEV_INIT2 initializes a KVM instance as an SNP guest, only
> +         * allow the use of SNP-specific commands.
> +         */
> +        if (sev_snp_guest(kvm) && sev_cmd.id < KVM_SEV_SNP_LAUNCH_START) {
> +                r = -EPERM;
> +                goto out;
> +        }
> +
>          switch (sev_cmd.id) {
>          case KVM_SEV_ES_INIT:
>                  if (!sev_es_enabled) {
> @@ -2063,6 +2227,9 @@ int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
>          case KVM_SEV_RECEIVE_FINISH:
>                  r = sev_receive_finish(kvm, &sev_cmd);
>                  break;
> +        case KVM_SEV_SNP_LAUNCH_START:
> +                r = snp_launch_start(kvm, &sev_cmd);
> +                break;
>          default:
>                  r = -EINVAL;
>                  goto out;
> @@ -2258,6 +2425,33 @@ int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd)
>          return ret;
>  }
>
> +static int snp_decommission_context(struct kvm *kvm)
> +{
> +        struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +        struct sev_data_snp_addr data = {};
> +        int ret;
> +
> +        /* If context is not created then do nothing */
> +        if (!sev->snp_context)
> +                return 0;
> +
> +        data.address = __sme_pa(sev->snp_context);
> +        down_write(&sev_deactivate_lock);
> +        ret = sev_do_cmd(SEV_CMD_SNP_DECOMMISSION, &data, NULL);
> +        if (WARN_ONCE(ret, "failed to release guest context")) {

WARN here, or WARN in the caller, not both.  And if you warn here, this can be

        down_write(&sev_deactivate_lock);
        ret = sev_do_cmd(SEV_CMD_SNP_DECOMMISSION, &data, NULL);
        up_write(&sev_deactivate_lock);

        if (WARN_ONCE(ret, "..."))

> +                up_write(&sev_deactivate_lock);
> +                return ret;
> +        }
> +
> +        up_write(&sev_deactivate_lock);
> +
> +        /* free the context page now */

This doesn't seem like a particularly useful comment.  What would be useful is
a comment explaining that "decommission" unbinds the ASID.

> +        snp_free_firmware_page(sev->snp_context);
> +        sev->snp_context = NULL;
> +
> +        return 0;
> +}
> +
>  void sev_vm_destroy(struct kvm *kvm)
>  {
>          struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> @@ -2299,7 +2493,15 @@ void sev_vm_destroy(struct kvm *kvm)
>                  }
>          }
>
> -        sev_unbind_asid(kvm, sev->handle);
> +        if (sev_snp_guest(kvm)) {
> +                if (snp_decommission_context(kvm)) {
> +                        WARN_ONCE(1, "Failed to free SNP guest context, leaking asid!\n");

WARN on the actual failure, not '1'.  And a newline isn't needed.

        if (WARN_ONCE(snp_decommission_context(kvm),
                      "Failed to free SNP guest context, leaking asid!"))
                return;

> +                        return;
> +                }
> +        } else {
> +                sev_unbind_asid(kvm, sev->handle);
> +        }
> +
>          sev_asid_free(sev);
>  }
>
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 7f2e9c7fc4ca..0654fc91d4db 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -92,6 +92,7 @@ struct kvm_sev_info {
>          struct list_head mirror_entry; /* Use as a list entry of mirrors */
>          struct misc_cg *misc_cg; /* For misc cgroup accounting */
>          atomic_t migration_in_progress;
> +        void *snp_context;      /* SNP guest context page */
>  };
>
>  struct kvm_svm {
> --
> 2.25.1
>
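Pulling the review comments above together, a reworked snp_decommission_context() along those lines could read as follows; this is a sketch only, assuming the WARN stays in the caller as suggested:

  /* Sketch incorporating the feedback above: hold the lock only around the
   * firmware command, return the error for the caller to WARN on, and note
   * that DECOMMISSION is what unbinds the ASID from the guest context. */
  static int snp_decommission_context(struct kvm *kvm)
  {
          struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
          struct sev_data_snp_addr data = {};
          int ret;

          /* If a context was never created there is nothing to tear down. */
          if (!sev->snp_context)
                  return 0;

          /* SNP_DECOMMISSION also unbinds the ASID from the guest context. */
          data.address = __sme_pa(sev->snp_context);
          down_write(&sev_deactivate_lock);
          ret = sev_do_cmd(SEV_CMD_SNP_DECOMMISSION, &data, NULL);
          up_write(&sev_deactivate_lock);
          if (ret)
                  return ret;

          snp_free_firmware_page(sev->snp_context);
          sev->snp_context = NULL;

          return 0;
  }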
diff --git a/Documentation/virt/kvm/x86/amd-memory-encryption.rst b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
index 3381556d596d..1b042f827eab 100644
--- a/Documentation/virt/kvm/x86/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
@@ -459,6 +459,25 @@ issued by the hypervisor to make the guest ready for execution.
 
 Returns: 0 on success, -negative on error
 
+18. KVM_SEV_SNP_LAUNCH_START
+----------------------------
+
+The KVM_SNP_LAUNCH_START command is used for creating the memory encryption
+context for the SEV-SNP guest.
+
+Parameters (in): struct kvm_sev_snp_launch_start
+
+Returns: 0 on success, -negative on error
+
+::
+
+        struct kvm_sev_snp_launch_start {
+                __u64 policy;        /* Guest policy to use. */
+                __u8 gosvw[16];      /* Guest OS visible workarounds. */
+        };
+
+See the SEV-SNP spec [snp-fw-abi]_ for further detail on the launch input.
+
 Device attribute API
 ====================
 
@@ -490,9 +509,11 @@ References
 ==========
 
-See [white-paper]_, [api-spec]_, [amd-apm]_ and [kvm-forum]_ for more info.
+See [white-paper]_, [api-spec]_, [amd-apm]_, [kvm-forum]_, and [snp-fw-abi]_
+for more info.
 
 .. [white-paper] https://developer.amd.com/wordpress/media/2013/12/AMD_Memory_Encryption_Whitepaper_v7-Public.pdf
 .. [api-spec] https://support.amd.com/TechDocs/55766_SEV-KM_API_Specification.pdf
 .. [amd-apm] https://support.amd.com/TechDocs/24593.pdf (section 15.34)
 .. [kvm-forum] https://www.linux-kvm.org/images/7/74/02x08A-Thomas_Lendacky-AMDs_Virtualizatoin_Memory_Encryption_Technology.pdf
+.. [snp-fw-abi] https://www.amd.com/system/files/TechDocs/56860.pdf
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 9a8b81d20314..bdf8c5461a36 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -697,6 +697,9 @@ enum sev_cmd_id {
         /* Second time is the charm; improved versions of the above ioctls.  */
         KVM_SEV_INIT2,
 
+        /* SNP-specific commands */
+        KVM_SEV_SNP_LAUNCH_START = 100,
+
         KVM_SEV_NR_MAX,
 };
 
@@ -822,6 +825,11 @@ struct kvm_sev_receive_update_data {
         __u32 pad2;
 };
 
+struct kvm_sev_snp_launch_start {
+        __u64 policy;
+        __u8 gosvw[16];
+};
+
 #define KVM_X2APIC_API_USE_32BIT_IDS        (1ULL << 0)
 #define KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK  (1ULL << 1)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index c41cc73a1efe..4c5abc0e7806 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -25,6 +25,7 @@
 #include <asm/fpu/xcr.h>
 #include <asm/fpu/xstate.h>
 #include <asm/debugreg.h>
+#include <asm/sev.h>
 
 #include "mmu.h"
 #include "x86.h"
@@ -58,6 +59,25 @@ static u64 sev_supported_vmsa_features;
 #define AP_RESET_HOLD_NAE_EVENT                1
 #define AP_RESET_HOLD_MSR_PROTO                2
 
+/* As defined by SEV-SNP Firmware ABI, under "Guest Policy". */
+#define SNP_POLICY_MASK_SMT            BIT_ULL(16)
+#define SNP_POLICY_MASK_RSVD_MBO       BIT_ULL(17)
+#define SNP_POLICY_MASK_DEBUG          BIT_ULL(19)
+#define SNP_POLICY_MASK_SINGLE_SOCKET  BIT_ULL(20)
+#define SNP_POLICY_MASK_API_MAJOR      GENMASK_ULL(15, 8)
+#define SNP_POLICY_MASK_API_MINOR      GENMASK_ULL(7, 0)
+
+#define SNP_POLICY_MASK_VALID          (SNP_POLICY_MASK_SMT           | \
+                                        SNP_POLICY_MASK_RSVD_MBO      | \
+                                        SNP_POLICY_MASK_DEBUG         | \
+                                        SNP_POLICY_MASK_SINGLE_SOCKET | \
+                                        SNP_POLICY_MASK_API_MAJOR     | \
+                                        SNP_POLICY_MASK_API_MINOR)
+
+/* KVM's SNP support is compatible with 1.51 of the SEV-SNP Firmware ABI. */
+#define SNP_POLICY_API_MAJOR   1
+#define SNP_POLICY_API_MINOR   51
+
 static u8 sev_enc_bit;
 static DECLARE_RWSEM(sev_deactivate_lock);
 static DEFINE_MUTEX(sev_bitmap_lock);
@@ -68,6 +88,8 @@ static unsigned int nr_asids;
 static unsigned long *sev_asid_bitmap;
 static unsigned long *sev_reclaim_asid_bitmap;
 
+static int snp_decommission_context(struct kvm *kvm);
+
 struct enc_region {
         struct list_head list;
         unsigned long npages;
@@ -94,12 +116,17 @@ static int sev_flush_asids(unsigned int min_asid, unsigned int max_asid)
         down_write(&sev_deactivate_lock);
 
         wbinvd_on_all_cpus();
-        ret = sev_guest_df_flush(&error);
+
+        if (sev_snp_enabled)
+                ret = sev_do_cmd(SEV_CMD_SNP_DF_FLUSH, NULL, &error);
+        else
+                ret = sev_guest_df_flush(&error);
 
         up_write(&sev_deactivate_lock);
 
         if (ret)
-                pr_err("SEV: DF_FLUSH failed, ret=%d, error=%#x\n", ret, error);
+                pr_err("SEV%s: DF_FLUSH failed, ret=%d, error=%#x\n",
+                       sev_snp_enabled ? "-SNP" : "", ret, error);
 
         return ret;
 }
@@ -1976,6 +2003,134 @@ int sev_dev_get_attr(u32 group, u64 attr, u64 *val)
         }
 }
 
+/*
+ * The guest context contains all the information, keys and metadata
+ * associated with the guest that the firmware tracks to implement SEV
+ * and SNP features. The firmware stores the guest context in hypervisor
+ * provide page via the SNP_GCTX_CREATE command.
+ */
+static void *snp_context_create(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+        struct sev_data_snp_addr data = {};
+        void *context;
+        int rc;
+
+        /* Allocate memory for context page */
+        context = snp_alloc_firmware_page(GFP_KERNEL_ACCOUNT);
+        if (!context)
+                return NULL;
+
+        data.address = __psp_pa(context);
+        rc = __sev_issue_cmd(argp->sev_fd, SEV_CMD_SNP_GCTX_CREATE, &data, &argp->error);
+        if (rc) {
+                pr_warn("Failed to create SEV-SNP context, rc %d fw_error %d",
+                        rc, argp->error);
+                snp_free_firmware_page(context);
+                return NULL;
+        }
+
+        return context;
+}
+
+static int snp_bind_asid(struct kvm *kvm, int *error)
+{
+        struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+        struct sev_data_snp_activate data = {0};
+
+        data.gctx_paddr = __psp_pa(sev->snp_context);
+        data.asid = sev_get_asid(kvm);
+        return sev_issue_cmd(kvm, SEV_CMD_SNP_ACTIVATE, &data, error);
+}
+
+static inline bool sev_version_greater_or_equal(u8 major, u8 minor)
+{
+        if (major < SNP_POLICY_API_MAJOR)
+                return true;
+
+        if (major == SNP_POLICY_API_MAJOR && minor <= SNP_POLICY_API_MINOR)
+                return true;
+
+        return false;
+}
+
+static int snp_launch_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+        struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+        struct sev_data_snp_launch_start start = {0};
+        struct kvm_sev_snp_launch_start params;
+        u8 major, minor;
+        int rc;
+
+        if (!sev_snp_guest(kvm))
+                return -ENOTTY;
+
+        if (copy_from_user(&params, u64_to_user_ptr(argp->data), sizeof(params)))
+                return -EFAULT;
+
+        /* Don't allow userspace to allocate memory for more than 1 SNP context. */
+        if (sev->snp_context) {
+                pr_debug("SEV-SNP context already exists. Refusing to allocate an additional one.\n");
+                return -EINVAL;
+        }
+
+        sev->snp_context = snp_context_create(kvm, argp);
+        if (!sev->snp_context)
+                return -ENOTTY;
+
+        if (params.policy & ~SNP_POLICY_MASK_VALID) {
+                pr_debug("SEV-SNP hypervisor does not support requested policy %llx (supported %llx).\n",
+                         params.policy, SNP_POLICY_MASK_VALID);
+                return -EINVAL;
+        }
+
+        if (!(params.policy & SNP_POLICY_MASK_RSVD_MBO)) {
+                pr_debug("SEV-SNP hypervisor does not support requested policy %llx (must be set %llx).\n",
+                         params.policy, SNP_POLICY_MASK_RSVD_MBO);
+                return -EINVAL;
+        }
+
+        if (params.policy & SNP_POLICY_MASK_SINGLE_SOCKET) {
+                pr_debug("SEV-SNP hypervisor does not support limiting guests to a single socket.\n");
+                return -EINVAL;
+        }
+
+        if (!(params.policy & SNP_POLICY_MASK_SMT)) {
+                pr_debug("SEV-SNP hypervisor does not support limiting guests to a single SMT thread.\n");
+                return -EINVAL;
+        }
+
+        major = (params.policy & SNP_POLICY_MASK_API_MAJOR);
+        minor = (params.policy & SNP_POLICY_MASK_API_MINOR);
+        if (!sev_version_greater_or_equal(major, minor)) {
+                pr_debug("SEV-SNP hypervisor does not support requested version %d.%d (have %d,%d).\n",
+                         major, minor, SNP_POLICY_API_MAJOR, SNP_POLICY_API_MINOR);
+                return -EINVAL;
+        }
+
+        start.gctx_paddr = __psp_pa(sev->snp_context);
+        start.policy = params.policy;
+        memcpy(start.gosvw, params.gosvw, sizeof(params.gosvw));
+        rc = __sev_issue_cmd(argp->sev_fd, SEV_CMD_SNP_LAUNCH_START, &start, &argp->error);
+        if (rc) {
+                pr_debug("SEV_CMD_SNP_LAUNCH_START firmware command failed, rc %d\n", rc);
+                goto e_free_context;
+        }
+
+        sev->fd = argp->sev_fd;
+        rc = snp_bind_asid(kvm, &argp->error);
+        if (rc) {
+                pr_debug("Failed to bind ASID to SEV-SNP context, rc %d\n", rc);
+                goto e_free_context;
+        }
+
+        return 0;
+
+e_free_context:
+        snp_decommission_context(kvm);
+
+        return rc;
+}
+
 int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 {
         struct kvm_sev_cmd sev_cmd;
@@ -1999,6 +2154,15 @@ int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
                 goto out;
         }
 
+        /*
+         * Once KVM_SEV_INIT2 initializes a KVM instance as an SNP guest, only
+         * allow the use of SNP-specific commands.
+         */
+        if (sev_snp_guest(kvm) && sev_cmd.id < KVM_SEV_SNP_LAUNCH_START) {
+                r = -EPERM;
+                goto out;
+        }
+
         switch (sev_cmd.id) {
         case KVM_SEV_ES_INIT:
                 if (!sev_es_enabled) {
@@ -2063,6 +2227,9 @@ int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
         case KVM_SEV_RECEIVE_FINISH:
                 r = sev_receive_finish(kvm, &sev_cmd);
                 break;
+        case KVM_SEV_SNP_LAUNCH_START:
+                r = snp_launch_start(kvm, &sev_cmd);
+                break;
         default:
                 r = -EINVAL;
                 goto out;
@@ -2258,6 +2425,33 @@ int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd)
         return ret;
 }
 
+static int snp_decommission_context(struct kvm *kvm)
+{
+        struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+        struct sev_data_snp_addr data = {};
+        int ret;
+
+        /* If context is not created then do nothing */
+        if (!sev->snp_context)
+                return 0;
+
+        data.address = __sme_pa(sev->snp_context);
+        down_write(&sev_deactivate_lock);
+        ret = sev_do_cmd(SEV_CMD_SNP_DECOMMISSION, &data, NULL);
+        if (WARN_ONCE(ret, "failed to release guest context")) {
+                up_write(&sev_deactivate_lock);
+                return ret;
+        }
+
+        up_write(&sev_deactivate_lock);
+
+        /* free the context page now */
+        snp_free_firmware_page(sev->snp_context);
+        sev->snp_context = NULL;
+
+        return 0;
+}
+
 void sev_vm_destroy(struct kvm *kvm)
 {
         struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
@@ -2299,7 +2493,15 @@ void sev_vm_destroy(struct kvm *kvm)
                 }
         }
 
-        sev_unbind_asid(kvm, sev->handle);
+        if (sev_snp_guest(kvm)) {
+                if (snp_decommission_context(kvm)) {
+                        WARN_ONCE(1, "Failed to free SNP guest context, leaking asid!\n");
+                        return;
+                }
+        } else {
+                sev_unbind_asid(kvm, sev->handle);
+        }
+
         sev_asid_free(sev);
 }
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 7f2e9c7fc4ca..0654fc91d4db 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -92,6 +92,7 @@ struct kvm_sev_info {
         struct list_head mirror_entry; /* Use as a list entry of mirrors */
         struct misc_cg *misc_cg; /* For misc cgroup accounting */
         atomic_t migration_in_progress;
+        void *snp_context;      /* SNP guest context page */
 };
 
 struct kvm_svm {
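For context on how the new command is reached from userspace: like the other KVM_SEV_* commands, it goes through the existing KVM_MEMORY_ENCRYPT_OP ioctl on the VM fd. The sketch below is illustrative only (error handling elided; it assumes headers with this patch applied and a VM already initialized as an SNP guest via KVM_SEV_INIT2):

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  /* Sketch: issue KVM_SEV_SNP_LAUNCH_START on a VM fd. Per the checks added
   * in this patch, 'policy' must have the reserved must-be-one bit (bit 17)
   * and the SMT bit (bit 16) set, and must not set the single-socket bit. */
  static int do_snp_launch_start(int vm_fd, int sev_fd, __u64 policy)
  {
          struct kvm_sev_snp_launch_start ls = { .policy = policy };
          struct kvm_sev_cmd cmd = {
                  .id     = KVM_SEV_SNP_LAUNCH_START,
                  .data   = (__u64)(unsigned long)&ls,
                  .sev_fd = sev_fd,
          };

          return ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
  }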