| Message ID | 20210114003708.3798992-13-seanjc@google.com (mailing list archive) |
| --- | --- |
| State | New, archived |
| Series | KVM: SVM: Misc SEV cleanups |
On 1/13/21 6:37 PM, Sean Christopherson wrote:
> Replace calls to svm_sev_enabled() with direct checks on sev_enabled, or
> in the case of svm_mem_enc_op, simply drop the call to svm_sev_enabled().
> This effectively replaces checks against a valid max_sev_asid with checks
> against sev_enabled.  sev_enabled is forced off by sev_hardware_setup()
> if max_sev_asid is invalid, all call sites are guaranteed to run after
> sev_hardware_setup(), and all of the checks care about SEV being fully
> enabled (as opposed to intentionally handling the scenario where
> max_sev_asid is valid but SEV enabling fails due to OOM).
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/svm/sev.c | 6 +++---
>  arch/x86/kvm/svm/svm.h | 5 -----
>  2 files changed, 3 insertions(+), 8 deletions(-)

Thanks

Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
On 1/13/21 6:37 PM, Sean Christopherson wrote:
> Replace calls to svm_sev_enabled() with direct checks on sev_enabled, or
> in the case of svm_mem_enc_op, simply drop the call to svm_sev_enabled().
> This effectively replaces checks against a valid max_sev_asid with checks
> against sev_enabled.  sev_enabled is forced off by sev_hardware_setup()
> if max_sev_asid is invalid, all call sites are guaranteed to run after
> sev_hardware_setup(), and all of the checks care about SEV being fully
> enabled (as opposed to intentionally handling the scenario where
> max_sev_asid is valid but SEV enabling fails due to OOM).
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Ultimately, the #ifdef CONFIG_KVM_AMD_SEV block that you added, which
#defines sev_enabled and sev_es_enabled to false, resolves the build issue
that svm_sev_enabled() was originally created for: kvm_amd built into the
kernel with ccp built as a module.

Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
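For reference, the CONFIG_KVM_AMD_SEV guard being discussed looks roughly like the sketch below. It is illustrative only: the exact hunk lives in an earlier patch of this series, and the module parameter names and placement here are an assumption. The point it shows is that sev_enabled and sev_es_enabled compile down to the constant false when CONFIG_KVM_AMD_SEV=n, so the checks replacing svm_sev_enabled() never reference ccp symbols in that configuration.

/*
 * Sketch of the CONFIG_KVM_AMD_SEV guard referenced above; details are
 * paraphrased, not a verbatim copy of arch/x86/kvm/svm/sev.c.
 */
#ifdef CONFIG_KVM_AMD_SEV
/* enable/disable SEV support */
static bool sev_enabled = true;
module_param_named(sev, sev_enabled, bool, 0444);

/* enable/disable SEV-ES support */
static bool sev_es_enabled = true;
module_param_named(sev_es, sev_es_enabled, bool, 0444);
#else
#define sev_enabled false
#define sev_es_enabled false
#endif /* CONFIG_KVM_AMD_SEV */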
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index a2c3e2d42a7f..7e14514dd083 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1057,7 +1057,7 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	struct kvm_sev_cmd sev_cmd;
 	int r;
 
-	if (!svm_sev_enabled() || !sev_enabled)
+	if (!sev_enabled)
 		return -ENOTTY;
 
 	if (!argp)
@@ -1321,7 +1321,7 @@ void __init sev_hardware_setup(void)
 
 void sev_hardware_teardown(void)
 {
-	if (!svm_sev_enabled())
+	if (!sev_enabled)
 		return;
 
 	bitmap_free(sev_asid_bitmap);
@@ -1332,7 +1332,7 @@ void sev_hardware_teardown(void)
 
 int sev_cpu_init(struct svm_cpu_data *sd)
 {
-	if (!svm_sev_enabled())
+	if (!sev_enabled)
 		return 0;
 
 	sd->sev_vmcbs = kmalloc_array(max_sev_asid + 1, sizeof(void *),
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 4eb4bab0ca3e..8cb4395b58a0 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -569,11 +569,6 @@ void svm_vcpu_unblocking(struct kvm_vcpu *vcpu);
 
 extern unsigned int max_sev_asid;
 
-static inline bool svm_sev_enabled(void)
-{
-	return IS_ENABLED(CONFIG_KVM_AMD_SEV) ? max_sev_asid : 0;
-}
-
 void sev_vm_destroy(struct kvm *kvm);
 int svm_mem_enc_op(struct kvm *kvm, void __user *argp);
 int svm_register_enc_region(struct kvm *kvm,
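To illustrate the user-visible effect of the svm_mem_enc_op() hunk above: with SEV disabled, KVM_MEMORY_ENCRYPT_OP fails with ENOTTY. Below is a minimal userspace sketch, with error handling trimmed; the /dev/sev path and KVM_SEV_INIT command are the standard ones, but treat this as illustrative rather than a verbatim test.

/* Minimal sketch: KVM_MEMORY_ENCRYPT_OP returns ENOTTY when SEV is disabled. */
#include <fcntl.h>
#include <errno.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm  = ioctl(kvm, KVM_CREATE_VM, 0);

	struct kvm_sev_cmd cmd = {
		.id     = KVM_SEV_INIT,                /* first step of SEV setup */
		.sev_fd = open("/dev/sev", O_RDWR),    /* ccp's SEV device */
	};

	/* Handled by svm_mem_enc_op(); fails before touching sev_fd if !sev_enabled. */
	if (ioctl(vm, KVM_MEMORY_ENCRYPT_OP, &cmd) < 0 && errno == ENOTTY)
		printf("SEV is not enabled in this kernel/KVM build\n");

	return 0;
}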
Replace calls to svm_sev_enabled() with direct checks on sev_enabled, or
in the case of svm_mem_enc_op, simply drop the call to svm_sev_enabled().
This effectively replaces checks against a valid max_sev_asid with checks
against sev_enabled.  sev_enabled is forced off by sev_hardware_setup()
if max_sev_asid is invalid, all call sites are guaranteed to run after
sev_hardware_setup(), and all of the checks care about SEV being fully
enabled (as opposed to intentionally handling the scenario where
max_sev_asid is valid but SEV enabling fails due to OOM).

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/svm/sev.c | 6 +++---
 arch/x86/kvm/svm/svm.h | 5 -----
 2 files changed, 3 insertions(+), 8 deletions(-)
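The commit message leans on sev_hardware_setup() forcing sev_enabled off whenever max_sev_asid is invalid, which is why a bare sev_enabled check is sufficient at every call site. A simplified paraphrase of that setup-time invariant is sketched below; it is not quoted from this patch, and the real function does considerably more (ASID bitmap allocation, SEV-ES probing, and so on).

/*
 * Simplified sketch of the invariant the commit message relies on: if SEV
 * cannot actually be used, sev_enabled is cleared before any caller checks it.
 */
void __init sev_hardware_setup(void)
{
	if (!sev_enabled || !npt_enabled)
		goto out;

	/* Maximum SEV ASID supported by hardware (CPUID 0x8000001F.ECX). */
	max_sev_asid = cpuid_ecx(0x8000001f);
	if (!max_sev_asid)
		goto out;	/* no SEV ASIDs, SEV cannot be enabled */

	/* ... allocate ASID bitmaps, probe SEV-ES, etc. ... */
	return;

out:
	sev_enabled = false;	/* later checks on sev_enabled alone suffice */
	sev_es_enabled = false;
}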