Message ID | 20210914154825.104886-8-mlevitsk@redhat.com (mailing list archive)
---|---
State      | New
Series     | nSVM fixes and optional features
On 14/09/21 17:48, Maxim Levitsky wrote:
> Just in case, add a warning ensuring that on guest entry,
> either both VMLOAD and VMSAVE intercept is enabled or
> vVMLOAD/VMSAVE is enabled.
>
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  arch/x86/kvm/svm/svm.c | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 861ac9f74331..deeebd05f682 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -3784,6 +3784,12 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
>
>  	WARN_ON_ONCE(kvm_apicv_activated(vcpu->kvm) != kvm_vcpu_apicv_active(vcpu));
>
> +	/* Check that CVE-2021-3656 can't happen again */
> +	if (!svm_is_intercept(svm, INTERCEPT_VMSAVE) ||
> +	    !svm_is_intercept(svm, INTERCEPT_VMSAVE))
> +		WARN_ON(!(svm->vmcb->control.virt_ext &
> +			  VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK));
> +
>  	sync_lapic_to_cr8(vcpu);
>
>  	if (unlikely(svm->asid != svm->vmcb->control.asid)) {
>

While it's nice to be "proactive", this does add some extra work. Maybe it
should be under CONFIG_DEBUG_KERNEL. It could be useful to make it into its
own function so we can add similar intercept invariants in the same place.

Paolo
On 9/14/2021 11:48 PM, Maxim Levitsky wrote:
> Just in case, add a warning ensuring that on guest entry,
> either both VMLOAD and VMSAVE intercept is enabled or
> vVMLOAD/VMSAVE is enabled.
>
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  arch/x86/kvm/svm/svm.c | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 861ac9f74331..deeebd05f682 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -3784,6 +3784,12 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
>
>  	WARN_ON_ONCE(kvm_apicv_activated(vcpu->kvm) != kvm_vcpu_apicv_active(vcpu));
>
> +	/* Check that CVE-2021-3656 can't happen again */
> +	if (!svm_is_intercept(svm, INTERCEPT_VMSAVE) ||
> +	    !svm_is_intercept(svm, INTERCEPT_VMSAVE))

either one needs to be INTERCEPT_VMLOAD, right?

> +	WARN_ON(!(svm->vmcb->control.virt_ext &
> +		  VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK));
> +
>  	sync_lapic_to_cr8(vcpu);
>
>  	if (unlikely(svm->asid != svm->vmcb->control.asid)) {
>
On Thu, Sep 23, 2021, Paolo Bonzini wrote:
> On 14/09/21 17:48, Maxim Levitsky wrote:
> > Just in case, add a warning ensuring that on guest entry,
> > either both VMLOAD and VMSAVE intercept is enabled or
> > vVMLOAD/VMSAVE is enabled.
> >
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> > ---
> >  arch/x86/kvm/svm/svm.c | 6 ++++++
> >  1 file changed, 6 insertions(+)
> >
> > diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> > index 861ac9f74331..deeebd05f682 100644
> > --- a/arch/x86/kvm/svm/svm.c
> > +++ b/arch/x86/kvm/svm/svm.c
> > @@ -3784,6 +3784,12 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
> >  	WARN_ON_ONCE(kvm_apicv_activated(vcpu->kvm) != kvm_vcpu_apicv_active(vcpu));
> > +	/* Check that CVE-2021-3656 can't happen again */
> > +	if (!svm_is_intercept(svm, INTERCEPT_VMSAVE) ||
> > +	    !svm_is_intercept(svm, INTERCEPT_VMSAVE))
> > +		WARN_ON(!(svm->vmcb->control.virt_ext &
> > +			  VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK));
> > +
> >  	sync_lapic_to_cr8(vcpu);
> >  	if (unlikely(svm->asid != svm->vmcb->control.asid)) {
> >
>
> While it's nice to be "proactive", this does add some extra work. Maybe it
> should be under CONFIG_DEBUG_KERNEL. It could be useful to make it into its
> own function so we can add similar intercept invariants in the same place.

I don't know that DEBUG_KERNEL will guard much; DEBUG_KERNEL=y is very common,
e.g. it's on by default in the x86 defconfigs.

I too agree it's nice to be proactive, but this isn't that different than,
say, failing to intercept CR3 loads when shadow paging is enabled. If we go
down the path of effectively auditing KVM invariants, I'd rather we commit
fully and (a) add a dedicated Kconfig that is highly unlikely to be turned on
by accident and (b) audit a large number of invariants.
On Tue, 2021-10-12 at 01:30 +0800, Xiaoyao Li wrote:
> On 9/14/2021 11:48 PM, Maxim Levitsky wrote:
> > Just in case, add a warning ensuring that on guest entry,
> > either both VMLOAD and VMSAVE intercept is enabled or
> > vVMLOAD/VMSAVE is enabled.
> >
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> > ---
> >  arch/x86/kvm/svm/svm.c | 6 ++++++
> >  1 file changed, 6 insertions(+)
> >
> > diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> > index 861ac9f74331..deeebd05f682 100644
> > --- a/arch/x86/kvm/svm/svm.c
> > +++ b/arch/x86/kvm/svm/svm.c
> > @@ -3784,6 +3784,12 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
> >
> >  	WARN_ON_ONCE(kvm_apicv_activated(vcpu->kvm) != kvm_vcpu_apicv_active(vcpu));
> >
> > +	/* Check that CVE-2021-3656 can't happen again */
> > +	if (!svm_is_intercept(svm, INTERCEPT_VMSAVE) ||
> > +	    !svm_is_intercept(svm, INTERCEPT_VMSAVE))
>
> either one needs to be INTERCEPT_VMLOAD, right?

Oops! Of course.

Best regards,
	Maxim Levitsky

> > +	WARN_ON(!(svm->vmcb->control.virt_ext &
> > +		  VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK));
> > +
> >  	sync_lapic_to_cr8(vcpu);
> >
> >  	if (unlikely(svm->asid != svm->vmcb->control.asid)) {
> >
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 861ac9f74331..deeebd05f682 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3784,6 +3784,12 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)

 	WARN_ON_ONCE(kvm_apicv_activated(vcpu->kvm) != kvm_vcpu_apicv_active(vcpu));

+	/* Check that CVE-2021-3656 can't happen again */
+	if (!svm_is_intercept(svm, INTERCEPT_VMSAVE) ||
+	    !svm_is_intercept(svm, INTERCEPT_VMSAVE))
+		WARN_ON(!(svm->vmcb->control.virt_ext &
+			  VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK));
+
 	sync_lapic_to_cr8(vcpu);

 	if (unlikely(svm->asid != svm->vmcb->control.asid)) {
Just in case, add a warning ensuring that on guest entry,
either both VMLOAD and VMSAVE intercept is enabled or
vVMLOAD/VMSAVE is enabled.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 6 ++++++
 1 file changed, 6 insertions(+)