Message ID | 20240507155817.3951344-5-pbonzini@redhat.com (mailing list archive)
---|---
State | New, archived
Series | KVM: x86/mmu: Page fault and MMIO cleanups
On 5/7/2024 11:58 PM, Paolo Bonzini wrote:
> From: Sean Christopherson <seanjc@google.com>
>
> Move the sanity check that hardware never sets bits that collide with KVM-
> define synthetic bits from kvm_mmu_page_fault() to npf_interception(),
> i.e. make the sanity check #NPF specific.  The legacy #PF path already
> WARNs if _any_ of bits 63:32 are set, and the error code that comes from
> VMX's EPT Violatation and Misconfig is 100% synthesized (KVM morphs VMX's
> EXIT_QUALIFICATION into error code flags).
>
> Add a compile-time assert in the legacy #PF handler to make sure that KVM-
> define flags are covered by its existing sanity check on the upper bits.
>
> Opportunistically add a description of PFERR_IMPLICIT_ACCESS, since we
> are removing the comment that defined it.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> Reviewed-by: Kai Huang <kai.huang@intel.com>
> Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com>
> Message-ID: <20240228024147.41573-8-seanjc@google.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  6 ++++++
>  arch/x86/kvm/mmu/mmu.c          | 14 +++-----------
>  arch/x86/kvm/svm/svm.c          |  9 +++++++++
>  3 files changed, 18 insertions(+), 11 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 58bbcf76ad1e..12e727301262 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -267,7 +267,13 @@ enum x86_intercept_stage;
>  #define PFERR_GUEST_ENC_MASK    BIT_ULL(34)
>  #define PFERR_GUEST_SIZEM_MASK  BIT_ULL(35)
>  #define PFERR_GUEST_VMPL_MASK   BIT_ULL(36)
> +
> +/*
> + * IMPLICIT_ACCESS is a KVM-defined flag used to correctly perform SMAP checks
> + * when emulating instructions that triggers implicit access.
> + */
>  #define PFERR_IMPLICIT_ACCESS   BIT_ULL(48)
> +#define PFERR_SYNTHETIC_MASK    (PFERR_IMPLICIT_ACCESS)
>
>  #define PFERR_NESTED_GUEST_PAGE (PFERR_GUEST_PAGE_MASK | \
>                                   PFERR_WRITE_MASK | \
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index c72a2033ca96..5562d693880a 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -4502,6 +4502,9 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
>  		return -EFAULT;
>  #endif
>
> +	/* Ensure the above sanity check also covers KVM-defined flags. */

1. There is no sanity check above related to KVM-defined flags yet. It has
to be after Patch 6.

2. I somehow cannot parse the comment properly, though I know it's to ensure
KVM-defined PFERR_SYNTHETIC_MASK not contain any bit below 32-bits.

> +	BUILD_BUG_ON(lower_32_bits(PFERR_SYNTHETIC_MASK));
> +
>  	vcpu->arch.l1tf_flush_l1d = true;
>  	if (!flags) {
>  		trace_kvm_page_fault(vcpu, fault_address, error_code);
> @@ -5786,17 +5789,6 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
>  	int r, emulation_type = EMULTYPE_PF;
>  	bool direct = vcpu->arch.mmu->root_role.direct;
>
> -	/*
> -	 * IMPLICIT_ACCESS is a KVM-defined flag used to correctly perform SMAP
> -	 * checks when emulating instructions that triggers implicit access.
> -	 * WARN if hardware generates a fault with an error code that collides
> -	 * with the KVM-defined value.  Clear the flag and continue on, i.e.
> -	 * don't terminate the VM, as KVM can't possibly be relying on a flag
> -	 * that KVM doesn't know about.
> -	 */
> -	if (WARN_ON_ONCE(error_code & PFERR_IMPLICIT_ACCESS))
> -		error_code &= ~PFERR_IMPLICIT_ACCESS;
> -
>  	if (WARN_ON_ONCE(!VALID_PAGE(vcpu->arch.mmu->root.hpa)))
>  		return RET_PF_RETRY;
>
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 0f3b59da0d4a..535018f152a3 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -2047,6 +2047,15 @@ static int npf_interception(struct kvm_vcpu *vcpu)
>  	u64 fault_address = svm->vmcb->control.exit_info_2;
>  	u64 error_code = svm->vmcb->control.exit_info_1;
>
> +	/*
> +	 * WARN if hardware generates a fault with an error code that collides
> +	 * with KVM-defined sythentic flags.  Clear the flags and continue on,
> +	 * i.e. don't terminate the VM, as KVM can't possibly be relying on a
> +	 * flag that KVM doesn't know about.
> +	 */
> +	if (WARN_ON_ONCE(error_code & PFERR_SYNTHETIC_MASK))
> +		error_code &= ~PFERR_SYNTHETIC_MASK;
> +
>  	trace_kvm_page_fault(vcpu, fault_address, error_code);
>  	return kvm_mmu_page_fault(vcpu, fault_address, error_code,
>  			static_cpu_has(X86_FEATURE_DECODEASSISTS) ?

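The review above centers on the new BUILD_BUG_ON(lower_32_bits(PFERR_SYNTHETIC_MASK)) assertion. As a minimal standalone sketch (not kernel code: BIT_ULL(), lower_32_bits() and a simplified BUILD_BUG_ON() are re-implemented here purely for illustration), the following shows how that construct turns "a synthetic flag strayed below bit 32" into a compile-time failure:

```c
/*
 * Standalone illustration, NOT kernel code: the kernel macros are
 * re-implemented in simplified form so the check can be seen in isolation.
 */
#include <stdint.h>
#include <stdio.h>

#define BIT_ULL(n)         (1ULL << (n))
#define lower_32_bits(x)   ((uint32_t)((x) & 0xffffffffULL))

/* Breaks the build (negative-sized array) if 'cond' is a nonzero constant. */
#define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

#define PFERR_IMPLICIT_ACCESS  BIT_ULL(48)
#define PFERR_SYNTHETIC_MASK   (PFERR_IMPLICIT_ACCESS)

int main(void)
{
	/*
	 * Compiles only while every synthetic flag lives in bits 63:32.
	 * Redefining PFERR_IMPLICIT_ACCESS as, say, BIT_ULL(15) would make
	 * lower_32_bits() nonzero and turn this into a compile error.
	 */
	BUILD_BUG_ON(lower_32_bits(PFERR_SYNTHETIC_MASK));

	printf("PFERR_SYNTHETIC_MASK = 0x%llx\n",
	       (unsigned long long)PFERR_SYNTHETIC_MASK);
	return 0;
}
```
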
On Mon, May 13, 2024, Xiaoyao Li wrote:
> On 5/7/2024 11:58 PM, Paolo Bonzini wrote:
> > From: Sean Christopherson <seanjc@google.com>
> >
> > Move the sanity check that hardware never sets bits that collide with KVM-
> > define synthetic bits from kvm_mmu_page_fault() to npf_interception(),
> > i.e. make the sanity check #NPF specific.  The legacy #PF path already
> > WARNs if _any_ of bits 63:32 are set, and the error code that comes from
> > VMX's EPT Violatation and Misconfig is 100% synthesized (KVM morphs VMX's
> > EXIT_QUALIFICATION into error code flags).
> >
> > Add a compile-time assert in the legacy #PF handler to make sure that KVM-
> > define flags are covered by its existing sanity check on the upper bits.
> >
> > Opportunistically add a description of PFERR_IMPLICIT_ACCESS, since we
> > are removing the comment that defined it.
> >
> > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > Reviewed-by: Kai Huang <kai.huang@intel.com>
> > Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com>
> > Message-ID: <20240228024147.41573-8-seanjc@google.com>
> > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > ---
> >   arch/x86/include/asm/kvm_host.h |  6 ++++++
> >   arch/x86/kvm/mmu/mmu.c          | 14 +++-----------
> >   arch/x86/kvm/svm/svm.c          |  9 +++++++++
> >   3 files changed, 18 insertions(+), 11 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index 58bbcf76ad1e..12e727301262 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -267,7 +267,13 @@ enum x86_intercept_stage;
> >   #define PFERR_GUEST_ENC_MASK    BIT_ULL(34)
> >   #define PFERR_GUEST_SIZEM_MASK  BIT_ULL(35)
> >   #define PFERR_GUEST_VMPL_MASK   BIT_ULL(36)
> > +
> > +/*
> > + * IMPLICIT_ACCESS is a KVM-defined flag used to correctly perform SMAP checks
> > + * when emulating instructions that triggers implicit access.
> > + */
> >   #define PFERR_IMPLICIT_ACCESS   BIT_ULL(48)
> > +#define PFERR_SYNTHETIC_MASK    (PFERR_IMPLICIT_ACCESS)
> >   #define PFERR_NESTED_GUEST_PAGE (PFERR_GUEST_PAGE_MASK | \
> >                                    PFERR_WRITE_MASK | \
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index c72a2033ca96..5562d693880a 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -4502,6 +4502,9 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
> >   		return -EFAULT;
> >   #endif
> > +	/* Ensure the above sanity check also covers KVM-defined flags. */
>
> 1. There is no sanity check above related to KVM-defined flags yet. It has
> to be after Patch 6.

Ya, it's not just the comment, the entire changelog expects this patch to land
after patch 6.

>
> 2. I somehow cannot parse the comment properly, though I know it's to ensure
> KVM-defined PFERR_SYNTHETIC_MASK not contain any bit below 32-bits.

Hmm, how about this?

	/*
	 * Ensure that the above sanity check on hardware error code bits 63:32
	 * also prevents false positives on KVM-defined flags.
	 */

On 5/14/2024 1:31 AM, Sean Christopherson wrote:
> On Mon, May 13, 2024, Xiaoyao Li wrote:
>> On 5/7/2024 11:58 PM, Paolo Bonzini wrote:
>>> From: Sean Christopherson <seanjc@google.com>
>>>
>>> Move the sanity check that hardware never sets bits that collide with KVM-
>>> define synthetic bits from kvm_mmu_page_fault() to npf_interception(),
>>> i.e. make the sanity check #NPF specific.  The legacy #PF path already
>>> WARNs if _any_ of bits 63:32 are set, and the error code that comes from
>>> VMX's EPT Violatation and Misconfig is 100% synthesized (KVM morphs VMX's
>>> EXIT_QUALIFICATION into error code flags).
>>>
>>> Add a compile-time assert in the legacy #PF handler to make sure that KVM-
>>> define flags are covered by its existing sanity check on the upper bits.
>>>
>>> Opportunistically add a description of PFERR_IMPLICIT_ACCESS, since we
>>> are removing the comment that defined it.
>>>
>>> Signed-off-by: Sean Christopherson <seanjc@google.com>
>>> Reviewed-by: Kai Huang <kai.huang@intel.com>
>>> Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com>
>>> Message-ID: <20240228024147.41573-8-seanjc@google.com>
>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>>> ---
>>>    arch/x86/include/asm/kvm_host.h |  6 ++++++
>>>    arch/x86/kvm/mmu/mmu.c          | 14 +++-----------
>>>    arch/x86/kvm/svm/svm.c          |  9 +++++++++
>>>    3 files changed, 18 insertions(+), 11 deletions(-)
>>>
>>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>>> index 58bbcf76ad1e..12e727301262 100644
>>> --- a/arch/x86/include/asm/kvm_host.h
>>> +++ b/arch/x86/include/asm/kvm_host.h
>>> @@ -267,7 +267,13 @@ enum x86_intercept_stage;
>>>    #define PFERR_GUEST_ENC_MASK    BIT_ULL(34)
>>>    #define PFERR_GUEST_SIZEM_MASK  BIT_ULL(35)
>>>    #define PFERR_GUEST_VMPL_MASK   BIT_ULL(36)
>>> +
>>> +/*
>>> + * IMPLICIT_ACCESS is a KVM-defined flag used to correctly perform SMAP checks
>>> + * when emulating instructions that triggers implicit access.
>>> + */
>>>    #define PFERR_IMPLICIT_ACCESS   BIT_ULL(48)
>>> +#define PFERR_SYNTHETIC_MASK    (PFERR_IMPLICIT_ACCESS)
>>>    #define PFERR_NESTED_GUEST_PAGE (PFERR_GUEST_PAGE_MASK | \
>>>                                     PFERR_WRITE_MASK | \
>>> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
>>> index c72a2033ca96..5562d693880a 100644
>>> --- a/arch/x86/kvm/mmu/mmu.c
>>> +++ b/arch/x86/kvm/mmu/mmu.c
>>> @@ -4502,6 +4502,9 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
>>>    		return -EFAULT;
>>>    #endif
>>> +	/* Ensure the above sanity check also covers KVM-defined flags. */
>>
>> 1. There is no sanity check above related to KVM-defined flags yet. It has
>> to be after Patch 6.
>
> Ya, it's not just the comment, the entire changelog expects this patch to land
> after patch 6.
>>
>> 2. I somehow cannot parse the comment properly, though I know it's to ensure
>> KVM-defined PFERR_SYNTHETIC_MASK not contain any bit below 32-bits.
>
> Hmm, how about this?
>
> 	/*
> 	 * Ensure that the above sanity check on hardware error code bits 63:32
> 	 * also prevents false positives on KVM-defined flags.
> 	 */
>

Maybe it's just myself inability, I still cannot interpret it well.

Can't we put it above the sanity check of error code, and just with a
comment like

/*
 * Ensure KVM-defined flags not occupied any bits below 32-bits,
 * that are used by hardware.
 * /

On Tue, May 14, 2024, Xiaoyao Li wrote:
> On 5/14/2024 1:31 AM, Sean Christopherson wrote:
> > On Mon, May 13, 2024, Xiaoyao Li wrote:
> > > On 5/7/2024 11:58 PM, Paolo Bonzini wrote:
> > > > +#define PFERR_SYNTHETIC_MASK    (PFERR_IMPLICIT_ACCESS)
> > > >    #define PFERR_NESTED_GUEST_PAGE (PFERR_GUEST_PAGE_MASK | \
> > > >                                     PFERR_WRITE_MASK | \
> > > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > > index c72a2033ca96..5562d693880a 100644
> > > > --- a/arch/x86/kvm/mmu/mmu.c
> > > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > > @@ -4502,6 +4502,9 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
> > > >    		return -EFAULT;
> > > >    #endif
> > > > +	/* Ensure the above sanity check also covers KVM-defined flags. */
> > >
> > > 1. There is no sanity check above related to KVM-defined flags yet. It has
> > > to be after Patch 6.
> >
> > Ya, it's not just the comment, the entire changelog expects this patch to land
> > after patch 6.
> > >
> > > 2. I somehow cannot parse the comment properly, though I know it's to ensure
> > > KVM-defined PFERR_SYNTHETIC_MASK not contain any bit below 32-bits.
> >
> > Hmm, how about this?
> >
> > 	/*
> > 	 * Ensure that the above sanity check on hardware error code bits 63:32
> > 	 * also prevents false positives on KVM-defined flags.
> > 	 */
> >
>
> Maybe it's just myself inability, I still cannot interpret it well.
>
> Can't we put it above the sanity check of error code, and just with a
> comment like
>
> /*
>  * Ensure KVM-defined flags not occupied any bits below 32-bits,
>  * that are used by hardware.

This is somewhat misleading, as hardware does use bits 63:32 (for #NPF), just not
for #PF error codes.  And the reason I'm using rather indirect wording is that
KVM _could_ define synthetic flags in bits 31:0, there's simply a higher probability
of needing to reshuffle bit numbers due to a conflict with a future feature.

Is this better?  I think it captures what you're looking for, while hopefully also
capturing that staying out of bits 31:0 isn't a hard requirement.

	/*
	 * Restrict KVM-defined flags to bits 63:32 so that it's impossible for
	 * them to conflict with #PF error codes, which are limited to 32 bits.
	 */

On 5/14/2024 11:32 PM, Sean Christopherson wrote:
> On Tue, May 14, 2024, Xiaoyao Li wrote:
>> On 5/14/2024 1:31 AM, Sean Christopherson wrote:
>>> On Mon, May 13, 2024, Xiaoyao Li wrote:
>>>> On 5/7/2024 11:58 PM, Paolo Bonzini wrote:
>>>>> +#define PFERR_SYNTHETIC_MASK    (PFERR_IMPLICIT_ACCESS)
>>>>>    #define PFERR_NESTED_GUEST_PAGE (PFERR_GUEST_PAGE_MASK | \
>>>>>                                     PFERR_WRITE_MASK | \
>>>>> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
>>>>> index c72a2033ca96..5562d693880a 100644
>>>>> --- a/arch/x86/kvm/mmu/mmu.c
>>>>> +++ b/arch/x86/kvm/mmu/mmu.c
>>>>> @@ -4502,6 +4502,9 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
>>>>>    		return -EFAULT;
>>>>>    #endif
>>>>> +	/* Ensure the above sanity check also covers KVM-defined flags. */
>>>>
>>>> 1. There is no sanity check above related to KVM-defined flags yet. It has
>>>> to be after Patch 6.
>>>
>>> Ya, it's not just the comment, the entire changelog expects this patch to land
>>> after patch 6.
>>>>
>>>> 2. I somehow cannot parse the comment properly, though I know it's to ensure
>>>> KVM-defined PFERR_SYNTHETIC_MASK not contain any bit below 32-bits.
>>>
>>> Hmm, how about this?
>>>
>>> 	/*
>>> 	 * Ensure that the above sanity check on hardware error code bits 63:32
>>> 	 * also prevents false positives on KVM-defined flags.
>>> 	 */
>>>
>>
>> Maybe it's just myself inability, I still cannot interpret it well.
>>
>> Can't we put it above the sanity check of error code, and just with a
>> comment like
>>
>> /*
>>  * Ensure KVM-defined flags not occupied any bits below 32-bits,
>>  * that are used by hardware.
>
> This is somewhat misleading, as hardware does use bits 63:32 (for #NPF), just not
> for #PF error codes.  And the reason I'm using rather indirect wording is that
> KVM _could_ define synthetic flags in bits 31:0, there's simply a higher probability
> of needing to reshuffle bit numbers due to a conflict with a future feature.
>
> Is this better?  I think it captures what you're looking for, while hopefully also
> capturing that staying out of bits 31:0 isn't a hard requirement.

yeah, it looks better!

> 	/*
> 	 * Restrict KVM-defined flags to bits 63:32 so that it's impossible for
> 	 * them to conflict with #PF error codes, which are limited to 32 bits.
> 	 */

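To make the bit layout behind the agreed wording concrete, here is a small standalone program (the flag positions are copied from the quoted kvm_host.h hunk; the show() helper and output format are made up for illustration) that prints where the hardware-defined #NPF flags and the KVM-defined synthetic flag sit relative to the 32-bit legacy #PF error code:

```c
#include <stdint.h>
#include <stdio.h>

#define BIT_ULL(n)              (1ULL << (n))

/* Hardware-defined #NPF-only flags, copied from the quoted kvm_host.h hunk. */
#define PFERR_GUEST_ENC_MASK    BIT_ULL(34)
#define PFERR_GUEST_SIZEM_MASK  BIT_ULL(35)
#define PFERR_GUEST_VMPL_MASK   BIT_ULL(36)

/* KVM-defined synthetic flag and the mask that collects such flags. */
#define PFERR_IMPLICIT_ACCESS   BIT_ULL(48)
#define PFERR_SYNTHETIC_MASK    (PFERR_IMPLICIT_ACCESS)

static void show(const char *name, uint64_t mask)
{
	/* A legacy #PF error code only ever populates the low 32 bits. */
	printf("%-22s 0x%016llx  (bits 31:0 -> 0x%08x)\n",
	       name, (unsigned long long)mask, (unsigned)(uint32_t)mask);
}

int main(void)
{
	show("GUEST_ENC",       PFERR_GUEST_ENC_MASK);
	show("GUEST_SIZEM",     PFERR_GUEST_SIZEM_MASK);
	show("GUEST_VMPL",      PFERR_GUEST_VMPL_MASK);
	show("IMPLICIT_ACCESS", PFERR_IMPLICIT_ACCESS);
	show("SYNTHETIC_MASK",  PFERR_SYNTHETIC_MASK);
	return 0;
}
```

Every mask prints with its low 32 bits zero, which is the property the discussed comment and BUILD_BUG_ON() are meant to preserve.
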
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 58bbcf76ad1e..12e727301262 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -267,7 +267,13 @@ enum x86_intercept_stage;
 #define PFERR_GUEST_ENC_MASK	BIT_ULL(34)
 #define PFERR_GUEST_SIZEM_MASK	BIT_ULL(35)
 #define PFERR_GUEST_VMPL_MASK	BIT_ULL(36)
+
+/*
+ * IMPLICIT_ACCESS is a KVM-defined flag used to correctly perform SMAP checks
+ * when emulating instructions that triggers implicit access.
+ */
 #define PFERR_IMPLICIT_ACCESS	BIT_ULL(48)
+#define PFERR_SYNTHETIC_MASK	(PFERR_IMPLICIT_ACCESS)
 
 #define PFERR_NESTED_GUEST_PAGE (PFERR_GUEST_PAGE_MASK |	\
 				 PFERR_WRITE_MASK |		\
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c72a2033ca96..5562d693880a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4502,6 +4502,9 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
 		return -EFAULT;
 #endif
 
+	/* Ensure the above sanity check also covers KVM-defined flags. */
+	BUILD_BUG_ON(lower_32_bits(PFERR_SYNTHETIC_MASK));
+
 	vcpu->arch.l1tf_flush_l1d = true;
 	if (!flags) {
 		trace_kvm_page_fault(vcpu, fault_address, error_code);
@@ -5786,17 +5789,6 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 	int r, emulation_type = EMULTYPE_PF;
 	bool direct = vcpu->arch.mmu->root_role.direct;
 
-	/*
-	 * IMPLICIT_ACCESS is a KVM-defined flag used to correctly perform SMAP
-	 * checks when emulating instructions that triggers implicit access.
-	 * WARN if hardware generates a fault with an error code that collides
-	 * with the KVM-defined value.  Clear the flag and continue on, i.e.
-	 * don't terminate the VM, as KVM can't possibly be relying on a flag
-	 * that KVM doesn't know about.
-	 */
-	if (WARN_ON_ONCE(error_code & PFERR_IMPLICIT_ACCESS))
-		error_code &= ~PFERR_IMPLICIT_ACCESS;
-
 	if (WARN_ON_ONCE(!VALID_PAGE(vcpu->arch.mmu->root.hpa)))
 		return RET_PF_RETRY;
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 0f3b59da0d4a..535018f152a3 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2047,6 +2047,15 @@ static int npf_interception(struct kvm_vcpu *vcpu)
 	u64 fault_address = svm->vmcb->control.exit_info_2;
 	u64 error_code = svm->vmcb->control.exit_info_1;
 
+	/*
+	 * WARN if hardware generates a fault with an error code that collides
+	 * with KVM-defined sythentic flags.  Clear the flags and continue on,
+	 * i.e. don't terminate the VM, as KVM can't possibly be relying on a
+	 * flag that KVM doesn't know about.
+	 */
+	if (WARN_ON_ONCE(error_code & PFERR_SYNTHETIC_MASK))
+		error_code &= ~PFERR_SYNTHETIC_MASK;
+
 	trace_kvm_page_fault(vcpu, fault_address, error_code);
 	return kvm_mmu_page_fault(vcpu, fault_address, error_code,
 			static_cpu_has(X86_FEATURE_DECODEASSISTS) ?
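The runtime half of the patch is the WARN-and-clear added to npf_interception(). The user-space sketch below (a hypothetical error-code value; warn_on_once() and sanitize_npf_error_code() are stand-ins invented for illustration, not KVM functions) mimics that sanitization: a colliding bit triggers a one-time warning and is stripped before the error code would be handed to the page-fault handler.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BIT_ULL(n)             (1ULL << (n))
#define PFERR_IMPLICIT_ACCESS  BIT_ULL(48)
#define PFERR_SYNTHETIC_MASK   (PFERR_IMPLICIT_ACCESS)

/* Crude stand-in for the kernel's WARN_ON_ONCE(): report once, return cond. */
static bool warn_on_once(bool cond, const char *msg)
{
	static bool warned;

	if (cond && !warned) {
		warned = true;
		fprintf(stderr, "WARNING: %s\n", msg);
	}
	return cond;
}

/* Mirrors the sanitization the patch adds to npf_interception(). */
static uint64_t sanitize_npf_error_code(uint64_t error_code)
{
	if (warn_on_once(error_code & PFERR_SYNTHETIC_MASK,
			 "error code collides with KVM-defined synthetic flags"))
		error_code &= ~PFERR_SYNTHETIC_MASK;

	return error_code;
}

int main(void)
{
	/* Hypothetical #NPF error code: a user-mode fault with bit 48 spuriously set. */
	uint64_t ec = PFERR_IMPLICIT_ACCESS | 0x4;

	printf("before: 0x%016llx\n", (unsigned long long)ec);
	printf("after:  0x%016llx\n",
	       (unsigned long long)sanitize_npf_error_code(ec));
	return 0;
}
```

Warning rather than killing the VM matches the rationale in the patch comment: a stray hardware bit that KVM does not know about cannot be something KVM was relying on, so stripping it is the safe recovery.
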