Message ID | 1430986817-6260-2-git-send-email-guangrong.xiao@linux.intel.com (mailing list archive)
---|---
State | New, archived
On 07/05/2015 10:20, Xiao Guangrong wrote:
> The current permission check assumes that the RSVD bit in PFEC is
> always zero; however, this is not true, since the MMIO #PF path uses
> it to quickly identify MMIO accesses.
>
> Fix it by clearing the bit before walking the guest page table.
>
> Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
> ---
>  arch/x86/kvm/mmu.h         | 2 ++
>  arch/x86/kvm/paging_tmpl.h | 7 +++++++
>  2 files changed, 9 insertions(+)
>
> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> index c7d6563..06eb2fc 100644
> --- a/arch/x86/kvm/mmu.h
> +++ b/arch/x86/kvm/mmu.h
> @@ -166,6 +166,8 @@ static inline bool permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
>  	int index = (pfec >> 1) +
>  		    (smap >> (X86_EFLAGS_AC_BIT - PFERR_RSVD_BIT + 1));
>
> +	WARN_ON(pfec & PFERR_RSVD_MASK);
> +
>  	return (mmu->permissions[index] >> pte_access) & 1;
>  }
>
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index fd49c86..6e6d115 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -718,6 +718,13 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
>  					  mmu_is_nested(vcpu));
>  		if (likely(r != RET_MMIO_PF_INVALID))
>  			return r;
> +
> +		/*
> +		 * A page fault with PFEC.RSVD = 1 is caused by a shadow
> +		 * page fault and should not be used to walk the guest
> +		 * page table.
> +		 */
> +		error_code &= ~PFERR_RSVD_MASK;
>  	};
>
>  	r = mmu_topup_memory_caches(vcpu);

Applied.

For the other patches I'm waiting for an answer re. kvm_mmu_pte_write.

Thanks,

Paolo
On 05/07/2015 05:32 PM, Paolo Bonzini wrote:
> On 07/05/2015 10:20, Xiao Guangrong wrote:
>> The current permission check assumes that the RSVD bit in PFEC is
>> always zero; however, this is not true, since the MMIO #PF path uses
>> it to quickly identify MMIO accesses.
>>
>> Fix it by clearing the bit before walking the guest page table.
>>
>> Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
>> ---
>>  arch/x86/kvm/mmu.h         | 2 ++
>>  arch/x86/kvm/paging_tmpl.h | 7 +++++++
>>  2 files changed, 9 insertions(+)
>>
>> [...]
>
> Applied.
>
> For the other patches I'm waiting for an answer re. kvm_mmu_pte_write.

Sure. Actually, I noticed these bugs while reviewing your patches; I
will continue the review soon. :)
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index c7d6563..06eb2fc 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -166,6 +166,8 @@ static inline bool permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 	int index = (pfec >> 1) +
 		    (smap >> (X86_EFLAGS_AC_BIT - PFERR_RSVD_BIT + 1));
 
+	WARN_ON(pfec & PFERR_RSVD_MASK);
+
 	return (mmu->permissions[index] >> pte_access) & 1;
 }
 
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index fd49c86..6e6d115 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -718,6 +718,13 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
 					  mmu_is_nested(vcpu));
 		if (likely(r != RET_MMIO_PF_INVALID))
 			return r;
+
+		/*
+		 * A page fault with PFEC.RSVD = 1 is caused by a shadow
+		 * page fault and should not be used to walk the guest
+		 * page table.
+		 */
+		error_code &= ~PFERR_RSVD_MASK;
 	};
 
 	r = mmu_topup_memory_caches(vcpu);
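The aliasing that the new WARN_ON() guards against is easiest to see numerically: permission_fault() folds the PFEC bits into an index into mmu->permissions[], and the shifted SMAP term (AC is EFLAGS bit 18, shifted right by 18 - 3 + 1 = 16) lands exactly in the index bit that PFEC.RSVD would occupy after the `pfec >> 1`. A stray RSVD bit therefore silently selects the table entry reserved for the SMAP case. The standalone C demo below is not kernel code; the PFERR_* masks are redefined locally from the architectural PFEC layout (P = 0, W = 1, U = 2, RSVD = 3):

	#include <stdint.h>
	#include <stdio.h>

	#define PFERR_WRITE_MASK (1u << 1)	/* PFEC.W */
	#define PFERR_USER_MASK  (1u << 2)	/* PFEC.U */
	#define PFERR_RSVD_MASK  (1u << 3)	/* PFEC.RSVD */

	int main(void)
	{
		/* a user-mode write fault */
		uint32_t pfec = PFERR_WRITE_MASK | PFERR_USER_MASK;

		/* pfec >> 1 packs W/U/RSVD/fetch into index bits 0-3 */
		printf("index, RSVD clear: %u\n", pfec >> 1);	/* prints 3 */

		/*
		 * With PFEC.RSVD set, index bit 2 turns on -- the same bit
		 * the shifted SMAP term uses -- so permissions[] would be
		 * consulted at the slot reserved for the SMAP check.
		 */
		printf("index, RSVD set:   %u\n",
		       (pfec | PFERR_RSVD_MASK) >> 1);		/* prints 7 */
		return 0;
	}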
The current permission check assumes that the RSVD bit in PFEC is
always zero; however, this is not true, since the MMIO #PF path uses
it to quickly identify MMIO accesses.

Fix it by clearing the bit before walking the guest page table.

Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
 arch/x86/kvm/mmu.h         | 2 ++
 arch/x86/kvm/paging_tmpl.h | 7 +++++++
 2 files changed, 9 insertions(+)
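For context, the paging_tmpl.h hunk above sits in the fast MMIO path near the top of FNAME(page_fault). The sketch below reconstructs that surrounding block: only the added lines are verbatim from the patch, and the call to handle_mmio_page_fault() is an assumption inferred from the mmu_is_nested(vcpu) and RET_MMIO_PF_INVALID context lines visible in the diff, not quoted from the tree:

	if (unlikely(error_code & PFERR_RSVD_MASK)) {
		/*
		 * KVM marks MMIO-backed shadow PTEs with reserved bits,
		 * so PFEC.RSVD = 1 normally identifies an MMIO access
		 * that can be completed without walking the guest page
		 * table.
		 */
		r = handle_mmio_page_fault(vcpu, addr, error_code,
					   mmu_is_nested(vcpu));
		if (likely(r != RET_MMIO_PF_INVALID))
			return r;

		/*
		 * The fast path could not handle the fault; clear
		 * PFEC.RSVD before falling through, so that the guest
		 * walk and permission_fault() see a well-formed error
		 * code.
		 */
		error_code &= ~PFERR_RSVD_MASK;
	}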