Message ID | 1466478746-14153-3-git-send-email-bsd@redhat.com (mailing list archive)
---|---
State | New, archived
On 21/06/2016 05:12, Bandan Das wrote:
>  static int is_shadow_present_pte(u64 pte)
>  {
> -	return pte & PT_PRESENT_MASK && !is_mmio_spte(pte);
> +	int xbit = shadow_xonly_valid ? pte & shadow_x_mask : 0;
> +
> +	return (pte & PT_PRESENT_MASK) | xbit
> +		&& !is_mmio_spte(pte);
>  }

You can change this to just:

	return (pte & (PT_PRESENT_MASK | shadow_x_mask)) &&
		!is_mmio_spte(pte);

because you know this is only called on a valid spte.  Reserved bits are
checked elsewhere (and shouldn't be there anyway!).

>  static int is_large_pte(u64 pte)
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index bc019f7..9f5bd06 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -133,7 +133,12 @@ static inline int FNAME(is_present_gpte)(unsigned long pte)
>  #if PTTYPE != PTTYPE_EPT
>  	return is_present_gpte(pte);
>  #else
> -	return pte & 7;
> +	/*
> +	 * For EPT, bits [2:0] can be 001, 100 or 111
> +	 * Further, for 100, logical processor should support
> +	 * execute-only.
> +	 */
> +	return (pte & 1) || (shadow_xonly_valid && (pte & 4));
>  #endif

Likewise, is_present_gpte is always followed by an is_rsvd_bits_set
check, which does check for execute-only.  Modify
reset_tdp_shadow_zero_bits_mask to pass the right execonly value to
__reset_rsvds_bits_mask_ept, and you can leave is_present_gpte as is.

Paolo

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
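Paolo's suggested one-liner can be sanity-checked outside the kernel with a small stand-alone sketch. The mask values and the `is_mmio_spte()` stub below are illustrative placeholders, not the real KVM definitions (in KVM, `shadow_x_mask` is configured at init time and `is_mmio_spte()` compares against the MMIO spte mask):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in values: PT_PRESENT_MASK is bit 0; shadow_x_mask is modeled
 * here as EPT's execute permission bit (bit 2). */
#define PT_PRESENT_MASK (1ULL << 0)
static const uint64_t shadow_x_mask = 1ULL << 2;

/* Stub standing in for the real MMIO-spte check. */
static int is_mmio_spte(uint64_t pte)
{
	return 0;
}

/* Paolo's suggested simplification: a valid (non-MMIO) spte is present
 * if either the present bit or the execute bit is set; reserved-bit
 * combinations are rejected elsewhere, so no shadow_xonly_valid test
 * is needed here. */
static int is_shadow_present_pte(uint64_t pte)
{
	return (pte & (PT_PRESENT_MASK | shadow_x_mask)) &&
	       !is_mmio_spte(pte);
}
```

Folding `shadow_x_mask` into the mask unconditionally is safe precisely because, as Paolo notes, the function is only called on sptes whose reserved bits have already been validated.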
On 06/21/2016 11:12 AM, Bandan Das wrote:
> Until now, is_present_gpte checks for all bits set (XWR) and
> is_shadow_present_pte() checks for the present bit set. To support
> execute-only mappings we should teach these functions to distinguish
> 100 as valid based on host support.
>
> Signed-off-by: Bandan Das <bsd@redhat.com>
> ---
>  arch/x86/kvm/mmu.c         | 5 ++++-
>  arch/x86/kvm/paging_tmpl.h | 7 ++++++-
>  2 files changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 37b01b1..57d8696 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -308,7 +308,10 @@ static int is_nx(struct kvm_vcpu *vcpu)
>
>  static int is_shadow_present_pte(u64 pte)
>  {
> -	return pte & PT_PRESENT_MASK && !is_mmio_spte(pte);
> +	int xbit = shadow_xonly_valid ? pte & shadow_x_mask : 0;
> +
> +	return (pte & PT_PRESENT_MASK) | xbit
> +		&& !is_mmio_spte(pte);

It could simply use pte & (PT_PRESENT_MASK | shadow_x_mask) instead, as
this is a host-controlled page table and we never clear the single
PT_PRESENT_MASK bit (we only fully clear the low 32 bits).

>  }
>
>  static int is_large_pte(u64 pte)
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index bc019f7..9f5bd06 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -133,7 +133,12 @@ static inline int FNAME(is_present_gpte)(unsigned long pte)
>  #if PTTYPE != PTTYPE_EPT
>  	return is_present_gpte(pte);
>  #else
> -	return pte & 7;
> +	/*
> +	 * For EPT, bits [2:0] can be 001, 100 or 111
> +	 * Further, for 100, logical processor should support
> +	 * execute-only.
> +	 */
> +	return (pte & 1) || (shadow_xonly_valid && (pte & 4));

No. For a !shadow_xonly_valid guest, 100b in a gpte cannot be allowed to
pass FNAME(is_present_gpte), so that we get an error code without RSVD
set and finally an EPT violation is injected rather than an EPT
misconfig.

Actually, the current code handles shadow_xonly_valid very well; please
refer to kvm_init_shadow_ept_mmu(), which already handles the case of
'execonly'.
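The EPT permission rules being debated can be sketched in a stand-alone toy (not KVM code). In an EPT entry, bits [2:0] are Read/Write/eXecute; write without read (010b, 110b) is reserved, and execute-only (100b) is legal only when the processor advertises support for it. The `host_has_execonly` flag below stands in for that capability check. Note this toy folds the presence test and the reserved-combination test into one function, whereas KVM deliberately keeps them separate (FNAME(is_present_gpte) vs. is_rsvd_bits_set), which is exactly the point both reviewers make:

```c
#include <assert.h>
#include <stdint.h>

/* Validity of an EPT guest PTE's low permission bits, combining the
 * presence and reserved-bit checks for illustration only. */
static int ept_rwx_valid(uint64_t gpte, int host_has_execonly)
{
	unsigned int r = gpte & 1;
	unsigned int w = (gpte >> 1) & 1;
	unsigned int x = (gpte >> 2) & 1;

	if (!r && !w && !x)
		return 0;                  /* 000b: not present */
	if (w && !r)
		return 0;                  /* 010b, 110b: reserved */
	if (x && !r && !w)
		return host_has_execonly;  /* 100b: execute-only */
	return 1;                          /* 001b, 011b, 101b, 111b */
}
```

The split matters for fault reporting: a combination rejected by the reserved-bit check produces an error code with RSVD set (EPT misconfig path), while a not-present entry produces an EPT violation, so moving the execute-only test into the presence check would change which exit the guest sees.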
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 37b01b1..57d8696 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -308,7 +308,10 @@ static int is_nx(struct kvm_vcpu *vcpu)
 
 static int is_shadow_present_pte(u64 pte)
 {
-	return pte & PT_PRESENT_MASK && !is_mmio_spte(pte);
+	int xbit = shadow_xonly_valid ? pte & shadow_x_mask : 0;
+
+	return (pte & PT_PRESENT_MASK) | xbit
+		&& !is_mmio_spte(pte);
 }
 
 static int is_large_pte(u64 pte)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index bc019f7..9f5bd06 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -133,7 +133,12 @@ static inline int FNAME(is_present_gpte)(unsigned long pte)
 #if PTTYPE != PTTYPE_EPT
 	return is_present_gpte(pte);
 #else
-	return pte & 7;
+	/*
+	 * For EPT, bits [2:0] can be 001, 100 or 111
+	 * Further, for 100, logical processor should support
+	 * execute-only.
+	 */
+	return (pte & 1) || (shadow_xonly_valid && (pte & 4));
 #endif
 }
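The behavioral difference the mmu.c hunk introduces can be shown with a stand-alone before/after sketch. The constants and the `is_mmio_spte()` stub are illustrative stand-ins for the kernel's definitions, with `shadow_x_mask` modeled as EPT's execute bit:

```c
#include <assert.h>
#include <stdint.h>

#define PT_PRESENT_MASK (1ULL << 0)
static const uint64_t shadow_x_mask = 1ULL << 2;
static const int shadow_xonly_valid = 1; /* host supports execute-only */

static int is_mmio_spte(uint64_t pte) { return 0; } /* stub */

/* Before the patch: only the present (read) bit makes an spte present,
 * so an execute-only spte is misclassified as not present. */
static int present_before(uint64_t pte)
{
	return (pte & PT_PRESENT_MASK) && !is_mmio_spte(pte);
}

/* After the patch: an execute-only spte (X set, R clear) also counts,
 * but only when the host supports execute-only EPT mappings.
 * Parentheses added around the bitwise OR for readability; the parse
 * is the same as in the patch. */
static int present_after(uint64_t pte)
{
	uint64_t xbit = shadow_xonly_valid ? pte & shadow_x_mask : 0;

	return ((pte & PT_PRESENT_MASK) | xbit) && !is_mmio_spte(pte);
}
```

For an ordinary readable spte the two checks agree; they diverge only on the new execute-only case.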
Until now, is_present_gpte checks for all bits set (XWR) and
is_shadow_present_pte() checks for the present bit set. To support
execute-only mappings we should teach these functions to distinguish
100 as valid based on host support.

Signed-off-by: Bandan Das <bsd@redhat.com>
---
 arch/x86/kvm/mmu.c         | 5 ++++-
 arch/x86/kvm/paging_tmpl.h | 7 ++++++-
 2 files changed, 10 insertions(+), 2 deletions(-)