Message ID: 1368939152-11406-4-git-send-email-jun.nakajima@intel.com (mailing list archive)
State: New, archived
On 19/05/2013 06:52, Jun Nakajima wrote:
> From: Nadav Har'El <nyh@il.ibm.com>
>
> Since link_shadow_page() is used by a routine in mmu.c, add an
> EPT-specific link_shadow_page() in paging_tmpl.h, rather than moving
> it.
>
> Signed-off-by: Nadav Har'El <nyh@il.ibm.com>
> Signed-off-by: Jun Nakajima <jun.nakajima@intel.com>
> Signed-off-by: Xinhao Xu <xinhao.xu@intel.com>
> ---
>  arch/x86/kvm/paging_tmpl.h | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index 4c45654..dc495f9 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -461,6 +461,18 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
>  	}
>  }
>
> +#if PTTYPE == PTTYPE_EPT
> +static void FNAME(link_shadow_page)(u64 *sptep, struct kvm_mmu_page *sp)
> +{
> +	u64 spte;
> +
> +	spte = __pa(sp->spt) | VMX_EPT_READABLE_MASK | VMX_EPT_WRITABLE_MASK |
> +		VMX_EPT_EXECUTABLE_MASK;
> +
> +	mmu_spte_set(sptep, spte);
> +}
> +#endif

The function is small enough that the compiler will most likely inline
it. You can just handle it unconditionally with FNAME().

Paolo
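For illustration, a minimal sketch of what Paolo's suggestion could look like: define the helper once under FNAME(), so each instantiation of paging_tmpl.h picks its mask set at compile time and the call sites need no preprocessor guard. The exact shape below is an assumption, not the code that was eventually committed:

static void FNAME(link_shadow_page)(u64 *sptep, struct kvm_mmu_page *sp)
{
	u64 spte;

	/* Sketch only: paging_tmpl.h is compiled once per page-table
	 * flavour, so this name resolves to a distinct paging64_/
	 * paging32_/ept_ function per instantiation. */
#if PTTYPE == PTTYPE_EPT
	spte = __pa(sp->spt) | VMX_EPT_READABLE_MASK |
	       VMX_EPT_WRITABLE_MASK | VMX_EPT_EXECUTABLE_MASK;
#else
	spte = __pa(sp->spt) | PT_PRESENT_MASK | PT_WRITABLE_MASK |
	       shadow_user_mask | shadow_x_mask | shadow_accessed_mask;
#endif
	mmu_spte_set(sptep, spte);
}

Both call sites in FNAME(fetch) would then simply call FNAME(link_shadow_page)(it.sptep, sp) unconditionally.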
On 05/19/2013 12:52 PM, Jun Nakajima wrote:
> From: Nadav Har'El <nyh@il.ibm.com>
>
> Since link_shadow_page() is used by a routine in mmu.c, add an
> EPT-specific link_shadow_page() in paging_tmpl.h, rather than moving
> it.
>
> [...]
>
> +#if PTTYPE == PTTYPE_EPT
> +static void FNAME(link_shadow_page)(u64 *sptep, struct kvm_mmu_page *sp)
> +{
> +	u64 spte;
> +
> +	spte = __pa(sp->spt) | VMX_EPT_READABLE_MASK | VMX_EPT_WRITABLE_MASK |
> +		VMX_EPT_EXECUTABLE_MASK;
> +
> +	mmu_spte_set(sptep, spte);
> +}
> +#endif

The only difference between this function and the current
link_shadow_page() is shadow_accessed_mask. Can we add a parameter to
eliminate the difference, something like this?

static void link_shadow_page(u64 *sptep, struct kvm_mmu_page *sp, bool accessed)
{
	u64 spte;

	spte = __pa(sp->spt) | PT_PRESENT_MASK | PT_WRITABLE_MASK |
	       shadow_user_mask | shadow_x_mask;

	if (accessed)
		spte |= shadow_accessed_mask;

	mmu_spte_set(sptep, spte);
}
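For context, a hedged sketch of how the call sites might pass the new parameter; the conditions below illustrate the idea and are not necessarily what was finally merged:

	/* paging_tmpl.h, FNAME(fetch): EPT entries built here carry no
	 * accessed bit; the legacy flavours keep today's behaviour. */
	if (sp)
		link_shadow_page(it.sptep, sp, PTTYPE != PTTYPE_EPT);

	/* mmu.c, __direct_map(): behaviour unchanged, accessed bit set. */
	link_shadow_page(iterator.sptep, sp, true);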
Sure. Thanks for the suggestion.

On Tue, May 21, 2013 at 1:15 AM, Xiao Guangrong
<xiaoguangrong@linux.vnet.ibm.com> wrote:
> On 05/19/2013 12:52 PM, Jun Nakajima wrote:
>> [...]
>
> The only difference between this function and the current
> link_shadow_page() is shadow_accessed_mask. Can we add a parameter to
> eliminate the difference, something like this?
>
> static void link_shadow_page(u64 *sptep, struct kvm_mmu_page *sp, bool accessed)
> {
> 	u64 spte;
>
> 	spte = __pa(sp->spt) | PT_PRESENT_MASK | PT_WRITABLE_MASK |
> 	       shadow_user_mask | shadow_x_mask;
>
> 	if (accessed)
> 		spte |= shadow_accessed_mask;
>
> 	mmu_spte_set(sptep, spte);
> }
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 4c45654..dc495f9 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -461,6 +461,18 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
 		}
 	}
 }
+#if PTTYPE == PTTYPE_EPT
+static void FNAME(link_shadow_page)(u64 *sptep, struct kvm_mmu_page *sp)
+{
+	u64 spte;
+
+	spte = __pa(sp->spt) | VMX_EPT_READABLE_MASK | VMX_EPT_WRITABLE_MASK |
+		VMX_EPT_EXECUTABLE_MASK;
+
+	mmu_spte_set(sptep, spte);
+}
+#endif
+
 /*
  * Fetch a shadow pte for a specific level in the paging hierarchy.
  * If the guest tries to write a write-protected page, we need to
@@ -513,7 +525,11 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 			goto out_gpte_changed;
 
 		if (sp)
+#if PTTYPE == PTTYPE_EPT
+			FNAME(link_shadow_page)(it.sptep, sp);
+#else
 			link_shadow_page(it.sptep, sp);
+#endif
 	}
 
 	for (;
@@ -533,7 +549,11 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 
 		sp = kvm_mmu_get_page(vcpu, direct_gfn, addr, it.level-1,
 				      true, direct_access, it.sptep);
+#if PTTYPE == PTTYPE_EPT
+		FNAME(link_shadow_page)(it.sptep, sp);
+#else
 		link_shadow_page(it.sptep, sp);
+#endif
 	}
 
 	clear_sp_write_flooding_count(it.sptep);
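As background to the masks in the hunk above: unlike x86 page tables, an EPT entry has no present bit; bits 0-2 are the read/write/execute permissions, so a non-leaf entry is just the child table's physical address plus those three bits. A small self-contained sketch of the computation (mask values as defined in arch/x86/include/asm/vmx.h; the physical address is hypothetical):

#include <stdint.h>
#include <stdio.h>

/* EPT permission bits, as defined in arch/x86/include/asm/vmx.h. */
#define VMX_EPT_READABLE_MASK   0x1ull
#define VMX_EPT_WRITABLE_MASK   0x2ull
#define VMX_EPT_EXECUTABLE_MASK 0x4ull

int main(void)
{
	/* Hypothetical page-aligned result of __pa(sp->spt). */
	uint64_t pa = 0x123456000ull;

	/* Same computation as FNAME(link_shadow_page) above: the child
	 * table's physical address plus full RWX permissions. */
	uint64_t spte = pa | VMX_EPT_READABLE_MASK | VMX_EPT_WRITABLE_MASK |
			VMX_EPT_EXECUTABLE_MASK;

	printf("spte = 0x%llx\n", (unsigned long long)spte); /* 0x123456007 */
	return 0;
}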