Message ID | 4DEE20AD.5060109@cn.fujitsu.com (mailing list archive) |
---|---|
State | New, archived |
On Tue, Jun 07, 2011 at 08:59:25PM +0800, Xiao Guangrong wrote:
> Set slot bitmap only if the spte is present
>
> Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
> ---
>  arch/x86/kvm/mmu.c |   15 +++++++--------
>  1 files changed, 7 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index cda666a..125f78d 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -743,9 +743,6 @@ static int rmap_add(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
>  	struct kvm_mmu_page *sp;
>  	unsigned long *rmapp;
>
> -	if (!is_rmap_spte(*spte))
> -		return 0;
> -

Not sure if this is safe: what if the spte is set as nonpresent but
the rmap is not removed?

BTW, I don't see what patch 1 and this one have to do with the goal
of the series.
On 06/21/2011 12:28 AM, Marcelo Tosatti wrote:
> On Tue, Jun 07, 2011 at 08:59:25PM +0800, Xiao Guangrong wrote:
>> Set slot bitmap only if the spte is present
>>
>> Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
>> ---
>>  arch/x86/kvm/mmu.c |   15 +++++++--------
>>  1 files changed, 7 insertions(+), 8 deletions(-)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index cda666a..125f78d 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -743,9 +743,6 @@ static int rmap_add(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
>>  	struct kvm_mmu_page *sp;
>>  	unsigned long *rmapp;
>>
>> -	if (!is_rmap_spte(*spte))
>> -		return 0;
>> -
>
> Not sure if this is safe: what if the spte is set as nonpresent but
> the rmap is not removed?

It cannot happen: whenever we set an spte as nonpresent, we use
drop_spte() to remove the rmap, and we do the same in set_spte().

> BTW, I don't see what patch 1 and this one have to do with the goal
> of the series.

These patches are preparation for the mmio page fault work:
- Patch 1 fixes a bug in walking shadow pages, so we can safely use it
  to walk shadow pages locklessly.
- Patch 2 avoids adding an rmap entry for the mmio spte. :-)
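For context, the invariant described in the reply above corresponds roughly to the following sketch, modeled on the arch/x86/kvm/mmu.c of that period (an illustration, not the exact upstream code):

static void drop_spte(struct kvm *kvm, u64 *sptep, u64 new_spte)
{
	/*
	 * Only sptes that were present ever received an rmap entry, and
	 * every path that makes an spte nonpresent goes through here, so
	 * removing the rmap and clearing the spte stay paired: a
	 * nonpresent spte with a live rmap entry cannot exist.
	 */
	if (is_rmap_spte(*sptep))
		rmap_remove(kvm, sptep);
	__set_spte(sptep, new_spte);
}

This pairing is what makes it safe for the patch below to drop the guard inside rmap_add() and rely on the caller-side check in mmu_set_spte() instead.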
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index cda666a..125f78d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -743,9 +743,6 @@ static int rmap_add(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
 	struct kvm_mmu_page *sp;
 	unsigned long *rmapp;
 
-	if (!is_rmap_spte(*spte))
-		return 0;
-
 	sp = page_header(__pa(spte));
 	kvm_mmu_page_set_gfn(sp, spte - sp->spt, gfn);
 	rmapp = gfn_to_rmap(vcpu->kvm, gfn, sp->role.level);
@@ -2078,11 +2075,13 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	if (!was_rmapped && is_large_pte(*sptep))
 		++vcpu->kvm->stat.lpages;
 
-	page_header_update_slot(vcpu->kvm, sptep, gfn);
-	if (!was_rmapped) {
-		rmap_count = rmap_add(vcpu, sptep, gfn);
-		if (rmap_count > RMAP_RECYCLE_THRESHOLD)
-			rmap_recycle(vcpu, sptep, gfn);
+	if (is_shadow_present_pte(*sptep)) {
+		page_header_update_slot(vcpu->kvm, sptep, gfn);
+		if (!was_rmapped) {
+			rmap_count = rmap_add(vcpu, sptep, gfn);
+			if (rmap_count > RMAP_RECYCLE_THRESHOLD)
+				rmap_recycle(vcpu, sptep, gfn);
+		}
 	}
 	kvm_release_pfn_clean(pfn);
 	if (speculative) {
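For reference, the removed guard and the new caller-side check test the same condition: in the mmu.c of that period, is_rmap_spte() was a thin wrapper, roughly as below (a sketch from memory, not verbatim upstream code):

/* An spte carries an rmap entry exactly when it is shadow-present. */
static int is_shadow_present_pte(u64 pte)
{
	return pte != shadow_trap_nonpresent_pte
		&& pte != shadow_notrap_nonpresent_pte;
}

static int is_rmap_spte(u64 pte)
{
	return is_shadow_present_pte(pte);
}

Hoisting the check into mmu_set_spte() also keeps page_header_update_slot() from setting the slot bitmap for a nonpresent spte, which is what the patch title promises.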
Set slot bitmap only if the spte is present

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
---
 arch/x86/kvm/mmu.c |   15 +++++++--------
 1 files changed, 7 insertions(+), 8 deletions(-)