From patchwork Fri May 7 03:58:41 2010
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 97572
Message-ID: <4BE38FF1.3030603@cn.fujitsu.com>
Date: Fri, 07 May 2010 11:58:41 +0800
From: Xiao Guangrong
To: Avi Kivity
Cc: Marcelo Tosatti, KVM list, LKML
Subject: [PATCH v5 6/9] KVM MMU: support keeping sp live while it's out of protection
References: <4BE2818A.5000301@cn.fujitsu.com> <4BE28C6B.8010505@cn.fujitsu.com>
In-Reply-To: <4BE28C6B.8010505@cn.fujitsu.com>
X-Mailing-List: kvm@vger.kernel.org

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 4077a9c..2d3347c 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -894,6 +894,7 @@ static int is_empty_shadow_page(u64 *spt)
 static void kvm_mmu_free_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	ASSERT(is_empty_shadow_page(sp->spt));
+	hlist_del(&sp->hash_link);
 	list_del(&sp->link);
 	__free_page(virt_to_page(sp->spt));
 	__free_page(virt_to_page(sp->gfns));
@@ -1542,12 +1543,13 @@ static int kvm_mmu_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 	if (!sp->active_count) {
 		/* Count self */
 		ret++;
-		hlist_del(&sp->hash_link);
 		kvm_mmu_free_page(kvm, sp);
 	} else {
 		sp->role.invalid = 1;
 		list_move(&sp->link, &kvm->arch.active_mmu_pages);
-		kvm_reload_remote_mmus(kvm);
+		/* No need to reload the mmu if an unsync page is zapped */
+		if (sp->role.level != PT_PAGE_TABLE_LEVEL)
+			kvm_reload_remote_mmus(kvm);
 	}
 	kvm_mmu_reset_last_pte_updated(kvm);
 	return ret;
@@ -1782,7 +1784,8 @@ static void kvm_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn)
 
 	bucket = &vcpu->kvm->arch.mmu_page_hash[index];
 	hlist_for_each_entry_safe(s, node, n, bucket, hash_link) {
-		if (s->gfn != gfn || s->role.direct || s->unsync)
+		if (s->gfn != gfn || s->role.direct || s->unsync ||
+		      s->role.invalid)
 			continue;
 		WARN_ON(s->role.level != PT_PAGE_TABLE_LEVEL);
 		__kvm_unsync_page(vcpu, s);
@@ -1807,7 +1810,7 @@ static int mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
 		if (s->role.level != PT_PAGE_TABLE_LEVEL)
 			return 1;
 
-		if (!need_unsync && !s->unsync) {
+		if (!need_unsync && !s->unsync && !s->role.invalid) {
 			if (!can_unsync || !oos_shadow)
 				return 1;
 			need_unsync = true;
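
For readers outside the kernel tree, here is a minimal user-space sketch (not kernel code) of the lifetime rule the patch establishes: a shadow page that is still referenced when zapped is only marked role.invalid and stays on the hash list; it leaves the hash only in the final free path, and lookups that would unsync a page must skip invalid entries. All names here (struct sp, sp_zap, sp_free, sp_unsync_lookup, the single-bucket hash) are hypothetical stand-ins for kvm_mmu_page, kvm_mmu_zap_page, kvm_mmu_free_page and kvm_unsync_pages, under the assumption of one reference counter standing in for sp->active_count.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct sp {
	unsigned long gfn;
	bool invalid;		/* stand-in for sp->role.invalid */
	bool unsync;
	int active_count;	/* stand-in for sp->active_count */
	struct sp *hash_next;	/* simplified stand-in for hash_link */
};

static struct sp *hash_bucket;	/* a single bucket, for brevity */

static void sp_hash_del(struct sp *sp)
{
	struct sp **pp = &hash_bucket;

	while (*pp && *pp != sp)
		pp = &(*pp)->hash_next;
	if (*pp)
		*pp = sp->hash_next;
}

/* Final free: only here does the page leave the hash, mirroring the
 * patch moving hlist_del() into kvm_mmu_free_page(). */
static void sp_free(struct sp *sp)
{
	sp_hash_del(sp);
	free(sp);
}

/* Zap: if the page is still referenced, keep it hashed but mark it
 * invalid; it is freed once the last reference is dropped. */
static void sp_zap(struct sp *sp)
{
	if (!sp->active_count)
		sp_free(sp);
	else
		sp->invalid = true;	/* stays on the hash until sp_free() */
}

/* A lookup that wants to unsync a page must skip invalid entries,
 * the same check the patch adds to kvm_unsync_pages(). */
static struct sp *sp_unsync_lookup(unsigned long gfn)
{
	struct sp *s;

	for (s = hash_bucket; s; s = s->hash_next)
		if (s->gfn == gfn && !s->unsync && !s->invalid)
			return s;
	return NULL;
}

int main(void)
{
	struct sp *sp = calloc(1, sizeof(*sp));

	sp->gfn = 0x1234;
	sp->active_count = 1;	/* still referenced somewhere */
	sp->hash_next = hash_bucket;
	hash_bucket = sp;

	sp_zap(sp);		/* marked invalid, still on the hash */
	printf("lookup after zap:  %p\n", (void *)sp_unsync_lookup(0x1234));

	sp->active_count = 0;	/* last reference dropped */
	sp_zap(sp);		/* now unlinked from the hash and freed */
	printf("lookup after free: %p\n", (void *)sp_unsync_lookup(0x1234));
	return 0;
}

The first lookup returns NULL even though the page is still hashed, which is the point of the added role.invalid checks: a zapped-but-live page must neither be unsynced again nor excuse a gfn from write protection.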