From patchwork Fri Jun 11 13:30:36 2010
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 105576
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
	by demeter.kernel.org (8.14.3/8.14.3) with ESMTP id o5BDYZRL026270
	for ; Fri, 11 Jun 2010 13:34:35 GMT
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1760308Ab0FKNeL (ORCPT );
	Fri, 11 Jun 2010 09:34:11 -0400
Received: from cn.fujitsu.com ([222.73.24.84]:53458 "EHLO song.cn.fujitsu.com"
	rhost-flags-OK-FAIL-OK-OK) by vger.kernel.org with ESMTP
	id S1760244Ab0FKNeJ (ORCPT ); Fri, 11 Jun 2010 09:34:09 -0400
Received: from tang.cn.fujitsu.com (tang.cn.fujitsu.com [10.167.250.3])
	by song.cn.fujitsu.com (Postfix) with ESMTP id 9FC4617011E;
	Fri, 11 Jun 2010 21:34:06 +0800 (CST)
Received: from fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id o5BDVkLI027836;
	Fri, 11 Jun 2010 21:31:46 +0800
Received: from [10.167.141.99] (unknown [10.167.141.99])
	by fnst.cn.fujitsu.com (Postfix) with ESMTPA id 391B110C007;
	Fri, 11 Jun 2010 21:33:48 +0800 (CST)
Message-ID: <4C123A7C.1070205@cn.fujitsu.com>
Date: Fri, 11 Jun 2010 21:30:36 +0800
From: Xiao Guangrong
User-Agent: Thunderbird 2.0.0.24 (Windows/20100228)
MIME-Version: 1.0
To: Avi Kivity
CC: Marcelo Tosatti, LKML, KVM list
Subject: [PATCH 3/7] KVM: MMU: avoid double write protected in sync page path
References: <4C1239EE.3090904@cn.fujitsu.com>
In-Reply-To: <4C1239EE.3090904@cn.fujitsu.com>
Sender: kvm-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org
X-Greylist: IP, sender and recipient auto-whitelisted, not delayed by
	milter-greylist-4.2.3 (demeter.kernel.org [140.211.167.41]);
	Fri, 11 Jun 2010 13:34:35 +0000 (UTC)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 21ab85b..2ffd673 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1216,6 +1216,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 		if ((sp)->gfn != (gfn) || (sp)->role.direct ||	\
 			(sp)->role.invalid) {} else
 
+/* @sp->gfn should be write-protected at the call site */
 static int __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 			   struct list_head *invalid_list, bool clear_unsync)
 {
@@ -1224,11 +1225,8 @@ static int __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		return 1;
 	}
 
-	if (clear_unsync) {
-		if (rmap_write_protect(vcpu->kvm, sp->gfn))
-			kvm_flush_remote_tlbs(vcpu->kvm);
+	if (clear_unsync)
 		kvm_unlink_unsync_page(vcpu->kvm, sp);
-	}
 
 	if (vcpu->arch.mmu.sync_page(vcpu, sp)) {
 		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
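
For context (an illustration, not part of the patch): the write protection removed
above is redundant because the sync path already write-protects sp->gfn before it
reaches __kvm_sync_page(); the new comment documents that invariant at the function.
Below is a rough caller-side sketch of that pattern, reconstructed from memory of
mmu_sync_children() in this era of mmu.c; the exact names and arguments may not
match the tree this patch applies to.

	/*
	 * Rough sketch, not taken from this patch: the caller write-protects
	 * each gfn and flushes remote TLBs once, before syncing the pages,
	 * so __kvm_sync_page() no longer needs to repeat the work.
	 */
	for_each_sp(pages, sp, parents, i)
		protected |= rmap_write_protect(vcpu->kvm, sp->gfn);

	if (protected)
		kvm_flush_remote_tlbs(vcpu->kvm);

	for_each_sp(pages, sp, parents, i) {
		kvm_sync_page(vcpu, sp, &invalid_list);	/* ends up in __kvm_sync_page() */
		mmu_pages_clear_parents(&parents);
	}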