From patchwork Wed Jan 23 10:04:17 2013
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 2023311
Message-ID: <50FFB5A1.5090708@linux.vnet.ibm.com>
Date: Wed, 23 Jan 2013 18:04:17 +0800
From: Xiao Guangrong
To: Marcelo Tosatti
CC: Avi Kivity, Gleb Natapov, LKML, KVM
Subject: [PATCH v2 01/12] KVM: MMU: lazily drop large spte
X-Mailing-List: kvm@vger.kernel.org

Do not drop a large spte until it can be replaced by small pages, so that
the guest can happily read memory through it.

The idea is from Avi:
| As I mentioned before, write-protecting a large spte is a good idea,
| since it moves some work from protect-time to fault-time, so it reduces
| jitter. This removes the need for the return value.

Signed-off-by: Xiao Guangrong
---
 arch/x86/kvm/mmu.c |   21 ++++++---------------
 1 files changed, 6 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9f628f7..0f90269 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1105,7 +1105,7 @@ static void drop_large_spte(struct kvm_vcpu *vcpu, u64 *sptep)
 
 /*
  * Write-protect on the specified @sptep, @pt_protect indicates whether
- * spte writ-protection is caused by protecting shadow page table.
+ * spte write-protection is caused by protecting shadow page table.
  * @flush indicates whether tlb need be flushed.
  *
  * Note: write protection is difference between drity logging and spte
@@ -1114,31 +1114,23 @@ static void drop_large_spte(struct kvm_vcpu *vcpu, u64 *sptep)
  *   its dirty bitmap is properly set.
  * - for spte protection, the spte can be writable only after unsync-ing
  *   shadow page.
- *
- * Return true if the spte is dropped.
  */
-static bool
+static void
 spte_write_protect(struct kvm *kvm, u64 *sptep, bool *flush, bool pt_protect)
 {
 	u64 spte = *sptep;
 
 	if (!is_writable_pte(spte) &&
 	      !(pt_protect && spte_is_locklessly_modifiable(spte)))
-		return false;
+		return;
 
 	rmap_printk("rmap_write_protect: spte %p %llx\n", sptep, *sptep);
 
-	if (__drop_large_spte(kvm, sptep)) {
-		*flush |= true;
-		return true;
-	}
-
 	if (pt_protect)
 		spte &= ~SPTE_MMU_WRITEABLE;
 	spte = spte & ~PT_WRITABLE_MASK;
 
 	*flush |= mmu_spte_update(sptep, spte);
-	return false;
 }
 
 static bool __rmap_write_protect(struct kvm *kvm, unsigned long *rmapp,
@@ -1150,11 +1142,8 @@ static bool __rmap_write_protect(struct kvm *kvm, unsigned long *rmapp,
 	for (sptep = rmap_get_first(*rmapp, &iter); sptep;) {
 		BUG_ON(!(*sptep & PT_PRESENT_MASK));
 
-		if (spte_write_protect(kvm, sptep, &flush, pt_protect)) {
-			sptep = rmap_get_first(*rmapp, &iter);
-			continue;
-		}
+		spte_write_protect(kvm, sptep, &flush, pt_protect);
 		sptep = rmap_get_next(&iter);
 	}
 
@@ -2611,6 +2600,8 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t v, int write,
 			break;
 		}
 
+		drop_large_spte(vcpu, iterator.sptep);
+
 		if (!is_shadow_present_pte(*iterator.sptep)) {
 			u64 base_addr = iterator.addr;
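
For readers skimming the diff, the net effect can be modeled outside the
kernel. The standalone sketch below is illustrative only: the masks and the
handle_write_fault() helper are simplified stand-ins, not the real mmu.c
definitions. At protect-time only the writable bit of the large spte is
cleared, so guest reads still go through the large mapping; the large spte is
torn down lazily at fault-time, which is what the drop_large_spte() call added
to __direct_map() does before small sptes are installed in its place.

/*
 * Toy model of "lazily drop large spte" (assumed names/masks, not kernel code).
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PT_WRITABLE_MASK	(1ull << 1)
#define PT_PAGE_SIZE_MASK	(1ull << 7)	/* "large page" bit */

/* Protect-time: clear only the writable bit; the large spte survives,
 * so the guest can still read through it without faulting. */
static void spte_write_protect(uint64_t *sptep, bool *flush)
{
	if (!(*sptep & PT_WRITABLE_MASK))
		return;

	*sptep &= ~PT_WRITABLE_MASK;
	*flush = true;
}

/* Fault-time: only a guest write reaches this path; now the large spte
 * is dropped so small, writable sptes can be mapped underneath it. */
static void handle_write_fault(uint64_t *sptep)
{
	if (*sptep & PT_PAGE_SIZE_MASK)
		*sptep = 0;
	/* ...install a small, writable spte for the faulting page... */
}

int main(void)
{
	uint64_t spte = PT_PAGE_SIZE_MASK | PT_WRITABLE_MASK;
	bool flush = false;

	spte_write_protect(&spte, &flush);	/* reads still hit the large page */
	printf("after protect: %#llx, flush=%d\n",
	       (unsigned long long)spte, flush);

	handle_write_fault(&spte);		/* dropped only on first write */
	printf("after fault:   %#llx\n", (unsigned long long)spte);
	return 0;
}

Compared with the old behaviour (dropping the large spte at protect-time),
read-only accesses never fault, and the rmap walk in __rmap_write_protect()
no longer has to restart after a drop, which is why spte_write_protect()
could lose its return value.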