From patchwork Wed Sep 19 16:36:56 2012
X-Patchwork-Submitter: Avi Kivity
X-Patchwork-Id: 1478551
From: Avi Kivity
To: Marcelo Tosatti
Cc: kvm@vger.kernel.org, Xiao Guangrong
Subject: [PATCH v3 09/10] KVM: MMU: Avoid access/dirty update loop if all is well
Date: Wed, 19 Sep 2012 19:36:56 +0300
Message-Id: <1348072617-23575-10-git-send-email-avi@redhat.com>
In-Reply-To: <1348072617-23575-1-git-send-email-avi@redhat.com>
References: <1348072617-23575-1-git-send-email-avi@redhat.com>

Keep track of accessed/dirty bits; if they are all set, do not enter
the accessed/dirty update loop.

Reviewed-by: Xiao Guangrong
Signed-off-by: Avi Kivity
---
 arch/x86/kvm/paging_tmpl.h | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 95a64d1..810c1da 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -151,7 +151,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 	pt_element_t pte;
 	pt_element_t __user *uninitialized_var(ptep_user);
 	gfn_t table_gfn;
-	unsigned index, pt_access, pte_access;
+	unsigned index, pt_access, pte_access, accessed_dirty, shift;
 	gpa_t pte_gpa;
 	int offset;
 	const int write_fault = access & PFERR_WRITE_MASK;
@@ -180,6 +180,7 @@ retry_walk:
 	ASSERT((!is_long_mode(vcpu) && is_pae(vcpu)) ||
 	       (mmu->get_cr3(vcpu) & CR3_NONPAE_RESERVED_BITS) == 0);
 
+	accessed_dirty = PT_ACCESSED_MASK;
 	pt_access = pte_access = ACC_ALL;
 	++walker->level;
 
@@ -224,6 +225,7 @@ retry_walk:
 			goto error;
 		}
 
+		accessed_dirty &= pte;
 		pte_access = pt_access & gpte_access(vcpu, pte);
 
 		walker->ptes[walker->level - 1] = pte;
@@ -251,11 +253,23 @@ retry_walk:
 	if (!write_fault)
 		protect_clean_gpte(&pte_access, pte);
 
-	ret = FNAME(update_accessed_dirty_bits)(vcpu, mmu, walker, write_fault);
-	if (unlikely(ret < 0))
-		goto error;
-	else if (ret)
-		goto retry_walk;
+	/*
+	 * On a write fault, fold the dirty bit into accessed_dirty by shifting it one
+	 * place right.
+	 *
+	 * On a read fault, do nothing.
+	 */
+	shift = write_fault >> ilog2(PFERR_WRITE_MASK);
+	shift *= PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT;
+	accessed_dirty &= pte >> shift;
+
+	if (unlikely(!accessed_dirty)) {
+		ret = FNAME(update_accessed_dirty_bits)(vcpu, mmu, walker, write_fault);
+		if (unlikely(ret < 0))
+			goto error;
+		else if (ret)
+			goto retry_walk;
+	}
 
 	walker->pt_access = pt_access;
 	walker->pte_access = pte_access;
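
[Editor's illustration, not part of the patch.] For anyone puzzling over the
branchless fold above, here is a minimal user-space sketch of the idea. The
constants, the ILOG2() macro (built on the GCC __builtin_clz intrinsic) and
walk_bits_ok() are stand-ins invented for illustration; they only mimic the
kernel's PFERR_WRITE_MASK, PT_ACCESSED_SHIFT/PT_DIRTY_SHIFT and ilog2() on x86.

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the kernel constants the patch relies on (x86 values). */
#define PFERR_WRITE_MASK   (1u << 1)   /* page-fault error code: write access */
#define PT_ACCESSED_SHIFT  5           /* PTE accessed bit */
#define PT_DIRTY_SHIFT     6           /* PTE dirty bit */
#define PT_ACCESSED_MASK   (1u << PT_ACCESSED_SHIFT)

/* ilog2() of a power-of-two mask; the kernel has its own ilog2() macro. */
#define ILOG2(x) (31 - __builtin_clz(x))

/*
 * Returns nonzero when every level of the walk already had its accessed
 * bit set and, for a write fault, the leaf also had its dirty bit set,
 * i.e. when the accessed/dirty update loop would have nothing to do.
 */
static unsigned walk_bits_ok(const uint64_t *ptes, int levels,
			     unsigned write_fault)
{
	unsigned accessed_dirty = PT_ACCESSED_MASK;
	uint64_t pte = 0;

	for (int level = 0; level < levels; level++) {
		pte = ptes[level];
		accessed_dirty &= pte;	/* accumulate accessed bits */
	}

	/*
	 * Branchless fold of the dirty bit: write_fault is either 0 or
	 * PFERR_WRITE_MASK, so shift is either 0 or
	 * PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT (1 on x86).  Shifting the
	 * leaf pte right by one puts its dirty bit in the position the
	 * accessed mask tests.
	 */
	unsigned shift = (write_fault >> ILOG2(PFERR_WRITE_MASK))
			 * (PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT);
	accessed_dirty &= pte >> shift;

	return accessed_dirty;
}

int main(void)
{
	uint64_t all_set[] = { 0x60, 0x60 };	/* accessed+dirty everywhere */
	uint64_t clean[]   = { 0x60, 0x20 };	/* leaf accessed, not dirty */

	printf("read fault, clean leaf:  %s\n",
	       walk_bits_ok(clean, 2, 0) ? "skip update" : "update");
	printf("write fault, clean leaf: %s\n",
	       walk_bits_ok(clean, 2, PFERR_WRITE_MASK) ? "skip update" : "update");
	printf("write fault, all set:    %s\n",
	       walk_bits_ok(all_set, 2, PFERR_WRITE_MASK) ? "skip update" : "update");
	return 0;
}

Since write_fault is either 0 or PFERR_WRITE_MASK, the shift-and-multiply
selects between testing the accessed bit and testing the dirty bit without a
conditional, so the common all-bits-already-set case skips the
update_accessed_dirty_bits() call entirely.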