From patchwork Wed Jan 23 10:06:36 2013
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 2023401
Message-ID: <50FFB62C.4070808@linux.vnet.ibm.com>
Date: Wed, 23 Jan 2013 18:06:36 +0800
From: Xiao Guangrong
To: Xiao Guangrong
Cc: Marcelo Tosatti, Avi Kivity, Gleb Natapov, LKML, KVM
Subject: [PATCH v2 05/12] KVM: MMU: introduce vcpu_adjust_access
In-Reply-To: <50FFB5A1.5090708@linux.vnet.ibm.com>
References: <50FFB5A1.5090708@linux.vnet.ibm.com>
X-Mailing-List: kvm@vger.kernel.org

Introduce vcpu_adjust_access() to split the pte_access adjustment code
out of the large set_spte() function.

Signed-off-by: Xiao Guangrong
---
 arch/x86/kvm/mmu.c | 63 +++++++++++++++++++++++++++++++++-------------------
 1 files changed, 40 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index af8bcb2..43b7e0c 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2324,25 +2324,18 @@ static int mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
 	return 0;
 }
 
-static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
-		    unsigned pte_access, int level,
-		    gfn_t gfn, pfn_t pfn, bool speculative,
-		    bool can_unsync, bool host_writable)
+/*
+ * Return -1 if a race condition is detected, 1 if @gfn needs to be
+ * write-protected, otherwise 0 is returned.
+ */
+static int vcpu_adjust_access(struct kvm_vcpu *vcpu, u64 *sptep,
+			      unsigned *pte_access, int level, gfn_t gfn,
+			      bool can_unsync, bool host_writable)
 {
-	u64 spte;
-	int ret = 0;
-
-	if (set_mmio_spte(sptep, gfn, pfn, pte_access))
-		return 0;
+	if (!host_writable)
+		*pte_access &= ~ACC_WRITE_MASK;
 
-	spte = PT_PRESENT_MASK;
-
-	if (host_writable)
-		spte |= SPTE_HOST_WRITEABLE;
-	else
-		pte_access &= ~ACC_WRITE_MASK;
-
-	if (pte_access & ACC_WRITE_MASK) {
+	if (*pte_access & ACC_WRITE_MASK) {
 		/*
 		 * Other vcpu creates new sp in the window between
 		 * mapping_level() and acquiring mmu-lock. We can
@@ -2351,7 +2344,7 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		 */
 		if (level > PT_PAGE_TABLE_LEVEL &&
 		    has_wrprotected_page(vcpu->kvm, gfn, level))
-			goto done;
+			return -1;
 
 		/*
 		 * Optimization: for pte sync, if spte was writable the hash
@@ -2360,17 +2353,41 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		 * Same reasoning can be applied to dirty page accounting.
 		 */
 		if (!can_unsync && is_writable_pte(*sptep))
-			goto out_access_adjust;
+			return 0;
 
 		if (mmu_need_write_protect(vcpu, gfn, can_unsync)) {
 			pgprintk("%s: found shadow page for %llx, marking ro\n",
 				 __func__, gfn);
-			ret = 1;
-			pte_access &= ~ACC_WRITE_MASK;
+
+			*pte_access &= ~ACC_WRITE_MASK;
+			return 1;
 		}
 	}
 
-out_access_adjust:
+	return 0;
+}
+
+static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
+		    unsigned pte_access, int level,
+		    gfn_t gfn, pfn_t pfn, bool speculative,
+		    bool can_unsync, bool host_writable)
+{
+	u64 spte;
+	int ret;
+
+	if (set_mmio_spte(sptep, gfn, pfn, pte_access))
+		return 0;
+
+	ret = vcpu_adjust_access(vcpu, sptep, &pte_access, level, gfn,
+				 can_unsync, host_writable);
+	if (ret < 0)
+		return 0;
+
+	spte = PT_PRESENT_MASK;
+
+	if (host_writable)
+		spte |= SPTE_HOST_WRITEABLE;
+
 	if (!speculative)
 		spte |= shadow_accessed_mask;
@@ -2399,7 +2416,7 @@ out_access_adjust:
 	if (mmu_spte_update(sptep, spte))
 		kvm_flush_remote_tlbs(vcpu->kvm);
-done:
+
 	return ret;
 }