From patchwork Thu May 9 00:53:14 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Nakajima, Jun"
X-Patchwork-Id: 2542381
From: Jun Nakajima
To: kvm@vger.kernel.org
Subject: [PATCH v3 02/13] nEPT: Move gpte_access() and prefetch_invalid_gpte() to paging_tmpl.h
Date: Wed, 8 May 2013 17:53:14 -0700
Message-Id: <1368060805-2790-2-git-send-email-jun.nakajima@intel.com>
X-Mailer: git-send-email 1.8.2.1.610.g562af5b
In-Reply-To: <1368060805-2790-1-git-send-email-jun.nakajima@intel.com>
References: <1368060805-2790-1-git-send-email-jun.nakajima@intel.com>

In preparation, move gpte_access() and prefetch_invalid_gpte() from mmu.c to
paging_tmpl.h. No functional change is intended; the helpers are only renamed
to their FNAME() forms as required by the new location.
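As context for the FNAME(prefetch_invalid_gpte) / FNAME(gpte_access) renames in
the diff below: paging_tmpl.h is included by mmu.c once per guest paging mode,
and the FNAME() macro pastes the mode into each function name so every mode
gets its own copy of the helpers. The following is a minimal, standalone sketch
of that pattern, not code from this patch; the PTTYPE value, the toy function
body, and the bit mask are illustrative assumptions only:

#include <stdio.h>

/* paging_tmpl.h-style name mangling: PTTYPE selects the guest paging mode. */
#define PTTYPE 64
#if PTTYPE == 64
#define FNAME(name) paging##64_##name
#elif PTTYPE == 32
#define FNAME(name) paging##32_##name
#endif

/* Toy stand-in for the moved helper; the real gpte_access() derives ACC_*
 * permission bits from the guest PTE's writable/user/NX bits. */
static unsigned FNAME(gpte_access)(unsigned long long gpte)
{
	return (unsigned)(gpte & 0x7);	/* illustrative only */
}

int main(void)
{
	/* FNAME(gpte_access) above expanded to paging64_gpte_access. */
	printf("access = %u\n", paging64_gpte_access(0x5ULL));
	return 0;
}

In the kernel itself mmu.c includes paging_tmpl.h several times with different
PTTYPE values, which is why the helpers must take FNAME() names once they live
in that header.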
Signed-off-by: Nadav Har'El
Signed-off-by: Jun Nakajima
Signed-off-by: Xinhao Xu
---
 arch/x86/kvm/mmu.c         | 30 ------------------------------
 arch/x86/kvm/paging_tmpl.h | 40 +++++++++++++++++++++++++++++++++++-----
 2 files changed, 35 insertions(+), 35 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 004cc87..117233f 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2488,26 +2488,6 @@ static pfn_t pte_prefetch_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
 	return gfn_to_pfn_memslot_atomic(slot, gfn);
 }
 
-static bool prefetch_invalid_gpte(struct kvm_vcpu *vcpu,
-				  struct kvm_mmu_page *sp, u64 *spte,
-				  u64 gpte)
-{
-	if (is_rsvd_bits_set(&vcpu->arch.mmu, gpte, PT_PAGE_TABLE_LEVEL))
-		goto no_present;
-
-	if (!is_present_gpte(gpte))
-		goto no_present;
-
-	if (!(gpte & PT_ACCESSED_MASK))
-		goto no_present;
-
-	return false;
-
-no_present:
-	drop_spte(vcpu->kvm, spte);
-	return true;
-}
-
 static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
 				    struct kvm_mmu_page *sp,
 				    u64 *start, u64 *end)
@@ -3408,16 +3388,6 @@ static bool sync_mmio_spte(u64 *sptep, gfn_t gfn, unsigned access,
 	return false;
 }
 
-static inline unsigned gpte_access(struct kvm_vcpu *vcpu, u64 gpte)
-{
-	unsigned access;
-
-	access = (gpte & (PT_WRITABLE_MASK | PT_USER_MASK)) | ACC_EXEC_MASK;
-	access &= ~(gpte >> PT64_NX_SHIFT);
-
-	return access;
-}
-
 static inline bool is_last_gpte(struct kvm_mmu *mmu, unsigned level, unsigned gpte)
 {
 	unsigned index;
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index da20860..df34d4a 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -103,6 +103,36 @@ static int FNAME(cmpxchg_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 	return (ret != orig_pte);
 }
 
+static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
+				  struct kvm_mmu_page *sp, u64 *spte,
+				  u64 gpte)
+{
+	if (is_rsvd_bits_set(&vcpu->arch.mmu, gpte, PT_PAGE_TABLE_LEVEL))
+		goto no_present;
+
+	if (!is_present_gpte(gpte))
+		goto no_present;
+
+	if (!(gpte & PT_ACCESSED_MASK))
+		goto no_present;
+
+	return false;
+
+no_present:
+	drop_spte(vcpu->kvm, spte);
+	return true;
+}
+
+static inline unsigned FNAME(gpte_access)(struct kvm_vcpu *vcpu, u64 gpte)
+{
+	unsigned access;
+
+	access = (gpte & (PT_WRITABLE_MASK | PT_USER_MASK)) | ACC_EXEC_MASK;
+	access &= ~(gpte >> PT64_NX_SHIFT);
+
+	return access;
+}
+
 static int FNAME(update_accessed_dirty_bits)(struct kvm_vcpu *vcpu,
 					     struct kvm_mmu *mmu,
 					     struct guest_walker *walker,
@@ -225,7 +255,7 @@ retry_walk:
 		}
 
 		accessed_dirty &= pte;
-		pte_access = pt_access & gpte_access(vcpu, pte);
+		pte_access = pt_access & FNAME(gpte_access)(vcpu, pte);
 
 		walker->ptes[walker->level - 1] = pte;
 	} while (!is_last_gpte(mmu, walker->level, pte));
@@ -309,13 +339,13 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	gfn_t gfn;
 	pfn_t pfn;
 
-	if (prefetch_invalid_gpte(vcpu, sp, spte, gpte))
+	if (FNAME(prefetch_invalid_gpte)(vcpu, sp, spte, gpte))
 		return false;
 
 	pgprintk("%s: gpte %llx spte %p\n", __func__, (u64)gpte, spte);
 
 	gfn = gpte_to_gfn(gpte);
-	pte_access = sp->role.access & gpte_access(vcpu, gpte);
+	pte_access = sp->role.access & FNAME(gpte_access)(vcpu, gpte);
 	protect_clean_gpte(&pte_access, gpte);
 	pfn = pte_prefetch_gfn_to_pfn(vcpu, gfn,
 			no_dirty_log && (pte_access & ACC_WRITE_MASK));
@@ -782,14 +812,14 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 					  sizeof(pt_element_t)))
 			return -EINVAL;
 
-		if (prefetch_invalid_gpte(vcpu, sp, &sp->spt[i], gpte)) {
+		if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) {
 			vcpu->kvm->tlbs_dirty++;
 			continue;
 		}
 
 		gfn = gpte_to_gfn(gpte);
 		pte_access = sp->role.access;
-		pte_access &= gpte_access(vcpu, gpte);
+		pte_access &= FNAME(gpte_access)(vcpu, gpte);
 		protect_clean_gpte(&pte_access, gpte);
 
 		if (sync_mmio_spte(&sp->spt[i], gfn, pte_access, &nr_present))