From patchwork Wed Jan 23 10:07:20 2013
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 2023461
Message-ID: <50FFB658.6040205@linux.vnet.ibm.com>
Date: Wed, 23 Jan 2013 18:07:20 +0800
From: Xiao Guangrong
To: Xiao Guangrong
CC: Marcelo Tosatti, Avi Kivity, Gleb Natapov, LKML, KVM
Subject: [PATCH v2 06/12] KVM: MMU: introduce a static table to map guest
 access to spte access
References: <50FFB5A1.5090708@linux.vnet.ibm.com>
In-Reply-To: <50FFB5A1.5090708@linux.vnet.ibm.com>
X-Mailing-List: kvm@vger.kernel.org

pte_access can take only ACC_ALL + 1 values, so the SPTE permission
bits for every combination can be precomputed once into a static
table. This makes set_spte() cleaner and replaces a chain of
conditional branches on the hot path with a single table lookup,
avoiding branch mispredictions.

Signed-off-by: Xiao Guangrong
---
 arch/x86/kvm/mmu.c | 37 ++++++++++++++++++++++++++-----------
 1 files changed, 26 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 43b7e0c..a8a9c0e 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -235,6 +235,29 @@ static inline u64 rsvd_bits(int s, int e)
 	return ((1ULL << (e - s + 1)) - 1) << s;
 }
 
+static u64 gaccess_to_spte_access[ACC_ALL + 1];
+static void build_access_table(void)
+{
+	int access;
+
+	for (access = 0; access < ACC_ALL + 1; access++) {
+		u64 spte_access = 0;
+
+		if (access & ACC_EXEC_MASK)
+			spte_access |= shadow_x_mask;
+		else
+			spte_access |= shadow_nx_mask;
+
+		if (access & ACC_USER_MASK)
+			spte_access |= shadow_user_mask;
+
+		if (access & ACC_WRITE_MASK)
+			spte_access |= PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE;
+
+		gaccess_to_spte_access[access] = spte_access;
+	}
+}
+
 void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
 			   u64 dirty_mask, u64 nx_mask, u64 x_mask)
 {
@@ -243,6 +266,7 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
 	shadow_dirty_mask = dirty_mask;
 	shadow_nx_mask = nx_mask;
 	shadow_x_mask = x_mask;
+	build_access_table();
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_set_mask_ptes);
 
@@ -2391,20 +2415,11 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	if (!speculative)
 		spte |= shadow_accessed_mask;
 
-	if (pte_access & ACC_EXEC_MASK)
-		spte |= shadow_x_mask;
-	else
-		spte |= shadow_nx_mask;
-
-	if (pte_access & ACC_USER_MASK)
-		spte |= shadow_user_mask;
-
-	if (pte_access & ACC_WRITE_MASK)
-		spte |= PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE;
-
 	if (level > PT_PAGE_TABLE_LEVEL)
 		spte |= PT_PAGE_SIZE_MASK;
 
+	spte |= gaccess_to_spte_access[pte_access];
+
 	if (tdp_enabled)
 		spte |= kvm_x86_ops->get_mt_mask(vcpu, gfn,
 			kvm_is_mmio_pfn(pfn));
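
[Editorial note] For readers outside the KVM tree, the trick is small
enough to try in isolation: pte_access can take only ACC_ALL + 1 = 8
values, so the permission bits can be precomputed once and the
conditional chain on the fault path replaced by a single indexed load.
The user-space sketch below reproduces the table-building logic; the
shadow_*, PT_WRITABLE_MASK and SPTE_MMU_WRITEABLE bit values here are
illustrative stand-ins (in KVM they are configured at run time via
kvm_mmu_set_mask_ptes() and differ between EPT and shadow paging). It
checks the table against the branchy computation the patch removes
from set_spte().

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;

/* Guest access bits, in the encoding the KVM MMU uses. */
#define ACC_EXEC_MASK	1
#define ACC_WRITE_MASK	2
#define ACC_USER_MASK	4
#define ACC_ALL		(ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK)

/* Illustrative stand-ins for the SPTE permission bits. */
#define PT_WRITABLE_MASK	(1ULL << 1)
#define SPTE_MMU_WRITEABLE	(1ULL << 11)
static const u64 shadow_x_mask    = 1ULL << 2;
static const u64 shadow_nx_mask   = 1ULL << 63;
static const u64 shadow_user_mask = 1ULL << 3;

static u64 gaccess_to_spte_access[ACC_ALL + 1];

/* Precompute, once, the SPTE bits for each of the 8 possible
 * guest-access combinations, mirroring the patch. */
static void build_access_table(void)
{
	int access;

	for (access = 0; access < ACC_ALL + 1; access++) {
		u64 spte_access = 0;

		if (access & ACC_EXEC_MASK)
			spte_access |= shadow_x_mask;
		else
			spte_access |= shadow_nx_mask;
		if (access & ACC_USER_MASK)
			spte_access |= shadow_user_mask;
		if (access & ACC_WRITE_MASK)
			spte_access |= PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE;

		gaccess_to_spte_access[access] = spte_access;
	}
}

/* The branchy computation the patch deletes from set_spte(), kept
 * here only as a reference to check the table against. */
static u64 access_bits_branchy(unsigned int access)
{
	u64 spte = 0;

	if (access & ACC_EXEC_MASK)
		spte |= shadow_x_mask;
	else
		spte |= shadow_nx_mask;
	if (access & ACC_USER_MASK)
		spte |= shadow_user_mask;
	if (access & ACC_WRITE_MASK)
		spte |= PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE;

	return spte;
}

int main(void)
{
	unsigned int access;

	build_access_table();

	/* The hot path becomes one load: spte |= table[pte_access]. */
	for (access = 0; access <= ACC_ALL; access++) {
		assert(gaccess_to_spte_access[access] ==
		       access_bits_branchy(access));
		printf("access %u -> spte bits %#llx\n", access,
		       (unsigned long long)gaccess_to_spte_access[access]);
	}
	return 0;
}

The table itself costs only (ACC_ALL + 1) * sizeof(u64) = 64 bytes,
and rebuilding it from kvm_mmu_set_mask_ptes() keeps it in sync
whenever the mask configuration changes.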