From patchwork Wed Mar 3 19:12:16 2010
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 83421
From: Joerg Roedel
To: Avi Kivity, Marcelo Tosatti
CC: Alexander Graf, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Joerg Roedel
Subject: [PATCH 13/18] KVM: MMU: Introduce Nested MMU context
Date: Wed, 3 Mar 2010 20:12:16 +0100
Message-ID: <1267643541-451-14-git-send-email-joerg.roedel@amd.com>
In-Reply-To: <1267643541-451-1-git-send-email-joerg.roedel@amd.com>
References: <1267643541-451-1-git-send-email-joerg.roedel@amd.com>
X-Mailer: git-send-email 1.7.0
X-Mailing-List: kvm@vger.kernel.org

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 20dd1ce..66a698e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -264,6 +264,13 @@ struct kvm_mmu {
 	u64 *pae_root;
 	u64 rsvd_bits_mask[2][4];
+
+	/*
+	 * If true the mmu runs in two-level mode.
+	 * vcpu->arch.nested_mmu needs to contain meaningful
+	 * values then.
+	 */
+	bool nested;
 };
 
 struct kvm_vcpu_arch {
@@ -296,6 +303,7 @@ struct kvm_vcpu_arch {
 	struct kvm_mmu mmu;
 
+	/* This will hold the mmu context of the second level guest */
 	struct kvm_mmu nested_mmu;
 
 	/* only needed in kvm_pv_mmu_op() path, but it's hot so
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c831955..ccaf6b1 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2154,6 +2154,18 @@ static gpa_t translate_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 *error)
 	return gpa;
 }
 
+static gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 *error)
+{
+	u32 access;
+
+	BUG_ON(!vcpu->arch.mmu.nested);
+
+	/* NPT walks are treated as user writes */
+	access = PFERR_WRITE_MASK | PFERR_USER_MASK;
+
+	return vcpu->arch.nested_mmu.gva_to_gpa(vcpu, gpa, access, error);
+}
+
 static gpa_t nonpaging_gva_to_gpa(struct kvm_vcpu *vcpu, gva_t vaddr,
 				  u32 access, u32 *error)
 {
@@ -2476,11 +2488,45 @@ static int init_kvm_softmmu(struct kvm_vcpu *vcpu)
 	return r;
 }
 
+static int init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
+{
+	struct kvm_mmu *g_context = &vcpu->arch.nested_mmu;
+	struct kvm_mmu *h_context = &vcpu->arch.mmu;
+
+	g_context->get_cr3 = get_cr3;
+	g_context->translate_gpa = translate_nested_gpa;
+	g_context->inject_page_fault = kvm_inject_page_fault;
+
+	/*
+	 * Note that arch.mmu.gva_to_gpa translates l2_gva to l1_gpa. The
+	 * translation of l2_gpa to l1_gpa addresses is done using the
+	 * arch.nested_mmu.gva_to_gpa function. Basically the gva_to_gpa
+	 * functions between mmu and nested_mmu are swapped.
+	 */
+	if (!is_paging(vcpu)) {
+		g_context->root_level = 0;
+		h_context->gva_to_gpa = nonpaging_gva_to_gpa_nested;
+	} else if (is_long_mode(vcpu)) {
+		g_context->root_level = PT64_ROOT_LEVEL;
+		h_context->gva_to_gpa = paging64_gva_to_gpa_nested;
+	} else if (is_pae(vcpu)) {
+		g_context->root_level = PT32E_ROOT_LEVEL;
+		h_context->gva_to_gpa = paging64_gva_to_gpa_nested;
+	} else {
+		g_context->root_level = PT32_ROOT_LEVEL;
+		h_context->gva_to_gpa = paging32_gva_to_gpa_nested;
+	}
+
+	return 0;
+}
+
 static int init_kvm_mmu(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.update_pte.pfn = bad_pfn;
 
-	if (tdp_enabled)
+	if (vcpu->arch.mmu.nested)
+		return init_kvm_nested_mmu(vcpu);
+	else if (tdp_enabled)
 		return init_kvm_tdp_mmu(vcpu);
 	else
 		return init_kvm_softmmu(vcpu);
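The core idea behind translate_nested_gpa() above is that, with nested paging, an L2 guest-physical address is still a virtual address from the host's point of view: it must be pushed through the nested walker before it names L1 memory. The following toy sketch models that one step in isolation — every type and function name here (toy_mmu, toy_walk, and a translation that is just a fixed offset) is hypothetical and stands in for the real KVM structures, which it does not reproduce:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t gva_t;
typedef uint64_t gpa_t;

/* Hypothetical stand-in for struct kvm_mmu: a per-level translator. */
struct toy_mmu {
	gpa_t (*gva_to_gpa)(struct toy_mmu *mmu, gva_t addr);
	gpa_t base;   /* toy page tables: translation is a constant offset */
	int nested;   /* mirrors the new kvm_mmu.nested flag */
};

/* Toy walker: "walk the page tables" by adding the base offset. */
static gpa_t toy_walk(struct toy_mmu *mmu, gva_t addr)
{
	return addr + mmu->base;
}

/*
 * Mirrors the shape of translate_nested_gpa(): an l2_gpa is only
 * meaningful to the host after the nested MMU has translated it
 * into an l1_gpa. Refuse to run outside two-level mode, like the
 * BUG_ON(!vcpu->arch.mmu.nested) in the patch.
 */
static gpa_t toy_translate_nested_gpa(struct toy_mmu *nested_mmu, gpa_t l2_gpa)
{
	assert(nested_mmu->nested);
	return nested_mmu->gva_to_gpa(nested_mmu, l2_gpa);
}
```

Usage: with a nested MMU whose toy tables offset everything by 0x100000, an l2_gpa of 0x2000 resolves to the l1_gpa 0x102000 — one extra translation stage inserted between the guest's view and host memory, which is exactly what the patch wires up.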
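The paging-mode dispatch in init_kvm_nested_mmu() can be condensed into a small pure function for illustration. This is a standalone sketch, not KVM code: nested_root_level() is a hypothetical helper, the three flags stand in for is_paging()/is_long_mode()/is_pae(), and the root-level constants are redefined locally with the values they had in arch/x86/kvm/mmu.h of this era:

```c
#include <assert.h>

#define PT64_ROOT_LEVEL  4  /* 4-level long-mode paging */
#define PT32E_ROOT_LEVEL 3  /* 3-level PAE paging */
#define PT32_ROOT_LEVEL  2  /* 2-level legacy 32-bit paging */

/*
 * Same priority order as the if/else chain in the patch:
 * nonpaging first, then long mode, then PAE, then legacy 32-bit.
 */
static int nested_root_level(int paging, int long_mode, int pae)
{
	if (!paging)
		return 0;             /* no guest page-table root to walk */
	if (long_mode)
		return PT64_ROOT_LEVEL;
	if (pae)
		return PT32E_ROOT_LEVEL;
	return PT32_ROOT_LEVEL;
}
```

Note that in the patch the PAE case still installs paging64_gva_to_gpa_nested rather than a separate PAE walker: PAE page tables use the same 64-bit entry format as long mode, so the 64-bit walker serves both, distinguished only by the root level.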