From patchwork Fri Jun 19 13:16:27 2009
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 31343
From: Joerg Roedel <joerg.roedel@amd.com>
To: Avi Kivity, Marcelo Tosatti
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Joerg Roedel
Subject: [PATCH 6/8] kvm/mmu: add support for another level to page walker
Date: Fri, 19 Jun 2009 15:16:27 +0200
Message-ID: <1245417389-5527-7-git-send-email-joerg.roedel@amd.com>
In-Reply-To: <1245417389-5527-1-git-send-email-joerg.roedel@amd.com>
References: <1245417389-5527-1-git-send-email-joerg.roedel@amd.com>
X-Mailer: git-send-email 1.6.3.1
X-Mailing-List: kvm@vger.kernel.org

The page walker may also be used with nested paging when accessing MMIO
areas. Make it support the additional page level as well.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/mmu.c         |    6 ++++++
 arch/x86/kvm/paging_tmpl.h |   16 ++++++++++++++++
 2 files changed, 22 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ef2396d..fc0e2fc 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -117,6 +117,11 @@ module_param(oos_shadow, bool, 0644);
 #define PT64_DIR_BASE_ADDR_MASK \
 	(PT64_BASE_ADDR_MASK & ~((1ULL << (PAGE_SHIFT + PT64_LEVEL_BITS)) - 1))
 
+#define PT64_PDPE_BASE_ADDR_MASK \
+	(PT64_BASE_ADDR_MASK & ~(1ULL << (PAGE_SHIFT + (2 * PT64_LEVEL_BITS))))
+#define PT64_PDPE_OFFSET_MASK \
+	(PT64_BASE_ADDR_MASK & (1ULL << (PAGE_SHIFT + (2 * PT64_LEVEL_BITS))))
+
 #define PT32_BASE_ADDR_MASK PAGE_MASK
 #define PT32_DIR_BASE_ADDR_MASK \
 	(PAGE_MASK & ~((1ULL << (PAGE_SHIFT + PT32_LEVEL_BITS)) - 1))
@@ -130,6 +135,7 @@ module_param(oos_shadow, bool, 0644);
 #define PFERR_RSVD_MASK (1U << 3)
 #define PFERR_FETCH_MASK (1U << 4)
 
+#define PT_PDPE_LEVEL 3
 #define PT_DIRECTORY_LEVEL 2
 #define PT_PAGE_TABLE_LEVEL 1
 
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 8fbf4e7..54c77be 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -55,6 +55,7 @@
 #define gpte_to_gfn FNAME(gpte_to_gfn)
 #define gpte_to_gfn_pde FNAME(gpte_to_gfn_pde)
+#define gpte_to_gfn_pdpe FNAME(gpte_to_gfn_pdpe)
 
 /*
  * The guest_walker structure emulates the behavior of the hardware page
@@ -81,6 +82,11 @@ static gfn_t gpte_to_gfn_pde(pt_element_t gpte)
 	return (gpte & PT_DIR_BASE_ADDR_MASK) >> PAGE_SHIFT;
 }
 
+static gfn_t gpte_to_gfn_pdpe(pt_element_t gpte)
+{
+	return (gpte & PT64_PDPE_BASE_ADDR_MASK) >> PAGE_SHIFT;
+}
+
 static bool FNAME(cmpxchg_gpte)(struct kvm *kvm,
 				gfn_t table_gfn, unsigned index,
 				pt_element_t orig_pte, pt_element_t new_pte)
@@ -201,6 +207,15 @@ walk:
 			break;
 		}
 
+		if (walker->level == PT_PDPE_LEVEL &&
+		    (pte & PT_PAGE_SIZE_MASK) &&
+		    is_long_mode(vcpu)) {
+			walker->gfn = gpte_to_gfn_pdpe(pte);
+			walker->gfn += (addr & PT64_PDPE_OFFSET_MASK)
+					>> PAGE_SHIFT;
+			break;
+		}
+
 		pt_access = pte_access;
 		--walker->level;
 	}
@@ -609,4 +624,5 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 #undef PT_MAX_FULL_LEVELS
 #undef gpte_to_gfn
 #undef gpte_to_gfn_pde
+#undef gpte_to_gfn_pdpe
 #undef CMPXCHG
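
For readers less familiar with the guest page walk, the sketch below
illustrates the arithmetic behind the new PT_PDPE_LEVEL case: on x86, a
PDPE with the PS bit set maps a 1 GB region (physical-address bits 51:30
come from the entry), so the resulting gfn is the frame taken from the
entry plus the 4 KB page index of the address within that region. This is
a standalone user-space sketch under assumed constants; all SKETCH_*
names and helper functions are made up for illustration and are not the
kernel's macros or exact masks.

/*
 * Standalone sketch (not kernel code): deriving a guest frame number
 * when the walk stops at the PDPE level because the entry maps a
 * 1 GB page.  Names and constants are illustrative assumptions.
 */
#include <stdint.h>
#include <stdio.h>

#define SKETCH_PAGE_SHIFT	12				/* 4 KB base pages */
#define SKETCH_PDPE_SHIFT	(SKETCH_PAGE_SHIFT + 18)	/* 2 x 9 level bits -> bit 30 */
#define SKETCH_ADDR_BITS	52

/* Physical-address field of a PDPE that maps a 1 GB page: bits 51:30. */
static uint64_t pdpe_1g_base(uint64_t pdpe)
{
	uint64_t mask = ((1ULL << SKETCH_ADDR_BITS) - 1) &
			~((1ULL << SKETCH_PDPE_SHIFT) - 1);

	return pdpe & mask;
}

/* gfn = frame of the 1 GB region plus the 4 KB page index inside it. */
static uint64_t gfn_for_1g_mapping(uint64_t pdpe, uint64_t addr)
{
	uint64_t offset = addr & ((1ULL << SKETCH_PDPE_SHIFT) - 1);

	return (pdpe_1g_base(pdpe) >> SKETCH_PAGE_SHIFT) +
	       (offset >> SKETCH_PAGE_SHIFT);
}

int main(void)
{
	/* PDPE pointing at guest physical 0x80000000, present/rw, PS=1. */
	uint64_t pdpe = 0x80000000ULL | 0x83;
	uint64_t addr = 0x12345678ULL;

	/* Prints gfn = 0x92345: 0x80000 (region frame) + 0x12345 (index). */
	printf("gfn = 0x%llx\n",
	       (unsigned long long)gfn_for_1g_mapping(pdpe, addr));
	return 0;
}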