From patchwork Fri Jun 19 13:16:25 2009
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 31345
From: Joerg Roedel
To: Avi Kivity, Marcelo Tosatti
CC: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Joerg Roedel
Subject: [PATCH 4/8] kvm/mmu: make rmap code aware of mapping levels
Date: Fri, 19 Jun 2009 15:16:25 +0200
Message-ID: <1245417389-5527-5-git-send-email-joerg.roedel@amd.com>
X-Mailer: git-send-email 1.6.3.1
In-Reply-To: <1245417389-5527-1-git-send-email-joerg.roedel@amd.com>
References: <1245417389-5527-1-git-send-email-joerg.roedel@amd.com>
X-Mailing-List: kvm@vger.kernel.org

This patch removes the largepage parameter from the rmap_add function.
Together with rmap_remove, this function now uses the role.level field to
determine whether the page is a huge page.
Signed-off-by: Joerg Roedel
---
 arch/x86/kvm/mmu.c |   53 +++++++++++++++++++++++++++------------------------
 1 files changed, 28 insertions(+), 25 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3fa6009..0ef947d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -511,19 +511,19 @@ static int mapping_level(struct kvm_vcpu *vcpu, gfn_t large_gfn)
  * Note: gfn must be unaliased before this function get called
  */
-static unsigned long *gfn_to_rmap(struct kvm *kvm, gfn_t gfn, int lpage)
+static unsigned long *gfn_to_rmap(struct kvm *kvm, gfn_t gfn, int level)
 {
 	struct kvm_memory_slot *slot;
 	unsigned long idx;
 
 	slot = gfn_to_memslot(kvm, gfn);
-	if (!lpage)
+	if (likely(level == PT_PAGE_TABLE_LEVEL))
 		return &slot->rmap[gfn - slot->base_gfn];
 
-	idx = (gfn / KVM_PAGES_PER_HPAGE(PT_DIRECTORY_LEVEL)) -
-	      (slot->base_gfn / KVM_PAGES_PER_HPAGE(PT_DIRECTORY_LEVEL));
+	idx = (gfn / KVM_PAGES_PER_HPAGE(level)) -
+	      (slot->base_gfn / KVM_PAGES_PER_HPAGE(level));
 
-	return &slot->lpage_info[0][idx].rmap_pde;
+	return &slot->lpage_info[level - 2][idx].rmap_pde;
 }
 
 /*
@@ -535,7 +535,7 @@ static unsigned long *gfn_to_rmap(struct kvm *kvm, gfn_t gfn, int lpage)
  * If rmapp bit zero is one, (then rmap & ~1) points to a struct kvm_rmap_desc
  * containing more mappings.
  */
-static void rmap_add(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn, int lpage)
+static void rmap_add(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
 {
 	struct kvm_mmu_page *sp;
 	struct kvm_rmap_desc *desc;
@@ -547,7 +547,7 @@ static void rmap_add(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn, int lpage)
 	gfn = unalias_gfn(vcpu->kvm, gfn);
 	sp = page_header(__pa(spte));
 	sp->gfns[spte - sp->spt] = gfn;
-	rmapp = gfn_to_rmap(vcpu->kvm, gfn, lpage);
+	rmapp = gfn_to_rmap(vcpu->kvm, gfn, sp->role.level);
 	if (!*rmapp) {
 		rmap_printk("rmap_add: %p %llx 0->1\n", spte, *spte);
 		*rmapp = (unsigned long)spte;
@@ -614,7 +614,7 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
 		kvm_release_pfn_dirty(pfn);
 	else
 		kvm_release_pfn_clean(pfn);
-	rmapp = gfn_to_rmap(kvm, sp->gfns[spte - sp->spt], is_large_pte(*spte));
+	rmapp = gfn_to_rmap(kvm, sp->gfns[spte - sp->spt], sp->role.level);
 	if (!*rmapp) {
 		printk(KERN_ERR "rmap_remove: %p %llx 0->BUG\n", spte, *spte);
 		BUG();
@@ -677,10 +677,10 @@ static int rmap_write_protect(struct kvm *kvm, u64 gfn)
 {
 	unsigned long *rmapp;
 	u64 *spte;
-	int write_protected = 0;
+	int i, write_protected = 0;
 
 	gfn = unalias_gfn(kvm, gfn);
-	rmapp = gfn_to_rmap(kvm, gfn, 0);
+	rmapp = gfn_to_rmap(kvm, gfn, PT_PAGE_TABLE_LEVEL);
 
 	spte = rmap_next(kvm, rmapp, NULL);
 	while (spte) {
@@ -702,21 +702,24 @@ static int rmap_write_protect(struct kvm *kvm, u64 gfn)
 	}
 
 	/* check for huge page mappings */
-	rmapp = gfn_to_rmap(kvm, gfn, 1);
-	spte = rmap_next(kvm, rmapp, NULL);
-	while (spte) {
-		BUG_ON(!spte);
-		BUG_ON(!(*spte & PT_PRESENT_MASK));
-		BUG_ON((*spte & (PT_PAGE_SIZE_MASK|PT_PRESENT_MASK)) != (PT_PAGE_SIZE_MASK|PT_PRESENT_MASK));
-		pgprintk("rmap_write_protect(large): spte %p %llx %lld\n", spte, *spte, gfn);
-		if (is_writeble_pte(*spte)) {
-			rmap_remove(kvm, spte);
-			--kvm->stat.lpages;
-			__set_spte(spte, shadow_trap_nonpresent_pte);
-			spte = NULL;
-			write_protected = 1;
+	for (i = PT_DIRECTORY_LEVEL;
+	     i < PT_PAGE_TABLE_LEVEL + KVM_NR_PAGE_SIZES; ++i) {
+		rmapp = gfn_to_rmap(kvm, gfn, i);
+		spte = rmap_next(kvm, rmapp, NULL);
+		while (spte) {
+			BUG_ON(!spte);
+			BUG_ON(!(*spte & PT_PRESENT_MASK));
+			BUG_ON((*spte & (PT_PAGE_SIZE_MASK|PT_PRESENT_MASK)) != (PT_PAGE_SIZE_MASK|PT_PRESENT_MASK));
+			pgprintk("rmap_write_protect(large): spte %p %llx %lld\n", spte, *spte, gfn);
+			if (is_writeble_pte(*spte)) {
+				rmap_remove(kvm, spte);
+				--kvm->stat.lpages;
+				__set_spte(spte, shadow_trap_nonpresent_pte);
+				spte = NULL;
+				write_protected = 1;
+			}
+			spte = rmap_next(kvm, rmapp, spte);
 		}
-		spte = rmap_next(kvm, rmapp, spte);
 	}
 
 	return write_protected;
@@ -1823,7 +1826,7 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	page_header_update_slot(vcpu->kvm, sptep, gfn);
 	if (!was_rmapped) {
-		rmap_add(vcpu, sptep, gfn, largepage);
+		rmap_add(vcpu, sptep, gfn);
 		if (!is_rmap_spte(*sptep))
 			kvm_release_pfn_clean(pfn);
 	} else {