From patchwork Tue Sep 23 19:34:54 2014
X-Patchwork-Submitter: Andres Lagar-Cavilla
X-Patchwork-Id: 4959331
From: Andres Lagar-Cavilla
To: Paolo Bonzini, Radim Krcmar, Rik van Riel, Gleb Natapov, Steven Rostedt, kvm@vger.kernel.org
Cc: Andres Lagar-Cavilla
Subject: [PATCH] kvm/x86/mmu: Pass gfn and level to rmapp callback.
Date: Tue, 23 Sep 2014 12:34:54 -0700
Message-Id: <1411500894-30542-1-git-send-email-andreslc@google.com>
X-Mailer: git-send-email 2.1.0.rc2.206.gedb03e5
X-Mailing-List: kvm@vger.kernel.org

Callbacks don't have to do extra computation to learn what the caller
(kvm_handle_hva_range()) knows very well.
Useful for debugging/tracing/printk/future.

Signed-off-by: Andres Lagar-Cavilla
---
 arch/x86/kvm/mmu.c         | 38 ++++++++++++++++++++++----------------
 include/trace/events/kvm.h | 10 ++++++----
 2 files changed, 28 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f33d5e4..cc14eba 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1262,7 +1262,8 @@ static bool rmap_write_protect(struct kvm *kvm, u64 gfn)
 }
 
 static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp,
-			   struct kvm_memory_slot *slot, unsigned long data)
+			   struct kvm_memory_slot *slot, gfn_t gfn, int level,
+			   unsigned long data)
 {
 	u64 *sptep;
 	struct rmap_iterator iter;
@@ -1270,7 +1271,8 @@ static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp,
 
 	while ((sptep = rmap_get_first(*rmapp, &iter))) {
 		BUG_ON(!(*sptep & PT_PRESENT_MASK));
-		rmap_printk("kvm_rmap_unmap_hva: spte %p %llx\n", sptep, *sptep);
+		rmap_printk("kvm_rmap_unmap_hva: spte %p %llx gfn %llx (%d)\n",
+			    sptep, *sptep, gfn, level);
 
 		drop_spte(kvm, sptep);
 		need_tlb_flush = 1;
@@ -1280,7 +1282,8 @@ static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp,
 }
 
 static int kvm_set_pte_rmapp(struct kvm *kvm, unsigned long *rmapp,
-			     struct kvm_memory_slot *slot, unsigned long data)
+			     struct kvm_memory_slot *slot, gfn_t gfn, int level,
+			     unsigned long data)
 {
 	u64 *sptep;
 	struct rmap_iterator iter;
@@ -1294,7 +1297,8 @@ static int kvm_set_pte_rmapp(struct kvm *kvm, unsigned long *rmapp,
 
 	for (sptep = rmap_get_first(*rmapp, &iter); sptep;) {
 		BUG_ON(!is_shadow_present_pte(*sptep));
-		rmap_printk("kvm_set_pte_rmapp: spte %p %llx\n", sptep, *sptep);
+		rmap_printk("kvm_set_pte_rmapp: spte %p %llx gfn %llx (%d)\n",
+			    sptep, *sptep, gfn, level);
 
 		need_flush = 1;
@@ -1328,6 +1332,8 @@ static int kvm_handle_hva_range(struct kvm *kvm,
 				int (*handler)(struct kvm *kvm,
 					       unsigned long *rmapp,
 					       struct kvm_memory_slot *slot,
+					       gfn_t gfn,
+					       int level,
 					       unsigned long data))
 {
 	int j;
@@ -1357,6 +1363,7 @@ static int kvm_handle_hva_range(struct kvm *kvm,
 		     j < PT_PAGE_TABLE_LEVEL + KVM_NR_PAGE_SIZES; ++j) {
 			unsigned long idx, idx_end;
 			unsigned long *rmapp;
+			gfn_t gfn = gfn_start;
 
 			/*
 			 * {idx(page_j) | page_j intersects with
@@ -1367,8 +1374,10 @@ static int kvm_handle_hva_range(struct kvm *kvm,
 
 			rmapp = __gfn_to_rmap(gfn_start, j, memslot);
 
-			for (; idx <= idx_end; ++idx)
-				ret |= handler(kvm, rmapp++, memslot, data);
+			for (; idx <= idx_end;
+			     ++idx, gfn += (1UL << KVM_HPAGE_GFN_SHIFT(j)))
+				ret |= handler(kvm, rmapp++, memslot,
+					       gfn, j, data);
 		}
 	}
 
@@ -1379,6 +1388,7 @@ static int kvm_handle_hva(struct kvm *kvm, unsigned long hva,
 			  unsigned long data,
 			  int (*handler)(struct kvm *kvm, unsigned long *rmapp,
 					 struct kvm_memory_slot *slot,
+					 gfn_t gfn, int level,
 					 unsigned long data))
 {
 	return kvm_handle_hva_range(kvm, hva, hva + 1, data, handler);
@@ -1400,7 +1410,8 @@ void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
 }
 
 static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
-			 struct kvm_memory_slot *slot, unsigned long data)
+			 struct kvm_memory_slot *slot, gfn_t gfn, int level,
+			 unsigned long data)
 {
 	u64 *sptep;
 	struct rmap_iterator uninitialized_var(iter);
@@ -1410,25 +1421,20 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
 
 	for (sptep = rmap_get_first(*rmapp, &iter); sptep;
 	     sptep = rmap_get_next(&iter)) {
-		struct kvm_mmu_page *sp;
-		gfn_t gfn;
 		BUG_ON(!is_shadow_present_pte(*sptep));
 
-		/* From spte to gfn. */
-		sp = page_header(__pa(sptep));
-		gfn = kvm_mmu_page_get_gfn(sp, sptep - sp->spt);
-
 		if (*sptep & shadow_accessed_mask) {
 			young = 1;
 			clear_bit((ffs(shadow_accessed_mask) - 1),
 				 (unsigned long *)sptep);
 		}
-		trace_kvm_age_page(gfn, slot, young);
+		trace_kvm_age_page(gfn, level, slot, young);
 	}
 
 	return young;
 }
 
 static int kvm_test_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
-			      struct kvm_memory_slot *slot, unsigned long data)
+			      struct kvm_memory_slot *slot, gfn_t gfn,
+			      int level, unsigned long data)
 {
 	u64 *sptep;
 	struct rmap_iterator iter;
@@ -1466,7 +1472,7 @@ static void rmap_recycle(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
 
 	rmapp = gfn_to_rmap(vcpu->kvm, gfn, sp->role.level);
 
-	kvm_unmap_rmapp(vcpu->kvm, rmapp, NULL, 0);
+	kvm_unmap_rmapp(vcpu->kvm, rmapp, NULL, gfn, sp->role.level, 0);
 	kvm_flush_remote_tlbs(vcpu->kvm);
 }
 
diff --git a/include/trace/events/kvm.h b/include/trace/events/kvm.h
index 0d2de78..6edf1f2 100644
--- a/include/trace/events/kvm.h
+++ b/include/trace/events/kvm.h
@@ -225,24 +225,26 @@ TRACE_EVENT(kvm_fpu,
 );
 
 TRACE_EVENT(kvm_age_page,
-	TP_PROTO(ulong gfn, struct kvm_memory_slot *slot, int ref),
-	TP_ARGS(gfn, slot, ref),
+	TP_PROTO(ulong gfn, int level, struct kvm_memory_slot *slot, int ref),
+	TP_ARGS(gfn, level, slot, ref),
 
 	TP_STRUCT__entry(
 		__field(	u64,	hva		)
 		__field(	u64,	gfn		)
+		__field(	u8,	level		)
 		__field(	u8,	referenced	)
 	),
 
 	TP_fast_assign(
 		__entry->gfn		= gfn;
+		__entry->level		= level;
 		__entry->hva		= ((gfn - slot->base_gfn) <<
 					    PAGE_SHIFT) + slot->userspace_addr;
 		__entry->referenced	= ref;
 	),
 
-	TP_printk("hva %llx gfn %llx %s",
-		  __entry->hva, __entry->gfn,
+	TP_printk("hva %llx gfn %llx level %u %s",
+		  __entry->hva, __entry->gfn, __entry->level,
 		  __entry->referenced ? "YOUNG" : "OLD")
 );