From patchwork Tue Dec 20 23:25:57 2016
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 9482387
From: David Matlack
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, rkrcmar@redhat.com, david@redhat.com, David Matlack
Subject: [PATCH v2] kvm: x86: export maximum number of mmu_page_hash collisions
Date: Tue, 20 Dec 2016 15:25:57 -0800
Message-Id: <1482276357-6273-1-git-send-email-dmatlack@google.com>
X-Mailer: git-send-email 2.8.0.rc3.226.g39d4020
X-Mailing-List: kvm@vger.kernel.org
Report the maximum number of mmu_page_hash collisions as a per-VM
stat. This will make it easy to identify problems with the
mmu_page_hash in the future.

Signed-off-by: David Matlack
Reviewed-by: David Hildenbrand
---
Changes in v2:
* Removed "bool created"
* Removed unrelated whitespace change

 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/mmu.c              | 25 +++++++++++++++++--------
 arch/x86/kvm/x86.c              |  2 ++
 3 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e19a78b..5962e4bc 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -815,6 +815,7 @@ struct kvm_vm_stat {
 	ulong mmu_unsync;
 	ulong remote_tlb_flush;
 	ulong lpages;
+	ulong max_mmu_page_hash_collisions;
 };
 
 struct kvm_vcpu_stat {
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 6dcb72b..aa6a34a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1904,17 +1904,17 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
  * since it has been deleted from active_mmu_pages but still can be found
  * at hast list.
  *
- * for_each_gfn_valid_sp() has skipped that kind of pages.
+ * for_each_valid_sp() has skipped that kind of pages.
  */
-#define for_each_gfn_valid_sp(_kvm, _sp, _gfn)				\
+#define for_each_valid_sp(_kvm, _sp, _gfn)				\
 	hlist_for_each_entry(_sp,					\
 	  &(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)], hash_link) \
-		if ((_sp)->gfn != (_gfn) || is_obsolete_sp((_kvm), (_sp)) \
-			|| (_sp)->role.invalid) {} else
+		if (is_obsolete_sp((_kvm), (_sp)) || (_sp)->role.invalid) {	\
+		} else
 
 #define for_each_gfn_indirect_valid_sp(_kvm, _sp, _gfn)		\
-	for_each_gfn_valid_sp(_kvm, _sp, _gfn)				\
-		if ((_sp)->role.direct) {} else
+	for_each_valid_sp(_kvm, _sp, _gfn)				\
+		if ((_sp)->gfn != (_gfn) || (_sp)->role.direct) {} else
 
 /* @sp->gfn should be write-protected at the call site */
 static bool __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
@@ -2116,6 +2116,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	struct kvm_mmu_page *sp;
 	bool need_sync = false;
 	bool flush = false;
+	int collisions = 0;
 	LIST_HEAD(invalid_list);
 
 	role = vcpu->arch.mmu.base_role;
@@ -2130,7 +2131,12 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 		quadrant &= (1 << ((PT32_PT_BITS - PT64_PT_BITS) * level)) - 1;
 		role.quadrant = quadrant;
 	}
-	for_each_gfn_valid_sp(vcpu->kvm, sp, gfn) {
+	for_each_valid_sp(vcpu->kvm, sp, gfn) {
+		if (sp->gfn != gfn) {
+			collisions++;
+			continue;
+		}
+
 		if (!need_sync && sp->unsync)
 			need_sync = true;
 
@@ -2153,7 +2159,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 
 		__clear_sp_write_flooding_count(sp);
 		trace_kvm_mmu_get_page(sp, false);
-		return sp;
+		goto out;
 	}
 
 	++vcpu->kvm->stat.mmu_cache_miss;
@@ -2183,6 +2189,9 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 
 	trace_kvm_mmu_get_page(sp, true);
 
 	kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
+out:
+	if (collisions > vcpu->kvm->stat.max_mmu_page_hash_collisions)
+		vcpu->kvm->stat.max_mmu_page_hash_collisions = collisions;
 	return sp;
 }
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8f86c0c..ee4c35e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -190,6 +190,8 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 	{ "mmu_unsync", VM_STAT(mmu_unsync) },
 	{ "remote_tlb_flush", VM_STAT(remote_tlb_flush) },
 	{ "largepages", VM_STAT(lpages) },
+	{ "max_mmu_page_hash_collisions",
+		VM_STAT(max_mmu_page_hash_collisions) },
 	{ NULL }
 };
 
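
For anyone who wants to poke at the new stat, the debugfs_entries[]
hookup above exposes it as a plain-text file under the kvm debugfs
directory. Below is a minimal user-space sketch, not part of the
patch itself; it assumes debugfs is mounted at the conventional
/sys/kernel/debug, that the kvm module is loaded, and that the reader
has permission to open files there:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* Aggregate value across all VMs, exported via debugfs_entries[]. */
	const char *path =
		"/sys/kernel/debug/kvm/max_mmu_page_hash_collisions";
	unsigned long long val;
	FILE *f = fopen(path, "r");

	if (!f) {
		perror("fopen");
		return EXIT_FAILURE;
	}
	/* The stat file contains a single decimal counter. */
	if (fscanf(f, "%llu", &val) != 1) {
		fprintf(stderr, "unexpected format in %s\n", path);
		fclose(f);
		return EXIT_FAILURE;
	}
	fclose(f);
	printf("max_mmu_page_hash_collisions: %llu\n", val);
	return EXIT_SUCCESS;
}

Since the counter lives in struct kvm_vm_stat, kernels that create
per-VM debugfs directories should also expose a per-VM copy of the
same file there.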