From patchwork Mon Dec 19 21:58:25 2016
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 9480787
From: David Matlack <dmatlack@google.com>
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, rkrcmar@redhat.com, David Matlack <dmatlack@google.com>
Subject: [PATCH 2/2] kvm: x86: reduce collisions in mmu_page_hash
Date: Mon, 19 Dec 2016 13:58:25 -0800
Message-Id: <1482184705-127401-2-git-send-email-dmatlack@google.com>
In-Reply-To: <1482184705-127401-1-git-send-email-dmatlack@google.com>
References: <1482184705-127401-1-git-send-email-dmatlack@google.com>
List-ID: kvm@vger.kernel.org
When using two-dimensional paging, the mmu_page_hash (which provides
lookups for existing kvm_mmu_page structs) becomes imbalanced, with too
many collisions in buckets 0 and 512. This has been seen to cause
mmu_lock to be held for multiple milliseconds in kvm_mmu_get_page on
VMs with a large amount of RAM mapped with 4K pages.

The current hash function uses the lower 10 bits of gfn to index into
mmu_page_hash. When doing shadow paging, gfn is the address of the
guest page table being shadowed. These tables are 4K-aligned, which
makes the low bits of gfn a good hash. However, with two-dimensional
paging, no guest page tables are being shadowed, so gfn is the base
address that is mapped by the table. Thus page tables (level=1) have a
2MB-aligned gfn, page directories (level=2) have a 1GB-aligned gfn,
etc. This means hashes will only differ in their 10th bit.

hash_64() provides a better hash. For example, on a VM with ~200G
(99458 direct=1 kvm_mmu_page structs):

    hash            max_mmu_page_hash_collisions
    --------------------------------------------
    low 10 bits     49847
    hash_64         105
    perfect         97

While we're changing the hash, increase the table size by 4x to better
support large VMs (this further reduces the number of collisions in the
200G VM to 29).

Note that hash_64() does not provide a good distribution prior to
commit ef703f49a6c5 ("Eliminate bad hash multipliers from hash_32() and
hash_64()").

Signed-off-by: David Matlack <dmatlack@google.com>
Change-Id: I5aa6b13c834722813c6cca46b8b1ed6f53368ade
---
 arch/x86/include/asm/kvm_host.h | 2 +-
 arch/x86/kvm/mmu.c              | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 8ba0d64..5962e4bc 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -115,7 +115,7 @@ static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
 
 #define KVM_PERMILLE_MMU_PAGES 20
 #define KVM_MIN_ALLOC_MMU_PAGES 64
-#define KVM_MMU_HASH_SHIFT 10
+#define KVM_MMU_HASH_SHIFT 12
 #define KVM_NUM_MMU_PAGES (1 << KVM_MMU_HASH_SHIFT)
 #define KVM_MIN_FREE_MMU_PAGES 5
 #define KVM_REFILL_PAGES 25
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 58995fd9..de55653 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1713,7 +1713,7 @@ static void kvm_mmu_free_page(struct kvm_mmu_page *sp)
 
 static unsigned kvm_page_table_hashfn(gfn_t gfn)
 {
-	return gfn & ((1 << KVM_MMU_HASH_SHIFT) - 1);
+	return hash_64(gfn, KVM_MMU_HASH_SHIFT);
 }
 
 static void mmu_page_add_parent_pte(struct kvm_vcpu *vcpu,
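
---
[Editorial illustration, not part of the patch.] The following userspace
sketch models the imbalance described in the commit message: the gfns of
direct-mapped 2MB page tables are multiples of 512 (2MB / 4KB), so the old
low-bits hash lands them all in buckets 0 and 512. The sketch re-implements
hash_64() with the GOLDEN_RATIO_64 multiplier from include/linux/hash.h
(post-ef703f49a6c5); the multiplier, the 100k-table workload, and the use of
uint64_t in place of gfn_t are assumptions made purely for illustration.

/*
 * Userspace sketch: compare the old low-10-bit hash against a
 * hash_64()-style multiplicative hash for 2MB-aligned gfns.
 */
#include <stdio.h>
#include <stdint.h>

#define GOLDEN_RATIO_64 0x61C8864680B583EBull	/* include/linux/hash.h */

/* Old hash: low KVM_MMU_HASH_SHIFT (10) bits of the gfn. */
static unsigned old_hashfn(uint64_t gfn)
{
	return gfn & ((1 << 10) - 1);
}

/* New hash: hash_64(gfn, 12) -- multiply, then keep the top 12 bits. */
static unsigned new_hashfn(uint64_t gfn)
{
	return (unsigned)((gfn * GOLDEN_RATIO_64) >> (64 - 12));
}

int main(void)
{
	static int old_used[1 << 10], new_used[1 << 12];
	int old_buckets = 0, new_buckets = 0;
	uint64_t i;

	/*
	 * Model ~100k direct-mapped 2MB page tables: each maps 2MB of
	 * guest memory, so their gfns are multiples of 512.
	 */
	for (i = 0; i < 100000; i++) {
		uint64_t gfn = i * 512;

		if (!old_used[old_hashfn(gfn)]++)
			old_buckets++;
		if (!new_used[new_hashfn(gfn)]++)
			new_buckets++;
	}

	/* The old hash only ever touches buckets 0 and 512. */
	printf("low 10 bits: %d of %d buckets used\n", old_buckets, 1 << 10);
	/* The multiplicative hash spreads over (nearly) all 4096 buckets. */
	printf("hash_64:     %d of %d buckets used\n", new_buckets, 1 << 12);
	return 0;
}

Multiplying by a large odd 64-bit constant and taking the top bits mixes
every bit of the input into the result, so the alignment in the low bits of
a direct-mapped gfn no longer determines which bucket it lands in.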