From patchwork Thu Apr 29 21:18:27 2021
X-Patchwork-Submitter: Ben Gardon <bgardon@google.com>
X-Patchwork-Id: 12231955
From: Ben Gardon <bgardon@google.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li,
    Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon
Date: Thu, 29 Apr 2021 14:18:27 -0700
Message-Id: <20210429211833.3361994-2-bgardon@google.com>
In-Reply-To: <20210429211833.3361994-1-bgardon@google.com>
References: <20210429211833.3361994-1-bgardon@google.com>
Subject: [PATCH v2 1/7] KVM: x86/mmu: Track if shadow MMU active

Add a field to each VM to track whether the shadow / legacy MMU is
actually in use. If the shadow MMU is not in use, that knowledge opens
the door to other optimizations, which will be added in future patches.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/mmu/mmu.c          | 10 +++++++++-
 arch/x86/kvm/mmu/mmu_internal.h |  2 ++
 arch/x86/kvm/mmu/tdp_mmu.c      |  6 ++++--
 arch/x86/kvm/mmu/tdp_mmu.h      |  4 ++--
 5 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index ad22d4839bcc..3900dcf2439e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1122,6 +1122,8 @@ struct kvm_arch {
 	 */
 	spinlock_t tdp_mmu_pages_lock;
 #endif /* CONFIG_X86_64 */
+
+	bool shadow_mmu_active;
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 930ac8a7e7c9..3975272321d0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3110,6 +3110,11 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	return ret;
 }
 
+void activate_shadow_mmu(struct kvm *kvm)
+{
+	kvm->arch.shadow_mmu_active = true;
+}
+
 static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa,
 			       struct list_head *invalid_list)
 {
@@ -3280,6 +3285,8 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 		}
 	}
 
+	activate_shadow_mmu(vcpu->kvm);
+
 	write_lock(&vcpu->kvm->mmu_lock);
 	r = make_mmu_pages_available(vcpu);
 	if (r < 0)
@@ -5467,7 +5474,8 @@ void kvm_mmu_init_vm(struct kvm *kvm)
 {
 	struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
 
-	kvm_mmu_init_tdp_mmu(kvm);
+	if (!kvm_mmu_init_tdp_mmu(kvm))
+		activate_shadow_mmu(kvm);
 
 	node->track_write = kvm_mmu_pte_write;
 	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index f2546d6d390c..297a911c018c 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -165,4 +165,6 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
+void activate_shadow_mmu(struct kvm *kvm);
+
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 83cbdbe5de5a..5342aca2c8e0 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -14,10 +14,10 @@ static bool __read_mostly tdp_mmu_enabled = false;
 module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0644);
 
 /* Initializes the TDP MMU for the VM, if enabled. */
-void kvm_mmu_init_tdp_mmu(struct kvm *kvm)
+bool kvm_mmu_init_tdp_mmu(struct kvm *kvm)
 {
 	if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled))
-		return;
+		return false;
 
 	/* This should not be changed for the lifetime of the VM. */
 	kvm->arch.tdp_mmu_enabled = true;
@@ -25,6 +25,8 @@ void kvm_mmu_init_tdp_mmu(struct kvm *kvm)
 	INIT_LIST_HEAD(&kvm->arch.tdp_mmu_roots);
 	spin_lock_init(&kvm->arch.tdp_mmu_pages_lock);
 	INIT_LIST_HEAD(&kvm->arch.tdp_mmu_pages);
+
+	return true;
 }
 
 static __always_inline void kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm,
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 5fdf63090451..b046ab5137a1 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -80,12 +80,12 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
 			 int *root_level);
 
 #ifdef CONFIG_X86_64
-void kvm_mmu_init_tdp_mmu(struct kvm *kvm);
+bool kvm_mmu_init_tdp_mmu(struct kvm *kvm);
 void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm);
 static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return kvm->arch.tdp_mmu_enabled; }
 static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return sp->tdp_mmu_page; }
 #else
-static inline void kvm_mmu_init_tdp_mmu(struct kvm *kvm) {}
+static inline bool kvm_mmu_init_tdp_mmu(struct kvm *kvm) { return false; }
 static inline void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) {}
 static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return false; }
 static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; }
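
A minimal standalone sketch of the pattern this patch establishes may be
useful: kvm_mmu_init_tdp_mmu() now reports whether the TDP MMU was set
up, and the VM falls back to activating the shadow MMU when it was not.
This is plain userspace C with simplified, illustrative names; nothing
below is kernel code.

#include <stdbool.h>
#include <stdio.h>

struct vm {
	bool tdp_mmu_enabled;
	bool shadow_mmu_active;
};

/* Mirrors the new bool return of kvm_mmu_init_tdp_mmu(). */
static bool init_tdp_mmu(struct vm *vm, bool tdp_supported)
{
	if (!tdp_supported)
		return false;
	vm->tdp_mmu_enabled = true;
	return true;
}

static void init_vm(struct vm *vm, bool tdp_supported)
{
	/* If the TDP MMU can't be used, the shadow MMU is active from birth. */
	if (!init_tdp_mmu(vm, tdp_supported))
		vm->shadow_mmu_active = true;
}

int main(void)
{
	struct vm vm = {0};

	init_vm(&vm, true);
	printf("shadow MMU active: %d\n", vm.shadow_mmu_active); /* 0: TDP-only VM */
	return 0;
}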
From patchwork Thu Apr 29 21:18:28 2021
X-Patchwork-Submitter: Ben Gardon <bgardon@google.com>
X-Patchwork-Id: 12231957
From: Ben Gardon <bgardon@google.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li,
    Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon
Date: Thu, 29 Apr 2021 14:18:28 -0700
Message-Id: <20210429211833.3361994-3-bgardon@google.com>
In-Reply-To: <20210429211833.3361994-1-bgardon@google.com>
References: <20210429211833.3361994-1-bgardon@google.com>
Subject: [PATCH v2 2/7] KVM: x86/mmu: Skip rmap operations if shadow MMU inactive

If the shadow MMU is not in use and only the TDP MMU is managing the
memory mappings for a VM, many rmap operations can be skipped, as they
are guaranteed to be no-ops. This saves the time that would otherwise be
spent on those rmap operations, and it also avoids acquiring the MMU
lock in write mode for many operations.
Signed-off-by: Ben Gardon <bgardon@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 128 +++++++++++++++++++++++++----------------
 1 file changed, 77 insertions(+), 51 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3975272321d0..e252af46f205 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1189,6 +1189,10 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 	if (is_tdp_mmu_enabled(kvm))
 		kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot,
 				slot->base_gfn + gfn_offset, mask, true);
+
+	if (!kvm->arch.shadow_mmu_active)
+		return;
+
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PG_LEVEL_4K, slot);
@@ -1218,6 +1222,10 @@ static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 	if (is_tdp_mmu_enabled(kvm))
 		kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot,
 				slot->base_gfn + gfn_offset, mask, false);
+
+	if (!kvm->arch.shadow_mmu_active)
+		return;
+
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PG_LEVEL_4K, slot);
@@ -1260,9 +1268,12 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 	int i;
 	bool write_protected = false;
 
-	for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
-		rmap_head = __gfn_to_rmap(gfn, i, slot);
-		write_protected |= __rmap_write_protect(kvm, rmap_head, true);
+	if (kvm->arch.shadow_mmu_active) {
+		for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
+			rmap_head = __gfn_to_rmap(gfn, i, slot);
+			write_protected |= __rmap_write_protect(kvm, rmap_head,
+								true);
+		}
 	}
 
 	if (is_tdp_mmu_enabled(kvm))
@@ -1433,9 +1444,10 @@ static __always_inline bool kvm_handle_gfn_range(struct kvm *kvm,
 
 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool flush;
+	bool flush = false;
 
-	flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp);
+	if (kvm->arch.shadow_mmu_active)
+		flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp);
 
 	if (is_tdp_mmu_enabled(kvm))
 		flush |= kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);
@@ -1445,9 +1457,10 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 
 bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool flush;
+	bool flush = false;
 
-	flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmapp);
+	if (kvm->arch.shadow_mmu_active)
+		flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmapp);
 
 	if (is_tdp_mmu_enabled(kvm))
 		flush |= kvm_tdp_mmu_set_spte_gfn(kvm, range);
@@ -1500,9 +1513,10 @@ static void rmap_recycle(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
 
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool young;
+	bool young = false;
 
-	young = kvm_handle_gfn_range(kvm, range, kvm_age_rmapp);
+	if (kvm->arch.shadow_mmu_active)
+		young = kvm_handle_gfn_range(kvm, range, kvm_age_rmapp);
 
 	if (is_tdp_mmu_enabled(kvm))
 		young |= kvm_tdp_mmu_age_gfn_range(kvm, range);
@@ -1512,9 +1526,10 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 
 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool young;
+	bool young = false;
 
-	young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmapp);
+	if (kvm->arch.shadow_mmu_active)
+		young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmapp);
 
 	if (is_tdp_mmu_enabled(kvm))
 		young |= kvm_tdp_mmu_test_age_gfn(kvm, range);
@@ -5447,7 +5462,8 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
 	 */
 	kvm_reload_remote_mmus(kvm);
 
-	kvm_zap_obsolete_pages(kvm);
+	if (kvm->arch.shadow_mmu_active)
+		kvm_zap_obsolete_pages(kvm);
 
 	write_unlock(&kvm->mmu_lock);
 
@@ -5498,29 +5514,29 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 	int i;
 	bool flush = false;
 
-	write_lock(&kvm->mmu_lock);
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
-		slots = __kvm_memslots(kvm, i);
-		kvm_for_each_memslot(memslot, slots) {
-			gfn_t start, end;
-
-			start = max(gfn_start, memslot->base_gfn);
-			end = min(gfn_end, memslot->base_gfn + memslot->npages);
-			if (start >= end)
-				continue;
+	if (kvm->arch.shadow_mmu_active) {
+		write_lock(&kvm->mmu_lock);
+		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+			slots = __kvm_memslots(kvm, i);
+			kvm_for_each_memslot(memslot, slots) {
+				gfn_t start, end;
+
+				start = max(gfn_start, memslot->base_gfn);
+				end = min(gfn_end, memslot->base_gfn + memslot->npages);
+				if (start >= end)
+					continue;
 
-			flush = slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
-							PG_LEVEL_4K,
-							KVM_MAX_HUGEPAGE_LEVEL,
-							start, end - 1, true, flush);
+				flush = slot_handle_level_range(kvm, memslot,
+						kvm_zap_rmapp, PG_LEVEL_4K,
+						KVM_MAX_HUGEPAGE_LEVEL, start,
+						end - 1, true, flush);
+			}
 		}
+		if (flush)
+			kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end);
+		write_unlock(&kvm->mmu_lock);
 	}
 
-	if (flush)
-		kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end);
-
-	write_unlock(&kvm->mmu_lock);
-
 	if (is_tdp_mmu_enabled(kvm)) {
 		flush = false;
 
@@ -5547,12 +5563,15 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 				      struct kvm_memory_slot *memslot,
 				      int start_level)
 {
-	bool flush;
+	bool flush = false;
 
-	write_lock(&kvm->mmu_lock);
-	flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect,
-				  start_level, KVM_MAX_HUGEPAGE_LEVEL, false);
-	write_unlock(&kvm->mmu_lock);
+	if (kvm->arch.shadow_mmu_active) {
+		write_lock(&kvm->mmu_lock);
+		flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect,
+					  start_level, KVM_MAX_HUGEPAGE_LEVEL,
+					  false);
+		write_unlock(&kvm->mmu_lock);
+	}
 
 	if (is_tdp_mmu_enabled(kvm)) {
 		read_lock(&kvm->mmu_lock);
@@ -5622,16 +5641,15 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 	struct kvm_memory_slot *slot = (struct kvm_memory_slot *)memslot;
 	bool flush;
 
-	write_lock(&kvm->mmu_lock);
-	flush = slot_handle_leaf(kvm, slot, kvm_mmu_zap_collapsible_spte, true);
-
-	if (flush)
-		kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
-	write_unlock(&kvm->mmu_lock);
+	if (kvm->arch.shadow_mmu_active) {
+		write_lock(&kvm->mmu_lock);
+		flush = slot_handle_leaf(kvm, slot, kvm_mmu_zap_collapsible_spte, true);
+		if (flush)
+			kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+		write_unlock(&kvm->mmu_lock);
+	}
 
 	if (is_tdp_mmu_enabled(kvm)) {
-		flush = false;
-
 		read_lock(&kvm->mmu_lock);
 		flush = kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot, flush);
 		if (flush)
@@ -5658,11 +5676,14 @@ void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 				   struct kvm_memory_slot *memslot)
 {
-	bool flush;
+	bool flush = false;
 
-	write_lock(&kvm->mmu_lock);
-	flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty, false);
-	write_unlock(&kvm->mmu_lock);
+	if (kvm->arch.shadow_mmu_active) {
+		write_lock(&kvm->mmu_lock);
+		flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty,
+					 false);
+		write_unlock(&kvm->mmu_lock);
+	}
 
 	if (is_tdp_mmu_enabled(kvm)) {
 		read_lock(&kvm->mmu_lock);
@@ -5687,6 +5708,14 @@ void kvm_mmu_zap_all(struct kvm *kvm)
 	int ign;
 
 	write_lock(&kvm->mmu_lock);
+	if (is_tdp_mmu_enabled(kvm))
+		kvm_tdp_mmu_zap_all(kvm);
+
+	if (!kvm->arch.shadow_mmu_active) {
+		write_unlock(&kvm->mmu_lock);
+		return;
+	}
+
 restart:
 	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
 		if (WARN_ON(sp->role.invalid))
@@ -5699,9 +5728,6 @@ void kvm_mmu_zap_all(struct kvm *kvm)
 
 	kvm_mmu_commit_zap_page(kvm, &invalid_list);
 
-	if (is_tdp_mmu_enabled(kvm))
-		kvm_tdp_mmu_zap_all(kvm);
-
 	write_unlock(&kvm->mmu_lock);
 }
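
Every call site above follows the same shape. A self-contained
userspace sketch of that shape (a pthread rwlock stands in for
kvm->mmu_lock; all names are illustrative, none of this is kernel code)
shows why skipping is safe and what it saves: when the shadow MMU was
never activated, the rmaps are empty, so the walk is a no-op and the
write lock need never be taken.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct vm {
	pthread_rwlock_t mmu_lock;
	bool shadow_mmu_active;
	int rmap_entries; /* stand-in for the real rmap structures */
};

/* Before the patch: unconditionally take mmu_lock for writing and walk
 * the rmaps. After: skip both when the shadow MMU was never activated,
 * since the rmaps are guaranteed to be empty. */
static int zap_rmaps(struct vm *vm)
{
	int zapped;

	if (!vm->shadow_mmu_active)
		return 0; /* guaranteed no-op; mmu_lock never taken */

	pthread_rwlock_wrlock(&vm->mmu_lock);
	zapped = vm->rmap_entries;
	vm->rmap_entries = 0;
	pthread_rwlock_unlock(&vm->mmu_lock);
	return zapped;
}

int main(void)
{
	struct vm vm = { .mmu_lock = PTHREAD_RWLOCK_INITIALIZER };

	printf("zapped %d entries\n", zap_rmaps(&vm));
	return 0;
}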
From patchwork Thu Apr 29 21:18:29 2021
X-Patchwork-Submitter: Ben Gardon <bgardon@google.com>
X-Patchwork-Id: 12231959
From: Ben Gardon <bgardon@google.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li,
    Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon
Date: Thu, 29 Apr 2021 14:18:29 -0700
Message-Id: <20210429211833.3361994-4-bgardon@google.com>
In-Reply-To: <20210429211833.3361994-1-bgardon@google.com>
References: <20210429211833.3361994-1-bgardon@google.com>
Subject: [PATCH v2 3/7] KVM: x86/mmu: Deduplicate rmap freeing

Small code deduplication. No functional change expected.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 arch/x86/kvm/x86.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index cf3b67679cf0..5bcf07465c47 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10818,17 +10818,23 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	kvm_hv_destroy_vm(kvm);
 }
 
-void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
+static void free_memslot_rmap(struct kvm_memory_slot *slot)
 {
 	int i;
 
 	for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
 		kvfree(slot->arch.rmap[i]);
 		slot->arch.rmap[i] = NULL;
+	}
+}
 
-		if (i == 0)
-			continue;
+void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
+{
+	int i;
+
+	free_memslot_rmap(slot);
 
+	for (i = 1; i < KVM_NR_PAGE_SIZES; ++i) {
 		kvfree(slot->arch.lpage_info[i - 1]);
 		slot->arch.lpage_info[i - 1] = NULL;
 	}
@@ -10894,12 +10900,9 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
 	return 0;
 
 out_free:
-	for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
-		kvfree(slot->arch.rmap[i]);
-		slot->arch.rmap[i] = NULL;
-
-		if (i == 0)
-			continue;
+	free_memslot_rmap(slot);
 
+	for (i = 1; i < KVM_NR_PAGE_SIZES; ++i) {
 		kvfree(slot->arch.lpage_info[i - 1]);
 		slot->arch.lpage_info[i - 1] = NULL;
 	}
From patchwork Thu Apr 29 21:18:30 2021
X-Patchwork-Submitter: Ben Gardon <bgardon@google.com>
X-Patchwork-Id: 12231961
From: Ben Gardon <bgardon@google.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li,
    Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon
Date: Thu, 29 Apr 2021 14:18:30 -0700
Message-Id: <20210429211833.3361994-5-bgardon@google.com>
In-Reply-To: <20210429211833.3361994-1-bgardon@google.com>
References: <20210429211833.3361994-1-bgardon@google.com>
Subject: [PATCH v2 4/7] KVM: x86/mmu: Factor out allocating memslot rmap

Small refactor to facilitate allocating rmaps for all memslots at once.

No functional change expected.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 arch/x86/kvm/x86.c | 41 ++++++++++++++++++++++++++++++---------
 1 file changed, 32 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5bcf07465c47..fc32a7dbe4c4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10842,10 +10842,37 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 	kvm_page_track_free_memslot(slot);
 }
 
+static int alloc_memslot_rmap(struct kvm_memory_slot *slot,
+			      unsigned long npages)
+{
+	int i;
+
+	for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
+		int lpages;
+		int level = i + 1;
+
+		lpages = gfn_to_index(slot->base_gfn + npages - 1,
+				      slot->base_gfn, level) + 1;
+
+		slot->arch.rmap[i] =
+			kvcalloc(lpages, sizeof(*slot->arch.rmap[i]),
+				 GFP_KERNEL_ACCOUNT);
+		if (!slot->arch.rmap[i])
+			goto out_free;
+	}
+
+	return 0;
+
+out_free:
+	free_memslot_rmap(slot);
+	return -ENOMEM;
+}
+
 static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
 				      unsigned long npages)
 {
 	int i;
+	int r;
 
 	/*
 	 * Clear out the previous array pointers for the KVM_MR_MOVE case. The
@@ -10854,7 +10881,11 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
 	 */
 	memset(&slot->arch, 0, sizeof(slot->arch));
 
-	for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
+	r = alloc_memslot_rmap(slot, npages);
+	if (r)
+		return r;
+
+	for (i = 1; i < KVM_NR_PAGE_SIZES; ++i) {
 		struct kvm_lpage_info *linfo;
 		unsigned long ugfn;
 		int lpages;
@@ -10863,14 +10894,6 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
 		lpages = gfn_to_index(slot->base_gfn + npages - 1,
 				      slot->base_gfn, level) + 1;
 
-		slot->arch.rmap[i] =
-			kvcalloc(lpages, sizeof(*slot->arch.rmap[i]),
-				 GFP_KERNEL_ACCOUNT);
-		if (!slot->arch.rmap[i])
-			goto out_free;
-		if (i == 0)
-			continue;
-
 		linfo = kvcalloc(lpages, sizeof(*linfo), GFP_KERNEL_ACCOUNT);
 		if (!linfo)
 			goto out_free;
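
The lpages computation that moves into alloc_memslot_rmap() can be
sanity-checked standalone. The sketch below reimplements it under the
assumption that x86's KVM_HPAGE_GFN_SHIFT(level) is 9 * (level - 1) and
that KVM_NR_PAGE_SIZES is 3 (4 KiB, 2 MiB, 1 GiB); it is illustrative
only, not the kernel code.

#include <stdio.h>

/* Number of level-sized regions between base_gfn and gfn, inclusive of
 * the region containing base_gfn; each region needs one rmap entry. */
static unsigned long gfn_to_index(unsigned long gfn, unsigned long base_gfn,
				  int level)
{
	int shift = 9 * (level - 1); /* assumed KVM_HPAGE_GFN_SHIFT(level) */

	return (gfn >> shift) - (base_gfn >> shift);
}

int main(void)
{
	unsigned long base_gfn = 0, npages = 1UL << 20; /* a 4 GiB slot */
	int i;

	for (i = 0; i < 3; ++i) {
		int level = i + 1;
		unsigned long lpages =
			gfn_to_index(base_gfn + npages - 1, base_gfn, level) + 1;

		printf("level %d: %lu rmap entries\n", level, lpages);
	}
	return 0;
}

For a 4 GiB slot this prints 1048576, 2048 and 4 entries for the three
levels, i.e. one entry per page at each supported page size.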
From patchwork Thu Apr 29 21:18:31 2021
X-Patchwork-Submitter: Ben Gardon <bgardon@google.com>
X-Patchwork-Id: 12231963
From: Ben Gardon <bgardon@google.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li,
    Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon
Date: Thu, 29 Apr 2021 14:18:31 -0700
Message-Id: <20210429211833.3361994-6-bgardon@google.com>
In-Reply-To: <20210429211833.3361994-1-bgardon@google.com>
References: <20210429211833.3361994-1-bgardon@google.com>
Subject: [PATCH v2 5/7] KVM: mmu: Refactor memslot copy

Factor out copying kvm_memslots from allocating the memory for new ones
in preparation for adding a new lock to protect the arch-specific
fields of the memslots.

No functional change intended.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 virt/kvm/kvm_main.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2799c6660cce..c8010f55e368 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1306,6 +1306,18 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
 	return old_memslots;
 }
 
+static size_t kvm_memslots_size(int slots)
+{
+	return sizeof(struct kvm_memslots) +
+	       (sizeof(struct kvm_memory_slot) * slots);
+}
+
+static void kvm_copy_memslots(struct kvm_memslots *from,
+			      struct kvm_memslots *to)
+{
+	memcpy(to, from, kvm_memslots_size(from->used_slots));
+}
+
 /*
  * Note, at a minimum, the current number of used slots must be allocated, even
  * when deleting a memslot, as we need a complete duplicate of the memslots for
@@ -1315,19 +1327,16 @@ static struct kvm_memslots *kvm_dup_memslots(struct kvm_memslots *old,
 					     enum kvm_mr_change change)
 {
 	struct kvm_memslots *slots;
-	size_t old_size, new_size;
-
-	old_size = sizeof(struct kvm_memslots) +
-		   (sizeof(struct kvm_memory_slot) * old->used_slots);
+	size_t new_size;
 
 	if (change == KVM_MR_CREATE)
-		new_size = old_size + sizeof(struct kvm_memory_slot);
+		new_size = kvm_memslots_size(old->used_slots + 1);
 	else
-		new_size = old_size;
+		new_size = kvm_memslots_size(old->used_slots);
 
 	slots = kvzalloc(new_size, GFP_KERNEL_ACCOUNT);
 	if (likely(slots))
-		memcpy(slots, old, old_size);
+		kvm_copy_memslots(old, slots);
 
 	return slots;
 }
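
kvm_memslots_size() accounts for a header followed by used_slots slot
entries laid out inline. A minimal model of that layout (a simplified
slot struct, not the real kvm_memslots definition) shows why copying
kvm_memslots_size(from->used_slots) bytes copies exactly the used
portion:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct memory_slot { unsigned long base_gfn, npages; };

struct memslots {
	int used_slots;
	struct memory_slot memslots[]; /* flexible array, like the real one */
};

static size_t memslots_size(int slots)
{
	return sizeof(struct memslots) + sizeof(struct memory_slot) * slots;
}

static void copy_memslots(struct memslots *from, struct memslots *to)
{
	/* Only the used portion is copied, exactly as kvm_copy_memslots(). */
	memcpy(to, from, memslots_size(from->used_slots));
}

int main(void)
{
	struct memslots *old = calloc(1, memslots_size(2));
	struct memslots *new;

	old->used_slots = 2;
	/* A CREATE needs room for one more slot; other changes reuse the size. */
	new = calloc(1, memslots_size(old->used_slots + 1));
	copy_memslots(old, new);
	printf("copied %zu bytes\n", memslots_size(old->used_slots));
	free(old);
	free(new);
	return 0;
}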
From patchwork Thu Apr 29 21:18:32 2021
X-Patchwork-Submitter: Ben Gardon <bgardon@google.com>
X-Patchwork-Id: 12231965
From: Ben Gardon <bgardon@google.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li,
    Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon
Date: Thu, 29 Apr 2021 14:18:32 -0700
Message-Id: <20210429211833.3361994-7-bgardon@google.com>
In-Reply-To: <20210429211833.3361994-1-bgardon@google.com>
References: <20210429211833.3361994-1-bgardon@google.com>
Subject: [PATCH v2 6/7] KVM: mmu: Add slots_arch_lock for memslot arch fields

Add a new lock to protect the arch-specific fields of memslots when
they need to be modified in a kvm->srcu read critical section. A future
commit will use this lock to lazily allocate memslot rmaps for x86.
Signed-off-by: Ben Gardon <bgardon@google.com>
---
 include/linux/kvm_host.h |  9 +++++++++
 virt/kvm/kvm_main.c      | 31 ++++++++++++++++++++++++++-----
 2 files changed, 35 insertions(+), 5 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 8895b95b6a22..2d5e797fbb08 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -472,6 +472,15 @@ struct kvm {
 #endif /* KVM_HAVE_MMU_RWLOCK */
 
 	struct mutex slots_lock;
+
+	/*
+	 * Protects the arch-specific fields of struct kvm_memory_slots in
+	 * use by the VM. To be used under the slots_lock (above) or in a
+	 * kvm->srcu read critical section where acquiring the slots_lock
+	 * would lead to deadlock with the synchronize_srcu in
+	 * install_new_memslots.
+	 */
+	struct mutex slots_arch_lock;
 	struct mm_struct *mm; /* userspace tied to this vm */
 	struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];
 	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c8010f55e368..97b03fa2d0c8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -908,6 +908,7 @@ static struct kvm *kvm_create_vm(unsigned long type)
 	mutex_init(&kvm->lock);
 	mutex_init(&kvm->irq_lock);
 	mutex_init(&kvm->slots_lock);
+	mutex_init(&kvm->slots_arch_lock);
 	INIT_LIST_HEAD(&kvm->devices);
 
 	BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX);
@@ -1280,6 +1281,10 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
 	slots->generation = gen | KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS;
 
 	rcu_assign_pointer(kvm->memslots[as_id], slots);
+
+	/* Acquired in kvm_set_memslot. */
+	mutex_unlock(&kvm->slots_arch_lock);
+
 	synchronize_srcu_expedited(&kvm->srcu);
 
 	/*
@@ -1351,6 +1356,9 @@ static int kvm_set_memslot(struct kvm *kvm,
 	struct kvm_memslots *slots;
 	int r;
 
+	/* Released in install_new_memslots. */
+	mutex_lock(&kvm->slots_arch_lock);
+
 	slots = kvm_dup_memslots(__kvm_memslots(kvm, as_id), change);
 	if (!slots)
 		return -ENOMEM;
@@ -1364,10 +1372,9 @@ static int kvm_set_memslot(struct kvm *kvm,
 		slot->flags |= KVM_MEMSLOT_INVALID;
 
 		/*
-		 * We can re-use the old memslots, the only difference from the
-		 * newly installed memslots is the invalid flag, which will get
-		 * dropped by update_memslots anyway. We'll also revert to the
-		 * old memslots if preparing the new memory region fails.
+		 * We can re-use the memory from the old memslots.
+		 * It will be overwritten with a copy of the new memslots
+		 * after reacquiring the slots_arch_lock below.
		 */
 		slots = install_new_memslots(kvm, as_id, slots);
 
@@ -1379,6 +1386,17 @@ static int kvm_set_memslot(struct kvm *kvm,
 		 *	- kvm_is_visible_gfn (mmu_check_root)
 		 */
 		kvm_arch_flush_shadow_memslot(kvm, slot);
+
+		/* Released in install_new_memslots. */
+		mutex_lock(&kvm->slots_arch_lock);
+
+		/*
+		 * The arch-specific fields of the memslots could have changed
+		 * between releasing the slots_arch_lock in
+		 * install_new_memslots and here, so get a fresh copy of the
+		 * slots.
+		 */
+		kvm_copy_memslots(__kvm_memslots(kvm, as_id), slots);
 	}
 
 	r = kvm_arch_prepare_memory_region(kvm, new, mem, change);
@@ -1394,8 +1412,11 @@ static int kvm_set_memslot(struct kvm *kvm,
 	return 0;
 
 out_slots:
-	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
+	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
+		slot = id_to_memslot(slots, old->id);
+		slot->flags &= ~KVM_MEMSLOT_INVALID;
 		slots = install_new_memslots(kvm, as_id, slots);
+	}
 	kvfree(slots);
 	return r;
 }
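
The subtle point above is that slots_arch_lock is taken in one function
and released in another, so the lock is never held across the SRCU
grace period. A toy pthread sketch of that hand-off (illustrative names
only, not kernel code) may make the pairing easier to follow:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t slots_arch_lock = PTHREAD_MUTEX_INITIALIZER;

static void install_new_memslots(void)
{
	/* ...publish the new memslots pointer here... */

	pthread_mutex_unlock(&slots_arch_lock); /* acquired in set_memslot() */

	/* ...the slow SRCU synchronization runs here, lock not held,
	 * so readers of the arch-specific fields are never blocked
	 * across that wait... */
}

static void set_memslot(void)
{
	pthread_mutex_lock(&slots_arch_lock); /* released in install_new_memslots() */

	/* ...duplicate and update the memslots under the lock... */

	install_new_memslots();
}

int main(void)
{
	set_memslot();
	puts("memslot updated");
	return 0;
}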
From patchwork Thu Apr 29 21:18:33 2021
X-Patchwork-Submitter: Ben Gardon <bgardon@google.com>
X-Patchwork-Id: 12231967
From: Ben Gardon <bgardon@google.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li,
    Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon
Date: Thu, 29 Apr 2021 14:18:33 -0700
Message-Id: <20210429211833.3361994-8-bgardon@google.com>
In-Reply-To: <20210429211833.3361994-1-bgardon@google.com>
References: <20210429211833.3361994-1-bgardon@google.com>
Subject: [PATCH v2 7/7] KVM: x86/mmu: Lazily allocate memslot rmaps

If the TDP MMU is in use, wait to allocate the rmaps until the shadow
MMU is actually used (i.e. until a nested VM is launched). This saves
memory equal to 0.2% of guest memory in cases where the TDP MMU is
used and there are no nested guests involved.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 arch/x86/include/asm/kvm_host.h | 11 +++++++
 arch/x86/kvm/mmu/mmu.c          | 21 +++++++++++--
 arch/x86/kvm/mmu/mmu_internal.h |  2 +-
 arch/x86/kvm/x86.c              | 54 ++++++++++++++++++++++++++++---
 4 files changed, 80 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3900dcf2439e..b8633ed00a6a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1124,6 +1124,15 @@ struct kvm_arch {
 #endif /* CONFIG_X86_64 */
 
 	bool shadow_mmu_active;
+
+	/*
+	 * If set, the rmap should be allocated for any newly created or
+	 * modified memslots. If allocating rmaps lazily, this may be set
+	 * before the rmaps are allocated for existing memslots, but
+	 * shadow_mmu_active will not be set until after the rmaps are fully
+	 * allocated.
+	 */
+	bool alloc_memslot_rmaps;
 };
 
 struct kvm_vm_stat {
@@ -1855,4 +1864,6 @@ static inline int kvm_cpu_get_apicid(int mps_cpu)
 
 int kvm_cpu_dirty_log_size(void);
 
+int alloc_all_memslots_rmaps(struct kvm *kvm);
+
 #endif /* _ASM_X86_KVM_HOST_H */
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e252af46f205..b2a6585bd978 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3125,9 +3125,17 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	return ret;
 }
 
-void activate_shadow_mmu(struct kvm *kvm)
+int activate_shadow_mmu(struct kvm *kvm)
 {
+	int r;
+
+	r = alloc_all_memslots_rmaps(kvm);
+	if (r)
+		return r;
+
 	kvm->arch.shadow_mmu_active = true;
+
+	return 0;
 }
 
 static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa,
@@ -3300,7 +3308,9 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 		}
 	}
 
-	activate_shadow_mmu(vcpu->kvm);
+	r = activate_shadow_mmu(vcpu->kvm);
+	if (r)
+		return r;
 
 	write_lock(&vcpu->kvm->mmu_lock);
 	r = make_mmu_pages_available(vcpu);
@@ -5491,7 +5501,12 @@ void kvm_mmu_init_vm(struct kvm *kvm)
 	struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
 
 	if (!kvm_mmu_init_tdp_mmu(kvm))
-		activate_shadow_mmu(kvm);
+		/*
+		 * No memslots can have been allocated at this point.
+		 * activate_shadow_mmu won't actually need to allocate
+		 * rmaps, so it cannot fail.
+		 */
+		WARN_ON(activate_shadow_mmu(kvm));
 
 	node->track_write = kvm_mmu_pte_write;
 	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 297a911c018c..c6b21a916452 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -165,6 +165,6 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
-void activate_shadow_mmu(struct kvm *kvm);
+int activate_shadow_mmu(struct kvm *kvm);
 
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fc32a7dbe4c4..c72b35cbaef7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10842,11 +10842,24 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 	kvm_page_track_free_memslot(slot);
 }
 
-static int alloc_memslot_rmap(struct kvm_memory_slot *slot,
+static int alloc_memslot_rmap(struct kvm *kvm, struct kvm_memory_slot *slot,
 			      unsigned long npages)
 {
 	int i;
 
+	if (!kvm->arch.alloc_memslot_rmaps)
+		return 0;
+
+	/*
+	 * All rmaps for a memslot should be allocated either before
+	 * the memslot is installed (in which case no other threads
+	 * should have a pointer to it), or under the
+	 * slots_arch_lock. Avoid overwriting already allocated
+	 * rmaps.
+	 */
+	if (slot->arch.rmap[0])
+		return 0;
+
 	for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
 		int lpages;
 		int level = i + 1;
@@ -10868,7 +10881,40 @@ static int alloc_memslot_rmap(struct kvm *kvm, struct kvm_memory_slot *slot,
 	return -ENOMEM;
 }
 
-static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
+int alloc_memslots_rmaps(struct kvm *kvm, struct kvm_memslots *slots)
+{
+	struct kvm_memory_slot *slot;
+	int r = 0;
+
+	kvm_for_each_memslot(slot, slots) {
+		r = alloc_memslot_rmap(kvm, slot, slot->npages);
+		if (r)
+			break;
+	}
+	return r;
+}
+
+int alloc_all_memslots_rmaps(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	int r = 0;
+	int i;
+
+	mutex_lock(&kvm->slots_arch_lock);
+	kvm->arch.alloc_memslot_rmaps = true;
+
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		slots = __kvm_memslots(kvm, i);
+		r = alloc_memslots_rmaps(kvm, slots);
+		if (r)
+			break;
+	}
+	mutex_unlock(&kvm->slots_arch_lock);
+	return r;
+}
+
+static int kvm_alloc_memslot_metadata(struct kvm *kvm,
+				      struct kvm_memory_slot *slot,
 				      unsigned long npages)
 {
 	int i;
@@ -10881,7 +10927,7 @@ static int kvm_alloc_memslot_metadata(struct kvm *kvm,
 	 */
 	memset(&slot->arch, 0, sizeof(slot->arch));
 
-	r = alloc_memslot_rmap(slot, npages);
+	r = alloc_memslot_rmap(kvm, slot, npages);
 	if (r)
 		return r;
 
@@ -10954,7 +11000,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				   enum kvm_mr_change change)
 {
 	if (change == KVM_MR_CREATE || change == KVM_MR_MOVE)
-		return kvm_alloc_memslot_metadata(memslot,
+		return kvm_alloc_memslot_metadata(kvm, memslot,
 						  mem->memory_size >> PAGE_SHIFT);
 	return 0;
 }
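
The 0.2% figure can be sanity-checked with back-of-the-envelope
arithmetic: the dominant cost is one 8-byte rmap pointer per 4 KiB
guest page, and the 2 MiB and 1 GiB levels add roughly 1/512 and
1/512^2 of that again. A quick check (not from the patch; the 8-byte
entry size is an assumption about the rmap head):

#include <stdio.h>

int main(void)
{
	double per_page = 8.0 / 4096; /* one entry per 4 KiB page */
	double total = per_page * (1 + 1.0 / 512 + 1.0 / (512.0 * 512));

	printf("rmap overhead: %.3f%% of guest memory\n", total * 100);
	return 0;
}

This prints an overhead of about 0.196% of guest memory, matching the
0.2% quoted in the commit message.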