From patchwork Tue Jul 13 22:09:54 2021
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12375281
Date: Tue, 13 Jul 2021 22:09:54 +0000
In-Reply-To: <20210713220957.3493520-1-dmatlack@google.com>
Message-Id: <20210713220957.3493520-4-dmatlack@google.com>
References: <20210713220957.3493520-1-dmatlack@google.com>
Subject: [PATCH v3 3/6] KVM: x86/mmu: Make walk_shadow_page_lockless_{begin,end} interoperate with the TDP MMU
From: David Matlack
To: kvm@vger.kernel.org
Cc: Ben Gardon, Joerg Roedel, Jim Mattson, Wanpeng Li, Vitaly Kuznetsov, Sean Christopherson, Paolo Bonzini, Junaid Shahid, Andrew Jones, Matthew Wilcox, Yu Zhao, David Hildenbrand, Andrew Morton, David Matlack

Acquire the RCU read lock in walk_shadow_page_lockless_begin and release
it in walk_shadow_page_lockless_end when the TDP MMU is enabled. This
introduces no functional change on its own; the next commit relies on it
to make fast_page_fault interoperate with the TDP MMU.

Signed-off-by: David Matlack
Reviewed-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c     | 20 ++++++++++++++++----
 arch/x86/kvm/mmu/tdp_mmu.c |  6 ++----
 arch/x86/kvm/mmu/tdp_mmu.h | 10 ++++++++++
 3 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 45274436d3c0..e3d99853b962 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -686,6 +686,11 @@ static bool mmu_spte_age(u64 *sptep)
 
 static void walk_shadow_page_lockless_begin(struct kvm_vcpu *vcpu)
 {
+	if (is_tdp_mmu(vcpu->arch.mmu)) {
+		kvm_tdp_mmu_walk_lockless_begin();
+		return;
+	}
+
 	/*
 	 * Prevent page table teardown by making any free-er wait during
 	 * kvm_flush_remote_tlbs() IPI to all active vcpus.
@@ -701,6 +706,11 @@ static void walk_shadow_page_lockless_begin(struct kvm_vcpu *vcpu)
 
 static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
 {
+	if (is_tdp_mmu(vcpu->arch.mmu)) {
+		kvm_tdp_mmu_walk_lockless_end();
+		return;
+	}
+
 	/*
 	 * Make sure the write to vcpu->mode is not reordered in front of
 	 * reads to sptes. If it does, kvm_mmu_commit_zap_page() can see us
@@ -3612,6 +3622,8 @@ static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 /*
  * Return the level of the lowest level SPTE added to sptes.
  * That SPTE may be non-present.
+ *
+ * Must be called between walk_shadow_page_lockless_{begin,end}.
  */
 static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, int *root_level)
 {
@@ -3619,8 +3631,6 @@ static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, int *root_level
 	int leaf = -1;
 	u64 spte;
 
-	walk_shadow_page_lockless_begin(vcpu);
-
 	for (shadow_walk_init(&iterator, vcpu, addr),
 	     *root_level = iterator.level;
 	     shadow_walk_okay(&iterator);
@@ -3634,8 +3644,6 @@ static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, int *root_level
 			break;
 	}
 
-	walk_shadow_page_lockless_end(vcpu);
-
 	return leaf;
 }
 
@@ -3647,11 +3655,15 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 	int root, leaf, level;
 	bool reserved = false;
 
+	walk_shadow_page_lockless_begin(vcpu);
+
 	if (is_tdp_mmu(vcpu->arch.mmu))
 		leaf = kvm_tdp_mmu_get_walk(vcpu, addr, sptes, &root);
 	else
 		leaf = get_walk(vcpu, addr, sptes, &root);
 
+	walk_shadow_page_lockless_end(vcpu);
+
 	if (unlikely(leaf < 0)) {
 		*sptep = 0ull;
 		return reserved;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index caac4ddb46df..98ffd1ba556e 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1516,6 +1516,8 @@ bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
 /*
  * Return the level of the lowest level SPTE added to sptes.
  * That SPTE may be non-present.
+ *
+ * Must be called between kvm_tdp_mmu_walk_lockless_{begin,end}.
  */
 int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
			 int *root_level)
@@ -1527,14 +1529,10 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
 
 	*root_level = vcpu->arch.mmu->shadow_root_level;
 
-	rcu_read_lock();
-
 	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
 		leaf = iter.level;
 		sptes[leaf] = iter.old_spte;
 	}
 
-	rcu_read_unlock();
-
 	return leaf;
 }
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 1cae4485b3bc..93e1bf5089c4 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -77,6 +77,16 @@ bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
 				   struct kvm_memory_slot *slot, gfn_t gfn,
 				   int min_level);
 
+static inline void kvm_tdp_mmu_walk_lockless_begin(void)
+{
+	rcu_read_lock();
+}
+
+static inline void kvm_tdp_mmu_walk_lockless_end(void)
+{
+	rcu_read_unlock();
+}
+
 int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
			 int *root_level);
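
For readers skimming the series, below is a minimal, userspace-only sketch
of the dispatch this patch adds. It is not kernel code: struct vcpu_model,
do_lockless_walk() and the printf() stand-ins for rcu_read_lock()/
rcu_read_unlock() and for the local_irq_disable()/vcpu->mode handshake are
all hypothetical names invented for illustration; only the begin/end
bracketing mirrors the hunks above.

/*
 * Userspace model only: the locking primitives are replaced by printf()
 * so the control flow can be compiled and run outside the kernel.
 */
#include <stdbool.h>
#include <stdio.h>

struct vcpu_model {
	bool tdp_mmu_enabled;	/* stands in for is_tdp_mmu(vcpu->arch.mmu) */
};

static void tdp_walk_lockless_begin(void)
{
	printf("rcu_read_lock()\n");	/* TDP MMU path */
}

static void tdp_walk_lockless_end(void)
{
	printf("rcu_read_unlock()\n");
}

static void walk_lockless_begin(struct vcpu_model *vcpu)
{
	if (vcpu->tdp_mmu_enabled) {
		tdp_walk_lockless_begin();
		return;
	}
	/* Legacy MMU path: block page table teardown during the walk. */
	printf("local_irq_disable(); vcpu->mode = READING_SHADOW_PAGE_TABLES\n");
}

static void walk_lockless_end(struct vcpu_model *vcpu)
{
	if (vcpu->tdp_mmu_enabled) {
		tdp_walk_lockless_end();
		return;
	}
	printf("vcpu->mode = OUTSIDE_GUEST_MODE; local_irq_enable()\n");
}

/* Caller pattern after this patch: the walk itself is bracketed. */
static void do_lockless_walk(struct vcpu_model *vcpu)
{
	walk_lockless_begin(vcpu);
	printf("  ... walk the paging structures ...\n");
	walk_lockless_end(vcpu);
}

int main(void)
{
	struct vcpu_model legacy = { .tdp_mmu_enabled = false };
	struct vcpu_model tdp = { .tdp_mmu_enabled = true };

	do_lockless_walk(&legacy);
	do_lockless_walk(&tdp);
	return 0;
}

The point of hiding the TDP MMU check inside
walk_shadow_page_lockless_{begin,end} is that callers such as
get_mmio_spte (and, in the next patch, fast_page_fault) can bracket a
lockless walk without caring whether the protection comes from RCU or
from the IRQ-disable/vcpu->mode handshake.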