From patchwork Thu Dec 8 19:38:25 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13068876
Date: Thu, 8 Dec 2022 11:38:25 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-6-dmatlack@google.com>
Subject: [RFC PATCH 05/37] KVM: x86/mmu: Unify TDP MMU and Shadow MMU root refcounts
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
 Oliver Upton, Huacai Chen, Aleksandar Markovic, Anup Patel, Atish Patra,
 Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
 Andrew Morton, David Matlack, Anshuman Khandual, Nadav Amit,
 "Matthew Wilcox (Oracle)", Vlastimil Babka, "Liam R. Howlett",
 Suren Baghdasaryan, Peter Xu, xu xin, Arnd Bergmann, Yu Zhao, Colin Cross,
 Hugh Dickins, Ben Gardon, Mingwei Zhang, Krish Sadhukhan, Ricardo Koller,
 Jing Zhang, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org,
 kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org

Use the same field for refcounting roots in the TDP MMU and Shadow MMU.
The atomicity provided by refcount_t is overkill for the Shadow MMU,
since it holds the write-lock. But converging this field will enable a
future commit to more easily move struct kvm_mmu_page to common code.

Note, refcount_dec_and_test() returns true if the resulting refcount is
0. Hence the check in mmu_free_root_page() is inverted to check if the
shadow root refcount is 0.
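For illustration only (this sketch is not part of the patch): the old and
new checks in mmu_free_root_page() are equivalent. The legacy code
pre-decremented the plain integer and tested the result for zero, while
refcount_dec_and_test() decrements and returns true when the count
reaches zero:

	/* Before: pre-decrement the plain int, then test the result for zero. */
	else if (!--sp->root_count && sp->role.invalid)
		kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);

	/* After: refcount_dec_and_test() returns true when the count hits zero. */
	else if (refcount_dec_and_test(&sp->root_refcount) && sp->role.invalid)
		kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);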
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c          | 14 +++++++-------
 arch/x86/kvm/mmu/mmu_internal.h |  6 ++----
 arch/x86/kvm/mmu/mmutrace.h     |  2 +-
 arch/x86/kvm/mmu/tdp_mmu.c      |  8 ++++----
 arch/x86/kvm/mmu/tdp_mmu.h      |  2 +-
 5 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f7668a32721d..11cef930d5ed 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2498,14 +2498,14 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm,
 	if (sp->unsync)
 		kvm_unlink_unsync_page(kvm, sp);
 
-	if (!sp->root_count) {
+	if (!refcount_read(&sp->root_refcount)) {
 		/* Count self */
 		(*nr_zapped)++;
 
 		/*
 		 * Already invalid pages (previously active roots) are not on
 		 * the active page list.  See list_del() in the "else" case of
-		 * !sp->root_count.
+		 * !sp->root_refcount.
 		 */
 		if (sp->role.invalid)
 			list_add(&sp->link, invalid_list);
@@ -2515,7 +2515,7 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm,
 	} else {
 		/*
 		 * Remove the active root from the active page list, the root
-		 * will be explicitly freed when the root_count hits zero.
+		 * will be explicitly freed when the root_refcount hits zero.
 		 */
 		list_del(&sp->link);
 
@@ -2570,7 +2570,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 		kvm_flush_remote_tlbs(kvm);
 
 	list_for_each_entry_safe(sp, nsp, invalid_list, link) {
-		WARN_ON(!sp->role.invalid || sp->root_count);
+		WARN_ON(!sp->role.invalid || refcount_read(&sp->root_refcount));
 		kvm_mmu_free_shadow_page(sp);
 	}
 }
@@ -2593,7 +2593,7 @@ static unsigned long kvm_mmu_zap_oldest_mmu_pages(struct kvm *kvm,
 		 * Don't zap active root pages, the page itself can't be freed
 		 * and zapping it will just force vCPUs to realloc and reload.
 		 */
-		if (sp->root_count)
+		if (refcount_read(&sp->root_refcount))
 			continue;
 
 		unstable = __kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list,
@@ -3481,7 +3481,7 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa,
 
 	if (is_tdp_mmu_page(sp))
 		kvm_tdp_mmu_put_root(kvm, sp, false);
-	else if (!--sp->root_count && sp->role.invalid)
+	else if (refcount_dec_and_test(&sp->root_refcount) && sp->role.invalid)
 		kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);
 
 	*root_hpa = INVALID_PAGE;
@@ -3592,7 +3592,7 @@ static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, int quadrant,
 	WARN_ON_ONCE(role.arch.direct && role.arch.has_4_byte_gpte);
 
 	sp = kvm_mmu_get_shadow_page(vcpu, gfn, role);
-	++sp->root_count;
+	refcount_inc(&sp->root_refcount);
 
 	return __pa(sp->spt);
 }
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index c1a379fba24d..fd4990c8b0e9 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -87,10 +87,8 @@ struct kvm_mmu_page {
 	u64 *shadowed_translation;
 
 	/* Currently serving as active root */
-	union {
-		int root_count;
-		refcount_t tdp_mmu_root_count;
-	};
+	refcount_t root_refcount;
+
 	unsigned int unsync_children;
 	union {
 		struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */
diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h
index 6a4a43b90780..ffd10ce3eae3 100644
--- a/arch/x86/kvm/mmu/mmutrace.h
+++ b/arch/x86/kvm/mmu/mmutrace.h
@@ -19,7 +19,7 @@
 	__entry->mmu_valid_gen = sp->mmu_valid_gen;	\
 	__entry->gfn = sp->gfn;				\
 	__entry->role = sp->role.word;			\
-	__entry->root_count = sp->root_count;		\
+	__entry->root_count = refcount_read(&sp->root_refcount);	\
 	__entry->unsync = sp->unsync;
 
 #define KVM_MMU_PAGE_PRINTK() ({				\
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index fc0b87ceb1ea..34d674080170 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -130,7 +130,7 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
 {
 	kvm_lockdep_assert_mmu_lock_held(kvm, shared);
 
-	if (!refcount_dec_and_test(&root->tdp_mmu_root_count))
+	if (!refcount_dec_and_test(&root->root_refcount))
 		return;
 
 	/*
@@ -158,7 +158,7 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
 	 * zap the root because a root cannot go from invalid to valid.
 	 */
 	if (!kvm_tdp_root_mark_invalid(root)) {
-		refcount_set(&root->tdp_mmu_root_count, 1);
+		refcount_set(&root->root_refcount, 1);
 
 		/*
 		 * Zapping the root in a worker is not just "nice to have";
@@ -316,7 +316,7 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
 	root = tdp_mmu_alloc_sp(vcpu);
 	tdp_mmu_init_sp(root, NULL, 0, role);
 
-	refcount_set(&root->tdp_mmu_root_count, 1);
+	refcount_set(&root->root_refcount, 1);
 
 	spin_lock(&kvm->arch.tdp_mmu_pages_lock);
 	list_add_rcu(&root->link, &kvm->arch.tdp_mmu_roots);
@@ -883,7 +883,7 @@ static void tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
 	 * and lead to use-after-free as zapping a SPTE triggers "writeback" of
 	 * dirty accessed bits to the SPTE's associated struct page.
 	 */
-	WARN_ON_ONCE(!refcount_read(&root->tdp_mmu_root_count));
+	WARN_ON_ONCE(!refcount_read(&root->root_refcount));
 
 	kvm_lockdep_assert_mmu_lock_held(kvm, shared);
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 18d3719f14ea..19d3153051a3 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -14,7 +14,7 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
 
 __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
 {
-	return refcount_inc_not_zero(&root->tdp_mmu_root_count);
+	return refcount_inc_not_zero(&root->root_refcount);
 }
 
 void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,