From patchwork Wed May 29 18:05:07 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: James Houghton <jthoughton@google.com>
X-Patchwork-Id: 13679431
Date: Wed, 29 May 2024 18:05:07 +0000
In-Reply-To: <20240529180510.2295118-1-jthoughton@google.com>
MIME-Version: 1.0
References: <20240529180510.2295118-1-jthoughton@google.com>
X-Mailer: git-send-email 2.45.1.288.g0e0cd299f1-goog
Message-ID: <20240529180510.2295118-5-jthoughton@google.com>
Subject: [PATCH v4 4/7] KVM: Move MMU lock acquisition for test/clear_young to architecture
From: James Houghton <jthoughton@google.com>
To: Andrew Morton, Paolo Bonzini
Cc: Albert Ou, Ankit Agrawal, Anup Patel, Atish Patra, Axel Rasmussen,
 Bibo Mao, Catalin Marinas, David Matlack, David Rientjes, Huacai Chen,
 James Houghton, James Morse, Jonathan Corbet, Marc Zyngier,
 Michael Ellerman, Nicholas Piggin, Oliver Upton, Palmer Dabbelt,
 Paul Walmsley, Raghavendra Rao Ananta, Ryan Roberts, Sean Christopherson,
 Shaoqin Huang, Shuah Khan, Suzuki K Poulose, Tianrui Zhao, Will Deacon,
 Yu Zhao, Zenghui Yu, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org,
 kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org, linux-mips@vger.kernel.org,
 linux-mm@kvack.org, linux-riscv@lists.infradead.org,
 linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev

To implement mmu_notifier_{test,clear}_young, the KVM memslot walker used
to take the MMU lock for us; now make the architectures take it
themselves. Don't relax locking for any architecture except powerpc e500:
its implementations of kvm_age_gfn and kvm_test_age_gfn simply return
false, so there is no need to grab the KVM MMU lock.
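Stated as code: kvm_age_gfn() and kvm_test_age_gfn() are now entered
without kvm->mmu_lock held, so each architecture wraps its own page-table
walk in the lock. A minimal sketch of the resulting shape, where
arch_mkold_range() is a hypothetical stand-in for the per-arch walker
(kvm_pgtable_stage2_test_clear_young() on arm64, kvm_mips_mkold_gpa_pt()
on MIPS, and so on), and the lock primitive is write_lock() or
spin_lock() depending on whether the architecture's mmu_lock is an
rwlock:

	/* Sketch only, not from this patch: the common mmu_notifier
	 * path no longer takes kvm->mmu_lock for us, so take it here
	 * around the walk. arch_mkold_range() is hypothetical.
	 */
	bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
	{
		bool young;

		write_lock(&kvm->mmu_lock);
		young = arch_mkold_range(kvm, range->start, range->end);
		write_unlock(&kvm->mmu_lock);

		return young;
	}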
Signed-off-by: James Houghton <jthoughton@google.com>
---
 arch/arm64/kvm/mmu.c      | 30 ++++++++++++++++++++++--------
 arch/loongarch/kvm/mmu.c  | 20 +++++++++++++++-----
 arch/mips/kvm/mmu.c       | 21 ++++++++++++++++-----
 arch/powerpc/kvm/book3s.c | 14 ++++++++++++--
 arch/riscv/kvm/mmu.c      | 26 ++++++++++++++++++++------
 arch/x86/kvm/mmu/mmu.c    |  8 ++++++++
 virt/kvm/kvm_main.c       |  4 ++--
 7 files changed, 95 insertions(+), 28 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 8bcab0cc3fe9..8337009dde77 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1773,25 +1773,39 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	u64 size = (range->end - range->start) << PAGE_SHIFT;
+	bool young = false;
+
+	write_lock(&kvm->mmu_lock);
 
 	if (!kvm->arch.mmu.pgt)
-		return false;
+		goto out;
 
-	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
-						   range->start << PAGE_SHIFT,
-						   size, true);
+	young = kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
+						    range->start << PAGE_SHIFT,
+						    size, true);
+
+out:
+	write_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	u64 size = (range->end - range->start) << PAGE_SHIFT;
+	bool young = false;
+
+	write_lock(&kvm->mmu_lock);
 
 	if (!kvm->arch.mmu.pgt)
-		return false;
+		goto out;
 
-	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
-						   range->start << PAGE_SHIFT,
-						   size, false);
+	young = kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
+						    range->start << PAGE_SHIFT,
+						    size, false);
+
+out:
+	write_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 phys_addr_t kvm_mmu_get_httbr(void)
diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
index 98883aa23ab8..5eb262bcf6b0 100644
--- a/arch/loongarch/kvm/mmu.c
+++ b/arch/loongarch/kvm/mmu.c
@@ -497,24 +497,34 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	kvm_ptw_ctx ctx;
+	bool young;
+
+	spin_lock(&kvm->mmu_lock);
 
 	ctx.flag = 0;
 	ctx.ops = kvm_mkold_pte;
 	kvm_ptw_prepare(kvm, &ctx);
 
-	return kvm_ptw_top(kvm->arch.pgd, range->start << PAGE_SHIFT,
+	young = kvm_ptw_top(kvm->arch.pgd, range->start << PAGE_SHIFT,
 			range->end << PAGE_SHIFT, &ctx);
+
+	spin_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	gpa_t gpa = range->start << PAGE_SHIFT;
-	kvm_pte_t *ptep = kvm_populate_gpa(kvm, NULL, gpa, 0);
+	kvm_pte_t *ptep;
+	bool young;
 
-	if (ptep && kvm_pte_present(NULL, ptep) && kvm_pte_young(*ptep))
-		return true;
+	spin_lock(&kvm->mmu_lock);
+	ptep = kvm_populate_gpa(kvm, NULL, gpa, 0);
 
-	return false;
+	young = ptep && kvm_pte_present(NULL, ptep) && kvm_pte_young(*ptep);
+
+	spin_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 /*
diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index c17157e700c0..db3b7cf22db1 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -446,17 +446,28 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	return kvm_mips_mkold_gpa_pt(kvm, range->start, range->end);
+	bool young;
+
+	spin_lock(&kvm->mmu_lock);
+	young = kvm_mips_mkold_gpa_pt(kvm, range->start, range->end);
+	spin_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	gpa_t gpa = range->start << PAGE_SHIFT;
-	pte_t *gpa_pte = kvm_mips_pte_for_gpa(kvm, NULL, gpa);
+	pte_t *gpa_pte;
+	bool young = false;
 
-	if (!gpa_pte)
-		return false;
-	return pte_young(*gpa_pte);
+	spin_lock(&kvm->mmu_lock);
+	gpa_pte = kvm_mips_pte_for_gpa(kvm, NULL, gpa);
+
+	if (gpa_pte)
+		young = pte_young(*gpa_pte);
+
+	spin_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 /**
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index ff6c38373957..f503ab9ac3a5 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -887,12 +887,22 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	return kvm->arch.kvm_ops->age_gfn(kvm, range);
+	bool young;
+
+	spin_lock(&kvm->mmu_lock);
+	young = kvm->arch.kvm_ops->age_gfn(kvm, range);
+	spin_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	return kvm->arch.kvm_ops->test_age_gfn(kvm, range);
+	bool young;
+
+	spin_lock(&kvm->mmu_lock);
+	young = kvm->arch.kvm_ops->test_age_gfn(kvm, range);
+	spin_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 int kvmppc_core_init_vm(struct kvm *kvm)
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index b63650f9b966..c78abe8041fb 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -555,17 +555,24 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	pte_t *ptep;
 	u32 ptep_level = 0;
 	u64 size = (range->end - range->start) << PAGE_SHIFT;
+	bool young = false;
+
+	spin_lock(&kvm->mmu_lock);
 
 	if (!kvm->arch.pgd)
-		return false;
+		goto out;
 
 	WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE);
 
 	if (!gstage_get_leaf_entry(kvm, range->start << PAGE_SHIFT,
 				   &ptep, &ptep_level))
-		return false;
+		goto out;
+
+	young = ptep_test_and_clear_young(NULL, 0, ptep);
 
-	return ptep_test_and_clear_young(NULL, 0, ptep);
+out:
+	spin_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
@@ -573,17 +580,24 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	pte_t *ptep;
 	u32 ptep_level = 0;
 	u64 size = (range->end - range->start) << PAGE_SHIFT;
+	bool young = false;
+
+	spin_lock(&kvm->mmu_lock);
 
 	if (!kvm->arch.pgd)
-		return false;
+		goto out;
 
 	WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE);
 
 	if (!gstage_get_leaf_entry(kvm, range->start << PAGE_SHIFT,
 				   &ptep, &ptep_level))
-		return false;
+		goto out;
+
+	young = pte_young(ptep_get(ptep));
 
-	return pte_young(ptep_get(ptep));
+out:
+	spin_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 662f62dfb2aa..6a2a557c2c31 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1630,12 +1630,16 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	bool young = false;
 
+	write_lock(&kvm->mmu_lock);
+
 	if (kvm_memslots_have_rmaps(kvm))
 		young = kvm_handle_gfn_range(kvm, range, kvm_age_rmap);
 
 	if (tdp_mmu_enabled)
 		young |= kvm_tdp_mmu_age_gfn_range(kvm, range);
 
+	write_unlock(&kvm->mmu_lock);
+
 	return young;
 }
 
@@ -1643,12 +1647,16 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	bool young = false;
 
+	write_lock(&kvm->mmu_lock);
+
 	if (kvm_memslots_have_rmaps(kvm))
 		young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmap);
 
 	if (tdp_mmu_enabled)
 		young |= kvm_tdp_mmu_test_age_gfn(kvm, range);
 
+	write_unlock(&kvm->mmu_lock);
+
 	return young;
 }
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d197b6725cb3..8d2d3acf18d8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -901,7 +901,7 @@ static int kvm_mmu_notifier_clear_young(struct mmu_notifier *mn,
 	 * more sophisticated heuristic later.
 	 */
 	return kvm_handle_hva_range_no_flush(mn, start, end,
-					     kvm_age_gfn, false);
+					     kvm_age_gfn, true);
 }
 
 static int kvm_mmu_notifier_test_young(struct mmu_notifier *mn,
@@ -911,7 +911,7 @@ static int kvm_mmu_notifier_test_young(struct mmu_notifier *mn,
 	trace_kvm_test_age_hva(address);
 
 	return kvm_handle_hva_range_no_flush(mn, address, address + 1,
-					     kvm_test_age_gfn, false);
+					     kvm_test_age_gfn, true);
 }
 
 static void kvm_mmu_notifier_release(struct mmu_notifier *mn,
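
As a footnote on the powerpc e500 exception called out in the commit
message: its kvm_age_gfn() and kvm_test_age_gfn() simply return false,
so there is no page-table walk for kvm->mmu_lock to protect and no
locking is added for it. Paraphrased (a sketch of the rationale, not the
literal e500 source):

	/* e500, paraphrased: aging is not implemented, so the handlers
	 * report "not young" without touching any page tables, and
	 * taking kvm->mmu_lock here would protect nothing.
	 */
	bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
	{
		return false;
	}

	bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
	{
		return false;
	}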