From patchwork Thu Oct 10 18:23:13 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13831153
Date: Thu, 10 Oct 2024 11:23:13 -0700
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
References: <20241010182427.1434605-1-seanjc@google.com>
Message-ID: <20241010182427.1434605-12-seanjc@google.com>
Subject: [PATCH v13 11/85] KVM: Rename gfn_to_page_many_atomic() to kvm_prefetch_pages()
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
    Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank,
    Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
    linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, Alex Bennée, Yan Zhao, David Matlack,
    David Stevens,
    Andrew Jones

Rename gfn_to_page_many_atomic() to kvm_prefetch_pages() to try and
communicate its true purpose, as the "atomic" aspect is essentially a
side effect of the fact that x86 uses the API while holding mmu_lock.
E.g. even if mmu_lock weren't held, KVM wouldn't want to fault-in pages,
as the goal is to opportunistically grab surrounding pages that have
already been accessed and/or dirtied by the host, and to do so quickly.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c         | 2 +-
 arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
 include/linux/kvm_host.h       | 4 ++--
 virt/kvm/kvm_main.c            | 6 +++---
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 04228a7da69a..5fe45ab0e818 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2958,7 +2958,7 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
 	if (!slot)
 		return -1;
 
-	ret = gfn_to_page_many_atomic(slot, gfn, pages, end - start);
+	ret = kvm_prefetch_pages(slot, gfn, pages, end - start);
 	if (ret <= 0)
 		return -1;
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 36b2607280f0..143b7e9f26dc 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -549,7 +549,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	if (!slot)
 		return false;
 
-	if (gfn_to_page_many_atomic(slot, gfn, &page, 1) != 1)
+	if (kvm_prefetch_pages(slot, gfn, &page, 1) != 1)
 		return false;
 
 	mmu_set_spte(vcpu, slot, spte, pte_access, gfn, page_to_pfn(page), NULL);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ab4485b2bddc..56e7cde8c8b8 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1207,8 +1207,8 @@ void kvm_arch_flush_shadow_all(struct kvm *kvm);
 void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 				   struct kvm_memory_slot *slot);
 
-int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
-			    struct page **pages, int nr_pages);
+int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
+		       struct page **pages, int nr_pages);
 
 struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn);
 unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2032292df0b0..957b4a6c9254 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3053,8 +3053,8 @@ kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn);
 
-int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
-			    struct page **pages, int nr_pages)
+int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
+		       struct page **pages, int nr_pages)
 {
 	unsigned long addr;
 	gfn_t entry = 0;
@@ -3068,7 +3068,7 @@ int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
 	return get_user_pages_fast_only(addr, nr_pages, FOLL_WRITE, pages);
 }
-EXPORT_SYMBOL_GPL(gfn_to_page_many_atomic);
+EXPORT_SYMBOL_GPL(kvm_prefetch_pages);
 
 /*
  * Do not use this helper unless you are absolutely certain the gfn _must_ be