From patchwork Thu Jan 26 18:40:24 2023
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13117607
Date: Thu, 26 Jan 2023 10:40:24 -0800
In-Reply-To: <20230126184025.2294823-1-dmatlack@google.com>
References: <20230126184025.2294823-1-dmatlack@google.com>
Message-ID: <20230126184025.2294823-7-dmatlack@google.com>
Subject: [PATCH v2 6/7] KVM: Allow range-based TLB invalidation from common code
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, James Morse, Suzuki K Poulose, Oliver Upton,
    Zenghui Yu, Huacai Chen, Aleksandar Markovic, Anup Patel,
    Atish Patra, Paul Walmsley, Palmer Dabbelt, Albert Ou,
    Sean Christopherson, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu,
    linux-mips@vger.kernel.org, kvm@vger.kernel.org,
    kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
    David Matlack, Raghavendra Rao Ananta

Make kvm_flush_remote_tlbs_range() visible in common code and create a
default implementation that just invalidates the whole TLB.

This paves the way for several future features/cleanups:

 - Introduction of range-based TLBI on ARM.
 - Eliminating kvm_arch_flush_remote_tlbs_memslot().
 - Moving the KVM/x86 TDP MMU to common code.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/kvm/mmu/mmu.c          |  5 ++---
 arch/x86/kvm/mmu/mmu_internal.h |  2 --
 include/linux/kvm_host.h        |  9 +++++++++
 virt/kvm/kvm_main.c             | 13 +++++++++++++
 5 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 561d31d9a7da..6b2d6ced87ef 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1816,6 +1816,9 @@ static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 	return -ENOTSUPP;
 }
 
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
+
 #define kvm_arch_pmi_in_guest(vcpu)	\
 	((vcpu) && (vcpu)->arch.handling_intr_from_guest)
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 34b670e719af..284c812db63b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -247,7 +247,7 @@ static inline bool kvm_available_flush_tlb_with_range(void)
 	return kvm_x86_ops.tlb_remote_flush_with_range;
 }
 
-void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
 {
 	struct kvm_tlb_range range;
 	int ret = -EOPNOTSUPP;
@@ -258,8 +258,7 @@ void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
 	if (kvm_x86_ops.tlb_remote_flush_with_range)
 		ret = static_call(kvm_x86_tlb_remote_flush_with_range)(kvm, &range);
 
-	if (ret)
-		kvm_flush_remote_tlbs(kvm);
+	return ret;
 }
 
 static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 0dba4d8304a6..45de82d8e162 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -170,8 +170,6 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 				    struct kvm_memory_slot *slot, u64 gfn,
 				    int min_level);
 
-void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
-
 /* Flush the given page (huge or not) of guest memory. */
 static inline void kvm_flush_remote_tlbs_gfn(struct kvm *kvm, gfn_t gfn, int level)
 {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 9366fea46e30..276187f084c8 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1356,6 +1356,7 @@ int kvm_vcpu_yield_to(struct kvm_vcpu *target);
 void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool yield_to_kernel_mode);
 
 void kvm_flush_remote_tlbs(struct kvm *kvm);
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 pages);
 
 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
@@ -1484,6 +1485,14 @@ static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 }
 #endif
 
+#ifndef __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+static inline int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm,
+						   gfn_t gfn, u64 pages)
+{
+	return -EOPNOTSUPP;
+}
+#endif
+
 #ifdef __KVM_HAVE_ARCH_NONCOHERENT_DMA
 void kvm_arch_register_noncoherent_dma(struct kvm *kvm);
 void kvm_arch_unregister_noncoherent_dma(struct kvm *kvm);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index fefd3e3c8fe1..c9fc693a39d9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -368,6 +368,19 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 }
 EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
 
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 pages)
+{
+	if (!kvm_arch_flush_remote_tlbs_range(kvm, gfn, pages))
+		return;
+
+	/*
+	 * Fall back to flushing the entire TLB if architecture range-based
+	 * TLB invalidation is unsupported or can't be performed for whatever
+	 * reason.
+	 */
+	kvm_flush_remote_tlbs(kvm);
+}
+
 static void kvm_flush_shadow_all(struct kvm *kvm)
 {
 	kvm_arch_flush_shadow_all(kvm);
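
Note for readers (illustrative, not part of the patch): after this change an
architecture opts into range-based invalidation by defining
__KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE and implementing
kvm_arch_flush_remote_tlbs_range(); any non-zero return makes
kvm_flush_remote_tlbs_range() fall back to a full kvm_flush_remote_tlbs().
A minimal sketch of such an opt-in, where arch_has_range_tlbi() and
arch_tlbi_gfn_range() are hypothetical placeholders rather than functions
from this series:

	/* arch/<arch>/include/asm/kvm_host.h */
	#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
	int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);

	/* arch/<arch>/kvm/mmu.c (hypothetical) */
	int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
	{
		/* Hypothetical capability check for range-based invalidation. */
		if (!arch_has_range_tlbi())
			return -EOPNOTSUPP;

		/* Hypothetical primitive invalidating [start_gfn, start_gfn + pages). */
		arch_tlbi_gfn_range(kvm, start_gfn, pages);
		return 0;
	}

This mirrors the x86 implementation above, which returns -EOPNOTSUPP when
kvm_x86_ops.tlb_remote_flush_with_range is unavailable so that common code
performs the full flush instead.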