From patchwork Fri Aug 11 04:51:14 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13350160
Date: Fri, 11 Aug 2023 04:51:14 +0000
In-Reply-To: <20230811045127.3308641-1-rananta@google.com>
Message-ID: <20230811045127.3308641-2-rananta@google.com>
Subject: [PATCH v9 01/14] KVM: Rename kvm_arch_flush_remote_tlb() to kvm_arch_flush_remote_tlbs()
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Sean Christopherson, Huacai Chen, Zenghui Yu,
 Anup Patel, Atish Patra, Jing Zhang, Reiji Watanabe, Colton Lewis,
 Raghavendra Rao Anata, David Matlack, Fuad Tabba,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 kvm@vger.kernel.org, Gavin Shan, Philippe Mathieu-Daudé, Shaoqin Huang

From: David Matlack

Rename kvm_arch_flush_remote_tlb() and the associated macro
__KVM_HAVE_ARCH_FLUSH_REMOTE_TLB to kvm_arch_flush_remote_tlbs() and
__KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS respectively.

Making the name plural matches kvm_flush_remote_tlbs() and makes it more
clear that this function can affect more than one remote TLB.

No functional change intended.
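A minimal, self-contained sketch of the call chain this rename lines up
(added for illustration only, not part of the patch): the common
kvm_flush_remote_tlbs() calls the per-architecture
kvm_arch_flush_remote_tlbs() hook, so both ends of the chain now carry the
plural name. The struct layout, stat field, and return-value handling below
are simplified stand-ins, not the kernel's definitions.

#include <stdio.h>

struct kvm {
        unsigned long remote_tlb_flush; /* stand-in for kvm->stat.generic */
};

/*
 * Per-architecture hook, now spelled with the plural "tlbs".
 * Return 0 if the architecture flushed every remote TLB itself,
 * nonzero to ask common code to fall back to its generic flush.
 */
static int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
{
        (void)kvm;
        return 1;       /* "not handled here" in this sketch */
}

/* Common helper whose (already plural) name the hook now matches. */
static void kvm_flush_remote_tlbs(struct kvm *kvm)
{
        if (kvm_arch_flush_remote_tlbs(kvm)) {
                /* generic path: e.g. request a TLB flush on every vCPU */
                kvm->remote_tlb_flush++;
        }
}

int main(void)
{
        struct kvm vm = { 0 };

        kvm_flush_remote_tlbs(&vm);
        printf("generic flushes: %lu\n", vm.remote_tlb_flush);
        return 0;
}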
Signed-off-by: David Matlack
Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Gavin Shan
Reviewed-by: Philippe Mathieu-Daudé
Reviewed-by: Shaoqin Huang
Acked-by: Sean Christopherson
---
 arch/mips/include/asm/kvm_host.h | 4 ++--
 arch/mips/kvm/mips.c             | 2 +-
 arch/x86/include/asm/kvm_host.h  | 4 ++--
 include/linux/kvm_host.h         | 4 ++--
 virt/kvm/kvm_main.c              | 2 +-
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 04cedf9f88115..9b0ad8f3bf327 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -896,7 +896,7 @@ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
 
-#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLB
-int kvm_arch_flush_remote_tlb(struct kvm *kvm);
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
+int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
 
 #endif /* __MIPS_KVM_HOST_H__ */
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index aa5583a7b05be..4b7bc39a41736 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -981,7 +981,7 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 
 }
 
-int kvm_arch_flush_remote_tlb(struct kvm *kvm)
+int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
         kvm_mips_callbacks->prepare_flush_shadow(kvm);
         return 1;
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 28bd38303d704..a2d3cfc2eb75c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1794,8 +1794,8 @@ static inline struct kvm *kvm_arch_alloc_vm(void)
 #define __KVM_HAVE_ARCH_VM_FREE
 void kvm_arch_free_vm(struct kvm *kvm);
 
-#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLB
-static inline int kvm_arch_flush_remote_tlb(struct kvm *kvm)
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
+static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
         if (kvm_x86_ops.flush_remote_tlbs &&
             !static_call(kvm_x86_flush_remote_tlbs)(kvm))
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 9d3ac7720da9f..e3f968b38ae97 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1479,8 +1479,8 @@ static inline void kvm_arch_free_vm(struct kvm *kvm)
 }
 #endif
 
-#ifndef __KVM_HAVE_ARCH_FLUSH_REMOTE_TLB
-static inline int kvm_arch_flush_remote_tlb(struct kvm *kvm)
+#ifndef __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
+static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
         return -ENOTSUPP;
 }
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index dfbaafbe3a009..70e5479797ac3 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -361,7 +361,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
          * kvm_make_all_cpus_request() reads vcpu->mode. We reuse that
          * barrier here.
          */
-        if (!kvm_arch_flush_remote_tlb(kvm)
+        if (!kvm_arch_flush_remote_tlbs(kvm)
             || kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
                 ++kvm->stat.generic.remote_tlb_flush;
 }

From patchwork Fri Aug 11 04:51:15 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13350125
Date: Fri, 11 Aug 2023 04:51:15 +0000
In-Reply-To: <20230811045127.3308641-1-rananta@google.com>
Message-ID: <20230811045127.3308641-3-rananta@google.com>
Subject: [PATCH v9 02/14] KVM: Declare kvm_arch_flush_remote_tlbs() globally
From: Raghavendra Rao Ananta

There's no reason for the architectures to declare
kvm_arch_flush_remote_tlbs() in their own headers. Hence to avoid this
duplication, make the declaration global, leaving the architectures to
define only __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS as needed.
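A sketch of the declaration pattern this patch centralizes (illustration
only, not part of the patch): an architecture that implements the hook
defines just __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS, and the common header
either supplies the "not supported" stub or the single global declaration.
The -1 return and printf() below stand in for the kernel's -ENOTSUPP and
real flush path; all names outside the macro/function names quoted in the
patch are made up for the sketch.

#include <stdio.h>

struct kvm {
        int id;
};

/*
 * An architecture that implements the hook now only defines this macro in
 * its asm/kvm_host.h; the function declaration lives in the common header.
 */
#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS

#ifndef __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
/* Common-code default: "not supported" (the kernel returns -ENOTSUPP). */
static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
{
        (void)kvm;
        return -1;
}
#else
/* Single global declaration; the architecture supplies the definition. */
int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
#endif

/* Stand-in for an architecture's definition (e.g. arch/mips/kvm/mips.c). */
int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
{
        printf("arch-specific flush for VM %d\n", kvm->id);
        return 0;
}

int main(void)
{
        struct kvm vm = { .id = 1 };

        return kvm_arch_flush_remote_tlbs(&vm);
}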
Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Shaoqin Huang
---
 arch/mips/include/asm/kvm_host.h | 1 -
 include/linux/kvm_host.h         | 2 ++
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 9b0ad8f3bf327..54a85f1d4f2c8 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -897,6 +897,5 @@ static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
 
 #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
-int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
 
 #endif /* __MIPS_KVM_HOST_H__ */
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index e3f968b38ae97..ade5d4500c2ce 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1484,6 +1484,8 @@ static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
         return -ENOTSUPP;
 }
+#else
+int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
 #endif
 
 #ifdef __KVM_HAVE_ARCH_NONCOHERENT_DMA

From patchwork Fri Aug 11 04:51:16 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13350123
Date: Fri, 11 Aug 2023 04:51:16 +0000
In-Reply-To: <20230811045127.3308641-1-rananta@google.com>
Message-ID: <20230811045127.3308641-4-rananta@google.com>
Subject: [PATCH v9 03/14] KVM: arm64: Use kvm_arch_flush_remote_tlbs()
From: Raghavendra Rao Ananta

Stop depending on CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL and opt to
standardize on kvm_arch_flush_remote_tlbs(), since it avoids duplicating
the generic TLB stats across architectures that implement their own
remote TLB flush.

This adds an extra function call to the ARM64 kvm_flush_remote_tlbs()
path, but that is a small cost in comparison to flushing remote TLBs.

In addition, instead of incrementing only the remote_tlb_flush_requests
stat, the generic interface also increments the remote_tlb_flush stat.
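To make the stat-accounting point concrete, a small host-side model
(illustration only, not part of the patch) of how the generic wrapper
behaves once arm64 provides kvm_arch_flush_remote_tlbs(): the wrapper
bumps the request counter unconditionally and the flush counter when the
arch hook reports success. The field names and printf() are simplified
stand-ins for the kernel's kvm->stat.generic counters and the
kvm_call_hyp() path.

#include <stdio.h>

struct kvm_generic_stats {
        unsigned long remote_tlb_flush_requests;
        unsigned long remote_tlb_flush;
};

struct kvm {
        struct kvm_generic_stats stat;
};

/* arm64-style hook: always flushes by itself (think kvm_call_hyp(...)). */
static int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
{
        (void)kvm;
        return 0;       /* 0 == "flushed, no generic fallback needed" */
}

/* Generic wrapper: one extra call, but both stats stay in common code. */
static void kvm_flush_remote_tlbs(struct kvm *kvm)
{
        kvm->stat.remote_tlb_flush_requests++;
        if (!kvm_arch_flush_remote_tlbs(kvm))
                kvm->stat.remote_tlb_flush++;
}

int main(void)
{
        struct kvm vm = { { 0, 0 } };

        kvm_flush_remote_tlbs(&vm);
        printf("requests=%lu flushes=%lu\n",
               vm.stat.remote_tlb_flush_requests, vm.stat.remote_tlb_flush);
        return 0;
}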
Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Shaoqin Huang
Reviewed-by: Gavin Shan
---
 arch/arm64/include/asm/kvm_host.h | 2 ++
 arch/arm64/kvm/Kconfig            | 1 -
 arch/arm64/kvm/mmu.c              | 6 +++---
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 8b6096753740c..20f2ba149c70c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1111,6 +1111,8 @@ int __init kvm_set_ipa_limit(void);
 #define __KVM_HAVE_ARCH_VM_ALLOC
 struct kvm *kvm_arch_alloc_vm(void);
 
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
+
 static inline bool kvm_vm_is_protected(struct kvm *kvm)
 {
         return false;
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index f531da6b362e9..6b730fcfee379 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -25,7 +25,6 @@ menuconfig KVM
         select MMU_NOTIFIER
         select PREEMPT_NOTIFIERS
         select HAVE_KVM_CPU_RELAX_INTERCEPT
-        select HAVE_KVM_ARCH_TLB_FLUSH_ALL
         select KVM_MMIO
         select KVM_GENERIC_DIRTYLOG_READ_PROTECT
         select KVM_XFER_TO_GUEST_WORK
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 6db9ef288ec38..0ac721fa27f18 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -161,15 +161,15 @@ static bool memslot_is_logging(struct kvm_memory_slot *memslot)
 }
 
 /**
- * kvm_flush_remote_tlbs() - flush all VM TLB entries for v7/8
+ * kvm_arch_flush_remote_tlbs() - flush all VM TLB entries for v7/8
  * @kvm:	pointer to kvm structure.
  *
  * Interface to HYP function to flush all VM TLB entries
  */
-void kvm_flush_remote_tlbs(struct kvm *kvm)
+int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
-        ++kvm->stat.generic.remote_tlb_flush_requests;
         kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
+        return 0;
 }
 
 static bool kvm_is_device_pfn(unsigned long pfn)

From patchwork Fri Aug 11 04:51:17 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13350120
Date: Fri, 11 Aug 2023 04:51:17 +0000
In-Reply-To: <20230811045127.3308641-1-rananta@google.com>
Message-ID: <20230811045127.3308641-5-rananta@google.com>
Subject: [PATCH v9 04/14] KVM: Remove CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL
From: Raghavendra Rao Ananta

kvm_arch_flush_remote_tlbs() and CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL are
two mechanisms that solve the same problem: they allow
architecture-specific code to provide a non-IPI implementation of remote
TLB flushing.

Dropping CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL allows KVM to standardize
all architectures on kvm_arch_flush_remote_tlbs() instead of maintaining
two mechanisms.

Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Shaoqin Huang
Reviewed-by: Gavin Shan
Reviewed-by: Philippe Mathieu-Daudé
---
 virt/kvm/Kconfig    | 3 ---
 virt/kvm/kvm_main.c | 2 --
 2 files changed, 5 deletions(-)

diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index b74916de5183a..484d0873061ca 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -62,9 +62,6 @@ config HAVE_KVM_CPU_RELAX_INTERCEPT
 config KVM_VFIO
        bool
 
-config HAVE_KVM_ARCH_TLB_FLUSH_ALL
-       bool
-
 config HAVE_KVM_INVALID_WAKEUPS
        bool
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 70e5479797ac3..d6b0507861550 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -345,7 +345,6 @@ bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req)
 }
 EXPORT_SYMBOL_GPL(kvm_make_all_cpus_request);
 
-#ifndef CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL
 void kvm_flush_remote_tlbs(struct kvm *kvm)
 {
         ++kvm->stat.generic.remote_tlb_flush_requests;
@@ -366,7 +365,6 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
         ++kvm->stat.generic.remote_tlb_flush;
 }
 EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
-#endif
 
 static void kvm_flush_shadow_all(struct kvm *kvm)
 {

From patchwork Fri Aug 11 04:51:18 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13350131
Date: Fri, 11 Aug 2023 04:51:18 +0000
In-Reply-To: <20230811045127.3308641-1-rananta@google.com>
Message-ID: <20230811045127.3308641-6-rananta@google.com>
Subject: [PATCH v9 05/14] KVM: Allow range-based TLB invalidation from common code
From: Raghavendra Rao Ananta

From: David Matlack

Make kvm_flush_remote_tlbs_range() visible in common code and create a
default implementation that just invalidates the whole TLB.

This paves the way for several future features/cleanups:

 - Introduction of range-based TLBI on ARM.
 - Eliminating kvm_arch_flush_remote_tlbs_memslot()
 - Moving the KVM/x86 TDP MMU to common code.

No functional change intended.
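A compact model of the fallback shape introduced here (illustration only,
not part of the patch): the common kvm_flush_remote_tlbs_range() tries the
architecture's range flush and falls back to a full flush when that is
unsupported. The typedefs, the -1 "unsupported" return, and the printf()
calls are simplified stand-ins for gfn_t/u64, -EOPNOTSUPP, and the real
flush paths.

#include <stdio.h>

typedef unsigned long long gfn_t;
typedef unsigned long long u64;

struct kvm {
        int has_range_flush;    /* stand-in for a real capability check */
};

/* Arch hook: 0 on success, nonzero when range flushing is unsupported
 * (the kernel's default returns -EOPNOTSUPP). */
static int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn,
                                            u64 nr_pages)
{
        if (!kvm->has_range_flush)
                return -1;
        printf("range flush: gfn=%llu, pages=%llu\n", gfn, nr_pages);
        return 0;
}

static void kvm_flush_remote_tlbs(struct kvm *kvm)
{
        (void)kvm;
        printf("full TLB flush\n");
}

/* Common wrapper: try the precise flush, otherwise invalidate everything. */
static void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
{
        if (!kvm_arch_flush_remote_tlbs_range(kvm, gfn, nr_pages))
                return;

        kvm_flush_remote_tlbs(kvm);
}

int main(void)
{
        struct kvm vm = { .has_range_flush = 0 };

        kvm_flush_remote_tlbs_range(&vm, 0x1000, 512);  /* falls back */
        vm.has_range_flush = 1;
        kvm_flush_remote_tlbs_range(&vm, 0x1000, 512);  /* precise */
        return 0;
}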
Signed-off-by: David Matlack
Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Gavin Shan
Reviewed-by: Shaoqin Huang
Reviewed-by: Anup Patel
Acked-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/mmu/mmu.c          | 10 ++++------
 arch/x86/kvm/mmu/mmu_internal.h |  3 ---
 include/linux/kvm_host.h        | 11 +++++++++++
 virt/kvm/kvm_main.c             | 13 +++++++++++++
 5 files changed, 30 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a2d3cfc2eb75c..b547d17c58f63 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1804,6 +1804,8 @@ static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
         return -ENOTSUPP;
 }
 
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+
 #define kvm_arch_pmi_in_guest(vcpu) \
         ((vcpu) && (vcpu)->arch.handling_intr_from_guest)
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ec169f5c7dce2..00f7bda9202f2 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -278,16 +278,14 @@ static inline bool kvm_available_flush_remote_tlbs_range(void)
         return kvm_x86_ops.flush_remote_tlbs_range;
 }
 
-void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn,
-                                 gfn_t nr_pages)
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
 {
         int ret = -EOPNOTSUPP;
 
         if (kvm_x86_ops.flush_remote_tlbs_range)
-                ret = static_call(kvm_x86_flush_remote_tlbs_range)(kvm, start_gfn,
-                                                                   nr_pages);
-        if (ret)
-                kvm_flush_remote_tlbs(kvm);
+                ret = static_call(kvm_x86_flush_remote_tlbs_range)(kvm, gfn, nr_pages);
+
+        return ret;
 }
 
 static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index d39af5639ce97..86cb83bb34804 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -170,9 +170,6 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
                                     struct kvm_memory_slot *slot, u64 gfn,
                                     int min_level);
 
-void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn,
-                                 gfn_t nr_pages);
-
 /* Flush the given page (huge or not) of guest memory. */
 static inline void kvm_flush_remote_tlbs_gfn(struct kvm *kvm, gfn_t gfn, int level)
 {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ade5d4500c2ce..89d2614e4b7a6 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1359,6 +1359,7 @@ int kvm_vcpu_yield_to(struct kvm_vcpu *target);
 void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool yield_to_kernel_mode);
 
 void kvm_flush_remote_tlbs(struct kvm *kvm);
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages);
 
 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
@@ -1488,6 +1489,16 @@ static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
 #endif
 
+#ifndef __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+static inline int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm,
+                                                   gfn_t gfn, u64 nr_pages)
+{
+        return -EOPNOTSUPP;
+}
+#else
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages);
+#endif
+
 #ifdef __KVM_HAVE_ARCH_NONCOHERENT_DMA
 void kvm_arch_register_noncoherent_dma(struct kvm *kvm);
 void kvm_arch_unregister_noncoherent_dma(struct kvm *kvm);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d6b0507861550..26e91000f579d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -366,6 +366,19 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 }
 EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
 
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
+{
+        if (!kvm_arch_flush_remote_tlbs_range(kvm, gfn, nr_pages))
+                return;
+
+        /*
+         * Fall back to a flushing entire TLBs if the architecture range-based
+         * TLB invalidation is unsupported or can't be performed for whatever
+         * reason.
+         */
+        kvm_flush_remote_tlbs(kvm);
+}
+
 static void kvm_flush_shadow_all(struct kvm *kvm)
 {
         kvm_arch_flush_shadow_all(kvm);

From patchwork Fri Aug 11 04:51:19 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13350132
Date: Fri, 11 Aug 2023 04:51:19 +0000
In-Reply-To: <20230811045127.3308641-1-rananta@google.com>
Message-ID: <20230811045127.3308641-7-rananta@google.com>
Subject: [PATCH v9 06/14] KVM: Move kvm_arch_flush_remote_tlbs_memslot() to common code
From: Raghavendra Rao Ananta

From: David Matlack

Move kvm_arch_flush_remote_tlbs_memslot() to common code and drop
"arch_" from the name. kvm_arch_flush_remote_tlbs_memslot() is just a
range-based TLB invalidation where the range is defined by the memslot.
Now that kvm_flush_remote_tlbs_range() can be called from common code we
can just use that and drop a bunch of duplicate code from the arch
directories.

Note this adds a lockdep assertion for slots_lock being held when
calling kvm_flush_remote_tlbs_memslot(), which was previously only
asserted on x86. MIPS has calls to kvm_flush_remote_tlbs_memslot(), but
they all hold the slots_lock, so the lockdep assertion continues to hold
true.

Also drop the CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT ifdef gating
kvm_flush_remote_tlbs_memslot(), since it is no longer necessary.
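A sketch of what the moved helper amounts to (illustration only, not part
of the patch): a memslot flush is a range flush over
[base_gfn, base_gfn + npages), and callers are expected to hold
slots_lock. The pthread mutex merely stands in for the kernel's
slots_lock/lockdep_assert_held(); the other types and printf() are
simplified stand-ins as well.

#include <pthread.h>
#include <stdio.h>

typedef unsigned long long gfn_t;

struct kvm_memory_slot {
        gfn_t base_gfn;
        unsigned long npages;
};

struct kvm {
        pthread_mutex_t slots_lock;     /* stand-in for kvm->slots_lock */
};

static void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn,
                                        unsigned long nr_pages)
{
        (void)kvm;
        printf("flush gfn range [%llu, %llu)\n", gfn, gfn + nr_pages);
}

static void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
                                          const struct kvm_memory_slot *memslot)
{
        /* The kernel asserts lockdep_assert_held(&kvm->slots_lock) here. */
        kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages);
}

int main(void)
{
        struct kvm vm = { .slots_lock = PTHREAD_MUTEX_INITIALIZER };
        struct kvm_memory_slot slot = { .base_gfn = 0x100, .npages = 64 };

        pthread_mutex_lock(&vm.slots_lock);     /* callers hold slots_lock */
        kvm_flush_remote_tlbs_memslot(&vm, &slot);
        pthread_mutex_unlock(&vm.slots_lock);
        return 0;
}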
Signed-off-by: David Matlack
Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Gavin Shan
Reviewed-by: Shaoqin Huang
Acked-by: Anup Patel
Acked-by: Sean Christopherson
---
 arch/arm64/kvm/arm.c     |  6 ------
 arch/mips/kvm/mips.c     | 10 ++--------
 arch/riscv/kvm/mmu.c     |  6 ------
 arch/x86/kvm/mmu/mmu.c   | 16 +---------------
 arch/x86/kvm/x86.c       |  2 +-
 include/linux/kvm_host.h |  7 +++----
 virt/kvm/kvm_main.c      | 18 ++++++++++++++++--
 7 files changed, 23 insertions(+), 42 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index c2c14059f6a8c..ed7bef4d970b9 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1525,12 +1525,6 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-                                        const struct kvm_memory_slot *memslot)
-{
-        kvm_flush_remote_tlbs(kvm);
-}
-
 static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
                                         struct kvm_arm_device_addr *dev_addr)
 {
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 4b7bc39a41736..231ac052b506b 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -199,7 +199,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
         /* Flush slot from GPA */
         kvm_mips_flush_gpa_pt(kvm, slot->base_gfn,
                               slot->base_gfn + slot->npages - 1);
-        kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+        kvm_flush_remote_tlbs_memslot(kvm, slot);
         spin_unlock(&kvm->mmu_lock);
 }
 
@@ -235,7 +235,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
                 needs_flush = kvm_mips_mkclean_gpa_pt(kvm, new->base_gfn,
                                         new->base_gfn + new->npages - 1);
                 if (needs_flush)
-                        kvm_arch_flush_remote_tlbs_memslot(kvm, new);
+                        kvm_flush_remote_tlbs_memslot(kvm, new);
                 spin_unlock(&kvm->mmu_lock);
         }
 }
@@ -987,12 +987,6 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
         return 1;
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-                                        const struct kvm_memory_slot *memslot)
-{
-        kvm_flush_remote_tlbs(kvm);
-}
-
 int kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
 {
         int r;
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index f2eb47925806b..97e129620686c 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -406,12 +406,6 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 {
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-                                        const struct kvm_memory_slot *memslot)
-{
-        kvm_flush_remote_tlbs(kvm);
-}
-
 void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free)
 {
 }
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 00f7bda9202f2..43314ca606e2f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6668,7 +6668,7 @@ static void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm,
          */
         if (walk_slot_rmaps(kvm, slot, kvm_mmu_zap_collapsible_spte,
                             PG_LEVEL_4K, KVM_MAX_HUGEPAGE_LEVEL - 1, true))
-                kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+                kvm_flush_remote_tlbs_memslot(kvm, slot);
 }
 
 void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
@@ -6687,20 +6687,6 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
         }
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-                                        const struct kvm_memory_slot *memslot)
-{
-        /*
-         * All current use cases for flushing the TLBs for a specific memslot
-         * related to dirty logging, and many do the TLB flush out of mmu_lock.
-         * The interaction between the various operations on memslot must be
-         * serialized by slots_locks to ensure the TLB flush from one operation
-         * is observed by any other operation on the same memslot.
-         */
-        lockdep_assert_held(&kvm->slots_lock);
-        kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages);
-}
-
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
                                    const struct kvm_memory_slot *memslot)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a6b9bea62fb8a..faeb2e307b36a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12751,7 +12751,7 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
                  * See is_writable_pte() for more details (the case involving
                  * access-tracked SPTEs is particularly relevant).
                  */
-                kvm_arch_flush_remote_tlbs_memslot(kvm, new);
+                kvm_flush_remote_tlbs_memslot(kvm, new);
         }
 }
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 89d2614e4b7a6..394db2ce11e2e 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1360,6 +1360,8 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool yield_to_kernel_mode);
 
 void kvm_flush_remote_tlbs(struct kvm *kvm);
 void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages);
+void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
+                                   const struct kvm_memory_slot *memslot);
 
 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
@@ -1388,10 +1390,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
                                              unsigned long mask);
 void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot);
 
-#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-                                        const struct kvm_memory_slot *memslot);
-#else /* !CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT */
+#ifndef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log);
 int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log,
                       int *is_dirty, struct kvm_memory_slot **memslot);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 26e91000f579d..5d4d2e051aa09 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -379,6 +379,20 @@ void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
         kvm_flush_remote_tlbs(kvm);
 }
 
+void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
+                                   const struct kvm_memory_slot *memslot)
+{
+        /*
+         * All current use cases for flushing the TLBs for a specific memslot
+         * are related to dirty logging, and many do the TLB flush out of
+         * mmu_lock. The interaction between the various operations on memslot
+         * must be serialized by slots_locks to ensure the TLB flush from one
+         * operation is observed by any other operation on the same memslot.
+         */
+        lockdep_assert_held(&kvm->slots_lock);
+        kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages);
+}
+
 static void kvm_flush_shadow_all(struct kvm *kvm)
 {
         kvm_arch_flush_shadow_all(kvm);
@@ -2191,7 +2205,7 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
         }
 
         if (flush)
-                kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
+                kvm_flush_remote_tlbs_memslot(kvm, memslot);
 
         if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
                 return -EFAULT;
@@ -2308,7 +2322,7 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm,
         KVM_MMU_UNLOCK(kvm);
 
         if (flush)
-                kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
+                kvm_flush_remote_tlbs_memslot(kvm, memslot);
 
         return 0;
 }

From patchwork Fri Aug 11 04:51:20 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13350121
Date: Fri, 11 Aug 2023 04:51:20 +0000
In-Reply-To: <20230811045127.3308641-1-rananta@google.com>
Message-ID: <20230811045127.3308641-8-rananta@google.com>
Subject: [PATCH v9 07/14] arm64: tlb: Refactor the core flush algorithm of __flush_tlb_range
From: Raghavendra Rao Ananta

Currently, the core TLB flush functionality of __flush_tlb_range()
hardcodes vae1is (and variants) for the flush operation. In the upcoming
patches, the KVM code reuses this core algorithm with ipas2e1is for
range based TLB invalidations based on the IPA. Hence, extract the core
flush functionality of __flush_tlb_range() into its own macro that
accepts an 'op' argument to pass any TLBI operation, such that other
callers (KVM) can benefit.

No functional changes intended.
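A host-side model of the loop that __flush_tlb_range_op() wraps
(illustration only, not part of the patch): the real macro emits TLBI
instructions, so this sketch just prints the operations to show the
odd-page front flush and the scale/num walk. The helper functions below
approximate __TLBI_RANGE_NUM()/__TLBI_RANGE_PAGES() under the assumption
of a 5-bit NUM field; they are stand-ins, not the kernel macros.

#include <stdio.h>

#define PAGE_SHIFT 12

/* Stand-in for __TLBI_RANGE_NUM(): 5-bit NUM digit at this scale, minus one. */
static long tlbi_range_num(unsigned long pages, int scale)
{
        return (long)((pages >> (5 * scale + 1)) & 0x1f) - 1;
}

/* Stand-in for __TLBI_RANGE_PAGES(): pages covered by (num, scale). */
static unsigned long tlbi_range_pages(long num, int scale)
{
        return (unsigned long)(num + 1) << (5 * scale + 1);
}

static void flush_tlb_range_op(const char *op, unsigned long start,
                               unsigned long pages, unsigned long stride,
                               int have_range_ops)
{
        int scale = 0;
        long num;

        while (pages > 0) {
                if (!have_range_ops || pages % 2 == 1) {
                        printf("%s      addr=%#lx\n", op, start);
                        start += stride;
                        pages -= stride >> PAGE_SHIFT;
                        continue;
                }

                num = tlbi_range_num(pages, scale);
                if (num >= 0) {
                        printf("r%s scale=%d num=%ld addr=%#lx\n",
                               op, scale, num, start);
                        start += tlbi_range_pages(num, scale) << PAGE_SHIFT;
                        pages -= tlbi_range_pages(num, scale);
                }
                scale++;
        }
}

int main(void)
{
        /* 35 pages: one single-page op for the odd page, then one range op. */
        flush_tlb_range_op("vae1is", 0x400000UL, 35, 1UL << PAGE_SHIFT, 1);
        return 0;
}

With have_range_ops set to 0 the same call degenerates into 35 single-page
operations, which mirrors the non-range path the kernel keeps for CPUs
without TLB range support.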
Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Catalin Marinas
Reviewed-by: Gavin Shan
Reviewed-by: Shaoqin Huang
---
 arch/arm64/include/asm/tlbflush.h | 121 +++++++++++++++++-------------
 1 file changed, 68 insertions(+), 53 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 412a3b9a3c25d..b9475a852d5be 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -278,14 +278,74 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
  */
 #define MAX_TLBI_OPS	PTRS_PER_PTE
 
+/*
+ * __flush_tlb_range_op - Perform TLBI operation upon a range
+ *
+ * @op:	TLBI instruction that operates on a range (has 'r' prefix)
+ * @start:	The start address of the range
+ * @pages:	Range as the number of pages from 'start'
+ * @stride:	Flush granularity
+ * @asid:	The ASID of the task (0 for IPA instructions)
+ * @tlb_level:	Translation Table level hint, if known
+ * @tlbi_user:	If 'true', call an additional __tlbi_user()
+ *		(typically for user ASIDs). 'false' for IPA instructions
+ *
+ * When the CPU does not support TLB range operations, flush the TLB
+ * entries one by one at the granularity of 'stride'. If the TLB
+ * range ops are supported, then:
+ *
+ * 1. If 'pages' is odd, flush the first page through non-range
+ *    operations;
+ *
+ * 2. For remaining pages: the minimum range granularity is decided
+ *    by 'scale', so multiple range TLBI operations may be required.
+ *    Start from scale = 0, flush the corresponding number of pages
+ *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
+ *    until no pages left.
+ *
+ * Note that certain ranges can be represented by either num = 31 and
+ * scale or num = 0 and scale + 1. The loop below favours the latter
+ * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
+ */
+#define __flush_tlb_range_op(op, start, pages, stride,			\
+				asid, tlb_level, tlbi_user)		\
+do {									\
+	int num = 0;							\
+	int scale = 0;							\
+	unsigned long addr;						\
+									\
+	while (pages > 0) {						\
+		if (!system_supports_tlb_range() ||			\
+		    pages % 2 == 1) {					\
+			addr = __TLBI_VADDR(start, asid);		\
+			__tlbi_level(op, addr, tlb_level);		\
+			if (tlbi_user)					\
+				__tlbi_user_level(op, addr, tlb_level);	\
+			start += stride;				\
+			pages -= stride >> PAGE_SHIFT;			\
+			continue;					\
+		}							\
+									\
+		num = __TLBI_RANGE_NUM(pages, scale);			\
+		if (num >= 0) {						\
+			addr = __TLBI_VADDR_RANGE(start, asid, scale,	\
+						  num, tlb_level);	\
+			__tlbi(r##op, addr);				\
+			if (tlbi_user)					\
+				__tlbi_user(r##op, addr);		\
+			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;	\
+			pages -= __TLBI_RANGE_PAGES(num, scale);	\
+		}							\
+		scale++;						\
+	}								\
+} while (0)
+
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
 				     unsigned long start, unsigned long end,
 				     unsigned long stride, bool last_level,
 				     int tlb_level)
 {
-	int num = 0;
-	int scale = 0;
-	unsigned long asid, addr, pages;
+	unsigned long asid, pages;
 
 	start = round_down(start, stride);
 	end = round_up(end, stride);
@@ -307,56 +367,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	dsb(ishst);
 	asid = ASID(vma->vm_mm);
 
-	/*
-	 * When the CPU does not support TLB range operations, flush the TLB
-	 * entries one by one at the granularity of 'stride'. If the TLB
-	 * range ops are supported, then:
-	 *
-	 * 1. If 'pages' is odd, flush the first page through non-range
-	 *    operations;
-	 *
-	 * 2. For remaining pages: the minimum range granularity is decided
-	 *    by 'scale', so multiple range TLBI operations may be required.
- * Start from scale = 0, flush the corresponding number of pages - * ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it - * until no pages left. - * - * Note that certain ranges can be represented by either num = 31 and - * scale or num = 0 and scale + 1. The loop below favours the latter - * since num is limited to 30 by the __TLBI_RANGE_NUM() macro. - */ - while (pages > 0) { - if (!system_supports_tlb_range() || - pages % 2 == 1) { - addr = __TLBI_VADDR(start, asid); - if (last_level) { - __tlbi_level(vale1is, addr, tlb_level); - __tlbi_user_level(vale1is, addr, tlb_level); - } else { - __tlbi_level(vae1is, addr, tlb_level); - __tlbi_user_level(vae1is, addr, tlb_level); - } - start += stride; - pages -= stride >> PAGE_SHIFT; - continue; - } - - num = __TLBI_RANGE_NUM(pages, scale); - if (num >= 0) { - addr = __TLBI_VADDR_RANGE(start, asid, scale, - num, tlb_level); - if (last_level) { - __tlbi(rvale1is, addr); - __tlbi_user(rvale1is, addr); - } else { - __tlbi(rvae1is, addr); - __tlbi_user(rvae1is, addr); - } - start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; - pages -= __TLBI_RANGE_PAGES(num, scale); - } - scale++; - } + if (last_level) + __flush_tlb_range_op(vale1is, start, pages, stride, asid, tlb_level, true); + else + __flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true); + dsb(ish); } From patchwork Fri Aug 11 04:51:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 13350130 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1D687C001DE for ; Fri, 11 Aug 2023 04:53:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=vQb/tYnUrfTlbqxf4ZkXLZUX2Ernz2KIs9ttgcvlwEw=; b=Fd2P6pyjGHKKgdBXEb67fpz+2c IYDvy96XG88qyphSEcB8epjhUB1iFCnS8z1Kq8rt4jr1BUrfDlbvo4ROfNwEG50FFUJyAvBGGNPfA MOqsdSkjPurOXp+3A7d0hJE6gLAzdbLz+GJFYqlhiqUpKcCb0FGT5Cld9CVUDOAY7unGIh7cyjI50 41PeepwhUD44zfxw0p5Dxx9sA3eCFf9BB+AynXEYo4zXXOZPthA7pgUSXvFgj4uevsHR8pgwlLqPp /4UaKC8dKvLxlKClYlKB6EC7VbjMIb08+n4IS5a5uA+WcagO90vzHHU6aufrsgA77ZLBIZZYJ3ac7 6KlkBheg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qUK90-009Qep-19; Fri, 11 Aug 2023 04:53:02 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qUK7y-009PjK-1X for linux-riscv@bombadil.infradead.org; Fri, 11 Aug 2023 04:51:58 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:Cc:To:From:Subject: Message-ID:References:Mime-Version:In-Reply-To:Date:Sender:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description; bh=OIa9YyTqwhNrd4vOTw3RHcrRyaG5nOQ7ik48y43Y4K8=; 
b=GBOttilx83cFglISeq5Vcmnd59 MdOd6kgxb/SYZ8UT4AdCsFyCn3TXqj41HLp85Ig1opQrKCjuE6aaPuSBPP6giuvy4TyEQqk6jat++ bY/y13MilOiuxVY121ufZYW4YXCYf90G8vUZaF+IU+/LxVReK3EVERh2B1aiA5wxuFk1efP/1IUaV DGocw/POQ3Sm8/LhXl/l34CvIl4YM6yv3asd2FBNASuKXRO56arTyCXk8izOCAaVFUAq9d24RFu9r Ys6Z8zuNcsISFOSATR+C/Nmh+UbHEc//KXU++mu67N4GRGV5i2L8s9L4KXgK4E0Qb8B1+oilZ0iFq HhnSl2gw==; Received: from mail-oa1-x4a.google.com ([2001:4860:4864:20::4a]) by desiato.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qUK7t-007HYS-1d for linux-riscv@lists.infradead.org; Fri, 11 Aug 2023 04:51:57 +0000 Received: by mail-oa1-x4a.google.com with SMTP id 586e51a60fabf-1c0e84a8032so1459626fac.2 for ; Thu, 10 Aug 2023 21:51:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1691729504; x=1692334304; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=OIa9YyTqwhNrd4vOTw3RHcrRyaG5nOQ7ik48y43Y4K8=; b=svuT6WkKlP0CJcBY9yHu6f0ZIVXj+TBAwFC4vkyG4qte4KX/hQhTloOdeOotpgH1+T DXpyYVGUloxDx92v1VAZ665bF3PzFTvuogex9H+ez+4kOcSEs1NpPoKrr5j/QVd2rwl6 g2f9NRZm1/rbKvT/4R6+hin2d0V69Zt6Nx474DYbJlUKYFarceVSUjKNMNT4xaVAyiPS urZw2SxeG19jsWZAnALLEXkBB/2p9q8lAuwOxHwws4YfL9owdTZpPTLkoayBgxmdvQOR pgBNB6+4Kx/FfI/gTiUIZ6F5vL9QrNKLOP24qyMOVlYGOsPi0f18WfZOnb7/7uRUapaO 9pDw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691729504; x=1692334304; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=OIa9YyTqwhNrd4vOTw3RHcrRyaG5nOQ7ik48y43Y4K8=; b=e5e0biW/Cx8fcWWknNs0eMc2H7T8lR+bLdXTUMdvhg4nOKzh+JraUjmNUWBgzJT2vV EjB5YDU9ZUkv4pgQ501oQ+F+jeJbGQr++J17kR9++Tt3bVQT3NUHdehqoTcpWaLC1K8z MlfHcgCLlr8YVLjP9czmyh3ZcejMstgVyGqkQ2aklb/0T0BKXePawQva0zGPHfWH64a8 0xitq2ld02qXMGHRc1SRAW2AvCAcNw/El8DJKFcNMv9gYDZJRPq34Jzh8uYOHXszoYR6 deP6RxdA3q6PaXlDBd0Oz0ShnFUn6X+SembjalzEiY4QqE1SsWYcZ6fbDdEWh9EwgsDx 9pvA== X-Gm-Message-State: AOJu0YzqyaN5Ok6VNavpEsYPHPAiYySXbUPdSiWTr53YOfbhR9w7Hjgl IB671k9I2V7yHji5B10fmnXp95teZjP9 X-Google-Smtp-Source: AGHT+IESE8XY2qQ2H+qcMtIVXtwq5XJmfXR57n+CPxM8RC3XV+4+ZlJZGDmVohMH8Hvnfx7qNufXK5PRnARF X-Received: from rananta-linux.c.googlers.com ([fda3:e722:ac3:cc00:2b:ff92:c0a8:22b5]) (user=rananta job=sendgmr) by 2002:a05:6870:8c32:b0:1bb:7b48:32ab with SMTP id ec50-20020a0568708c3200b001bb7b4832abmr11929oab.7.1691729504052; Thu, 10 Aug 2023 21:51:44 -0700 (PDT) Date: Fri, 11 Aug 2023 04:51:21 +0000 In-Reply-To: <20230811045127.3308641-1-rananta@google.com> Mime-Version: 1.0 References: <20230811045127.3308641-1-rananta@google.com> X-Mailer: git-send-email 2.41.0.640.ga95def55d0-goog Message-ID: <20230811045127.3308641-9-rananta@google.com> Subject: [PATCH v9 08/14] arm64: tlb: Implement __flush_s2_tlb_range_op() From: Raghavendra Rao Ananta To: Oliver Upton , Marc Zyngier , James Morse , Suzuki K Poulose Cc: Paolo Bonzini , Sean Christopherson , Huacai Chen , Zenghui Yu , Anup Patel , Atish Patra , Jing Zhang , Reiji Watanabe , Colton Lewis , Raghavendra Rao Anata , David Matlack , Fuad Tabba , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Shaoqin Huang X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230811_055153_695057_537FE3E8 X-CRM114-Status: UNSURE ( 6.46 ) X-CRM114-Notice: Please train this message. 
X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Define __flush_s2_tlb_range_op(), as a wrapper over __flush_tlb_range_op(), for stage-2 specific range-based TLBI operations that doesn't necessarily have to deal with 'asid' and 'tlbi_user' arguments. Signed-off-by: Raghavendra Rao Ananta Reviewed-by: Shaoqin Huang --- arch/arm64/include/asm/tlbflush.h | 3 +++ 1 file changed, 3 insertions(+) diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h index b9475a852d5be..93f4b397f9a12 100644 --- a/arch/arm64/include/asm/tlbflush.h +++ b/arch/arm64/include/asm/tlbflush.h @@ -340,6 +340,9 @@ do { \ } \ } while (0) +#define __flush_s2_tlb_range_op(op, start, pages, stride, tlb_level) \ + __flush_tlb_range_op(op, start, pages, stride, 0, tlb_level, false) + static inline void __flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end, unsigned long stride, bool last_level, From patchwork Fri Aug 11 04:51:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 13350124 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id BDBE2C04A94 for ; Fri, 11 Aug 2023 04:52:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=nAzAr2SKtVspMjMDVt6QaOWUphLX6yWLgKvsplM1J3o=; b=1YvyfcWjMbFmQ8bqxgLfbXM67u MAcv+3J0Zxg+E9S/Gk7yzc6Z7PLDyKtWJOlOPlg0A6zqrLUj+JWC/t8gOF6xYvrF7/VFum8gCMV+K s/pLNpUrBVkYjz7h8vH3snrBUkTaHnF9FiVJCdRCSVVb3jTzSnariJDlICu8pFKygpBQLbVfiWa1y v6G+JgbtTjSqE5s22O2E8wO4dj6RIq/bOJYpEUuxCRpctZWMco0AKBxBT1G6dIqV9jgjXNn8HM+dy XZXUv2+yVn/53qkOTdvCMbPLkzr2c+blhJGZOAb5Mbe194rXhOx88R5n8uQilPctTkWZjvRfdNaF3 U5TCq0cg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qUK8J-009Q2b-0H; Fri, 11 Aug 2023 04:52:19 +0000 Received: from mail-yw1-x114a.google.com ([2607:f8b0:4864:20::114a]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qUK7m-009PZ4-1D for linux-riscv@lists.infradead.org; Fri, 11 Aug 2023 04:51:50 +0000 Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-5840ea40c59so20285567b3.2 for ; Thu, 10 Aug 2023 21:51:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1691729505; x=1692334305; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=Ka1OcRccHM0KzEQYloqEzMiFCDcvf9Ljd92aHTE8DkA=; b=aRejMGUPo1BwB9fnmACxBlgUBSmWF5n3S38cjBVTry8VjYJJYNSUmg4qGBnlANKX+U 
PvIHln/z8UVJnunOsobTz8CZeG2ihxqxiL8K4oH+vxhVjj/sdQciVqjozGbzeB3sK3PH +O8zR43aPIVBXvb5LGjZP7eQcXRPpigDS7SodVI1uWzRu3Iwn/qyjpfBtwUqunHv/wn/ FkNAGpsKGqxQZz1clEEtwVknSipLiNAveAXRA8n6WSIK+J6gVaBOt9dcZSm5gMSx6mvr XDOZO4szLDkzg45P74pHluyCNZSb61jMuOoSjNieX1vU1nq3x6fuN7BZz/31rx9tsXJt c+7w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691729505; x=1692334305; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=Ka1OcRccHM0KzEQYloqEzMiFCDcvf9Ljd92aHTE8DkA=; b=f9CMpOaNj8sodEDRHkuSOwgbsLm0JuBLteeFEkwGDYFam4wQ05DiQF2xYWF504SGCK m1diPWP0H09VY8oZ5egRdGMPxDBnvDHJJlmMykF1c8Eadp1wMTCKtJw0kgNIi5y8CTXw TbI6taoEFWnLbft4ThYvXAL25H1dm0CnAzSY7sTutPq5QmGm2MQdAxaA0h7nlgm6pUHZ lMGpNMVgexmGHDtv9XOwe3KqyrANK0uK/xTXeZxalPVRp8XjYzye6aGWu3dvMkD/yFlh EQubfyipKERbgr+58h7xnrI2HmKCSDVZdu9kXWZt4IAEQAFArhHNIOJho3NIflTsj9Pl Yzeg== X-Gm-Message-State: AOJu0YyVmpyyaDumP/S9liNz7iVFFfw3JqNNJfAmpVR6vy2z0JWWmwEo UIyRdVTYYZRynqd9fUWTjH+0TcUtBMko X-Google-Smtp-Source: AGHT+IHRB39pelr3lcw8RQc1XS9fdfPqIA5sv82Q1SkA+ocurHMiLq+0bqfDdSHLIalUMyU7Z133wXt6RV5Y X-Received: from rananta-linux.c.googlers.com ([fda3:e722:ac3:cc00:2b:ff92:c0a8:22b5]) (user=rananta job=sendgmr) by 2002:a81:4509:0:b0:586:b4e9:753 with SMTP id s9-20020a814509000000b00586b4e90753mr13588ywa.4.1691729505019; Thu, 10 Aug 2023 21:51:45 -0700 (PDT) Date: Fri, 11 Aug 2023 04:51:22 +0000 In-Reply-To: <20230811045127.3308641-1-rananta@google.com> Mime-Version: 1.0 References: <20230811045127.3308641-1-rananta@google.com> X-Mailer: git-send-email 2.41.0.640.ga95def55d0-goog Message-ID: <20230811045127.3308641-10-rananta@google.com> Subject: [PATCH v9 09/14] KVM: arm64: Implement __kvm_tlb_flush_vmid_range() From: Raghavendra Rao Ananta To: Oliver Upton , Marc Zyngier , James Morse , Suzuki K Poulose Cc: Paolo Bonzini , Sean Christopherson , Huacai Chen , Zenghui Yu , Anup Patel , Atish Patra , Jing Zhang , Reiji Watanabe , Colton Lewis , Raghavendra Rao Anata , David Matlack , Fuad Tabba , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Gavin Shan , Shaoqin Huang X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230810_215146_551430_5E26CCCB X-CRM114-Status: GOOD ( 12.72 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Define __kvm_tlb_flush_vmid_range() (for VHE and nVHE) to flush a range of stage-2 page-tables using IPA in one go. If the system supports FEAT_TLBIRANGE, the following patches would conveniently replace global TLBI such as vmalls12e1is in the map, unmap, and dirty-logging paths with ripas2e1is instead. 
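As a usage illustration only (this caller is hypothetical and not part of the patch): once the handler below is wired up, host code reaches it through the existing kvm_call_hyp() plumbing, passing the base IPA and a page count. A single range TLBI can encode at most (31 + 1) << (5*3 + 1) = 2097152 pages (8GB with 4KB pages), which is what the existing MAX_TLBI_RANGE_PAGES limit captures; splitting larger requests is left to the wrapper introduced in the next patch.

/* Hypothetical host-side caller, shown only to illustrate the new interface. */
static void example_flush_guest_ipa_range(struct kvm_s2_mmu *mmu,
					  phys_addr_t ipa, size_t size)
{
	/* VHE calls the hyp function directly; nVHE issues the new hypercall. */
	kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, ipa, size >> PAGE_SHIFT);
}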
Signed-off-by: Raghavendra Rao Ananta Reviewed-by: Gavin Shan Reviewed-by: Shaoqin Huang --- arch/arm64/include/asm/kvm_asm.h | 3 +++ arch/arm64/kvm/hyp/nvhe/hyp-main.c | 11 +++++++++++ arch/arm64/kvm/hyp/nvhe/tlb.c | 30 ++++++++++++++++++++++++++++++ arch/arm64/kvm/hyp/vhe/tlb.c | 28 ++++++++++++++++++++++++++++ 4 files changed, 72 insertions(+) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 7d170aaa2db41..2c27cb8cf442d 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -70,6 +70,7 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa, __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa_nsh, __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid, + __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_range, __KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context, __KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff, __KVM_HOST_SMCCC_FUNC___vgic_v3_read_vmcr, @@ -229,6 +230,8 @@ extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa, extern void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu, phys_addr_t ipa, int level); +extern void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu, + phys_addr_t start, unsigned long pages); extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu); extern void __kvm_timer_set_cntvoff(u64 cntvoff); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index a169c619db60b..857d9bc04fd48 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -135,6 +135,16 @@ static void handle___kvm_tlb_flush_vmid_ipa_nsh(struct kvm_cpu_context *host_ctx __kvm_tlb_flush_vmid_ipa_nsh(kern_hyp_va(mmu), ipa, level); } +static void +handle___kvm_tlb_flush_vmid_range(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1); + DECLARE_REG(phys_addr_t, start, host_ctxt, 2); + DECLARE_REG(unsigned long, pages, host_ctxt, 3); + + __kvm_tlb_flush_vmid_range(kern_hyp_va(mmu), start, pages); +} + static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1); @@ -327,6 +337,7 @@ static const hcall_t host_hcall[] = { HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa), HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa_nsh), HANDLE_FUNC(__kvm_tlb_flush_vmid), + HANDLE_FUNC(__kvm_tlb_flush_vmid_range), HANDLE_FUNC(__kvm_flush_cpu_context), HANDLE_FUNC(__kvm_timer_set_cntvoff), HANDLE_FUNC(__vgic_v3_read_vmcr), diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c index b9991bbd8e3fd..1b265713d6bed 100644 --- a/arch/arm64/kvm/hyp/nvhe/tlb.c +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c @@ -182,6 +182,36 @@ void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu, __tlb_switch_to_host(&cxt); } +void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu, + phys_addr_t start, unsigned long pages) +{ + struct tlb_inv_context cxt; + unsigned long stride; + + /* + * Since the range of addresses may not be mapped at + * the same level, assume the worst case as PAGE_SIZE + */ + stride = PAGE_SIZE; + start = round_down(start, stride); + + /* Switch to requested VMID */ + __tlb_switch_to_guest(mmu, &cxt, false); + + __flush_s2_tlb_range_op(ipas2e1is, start, pages, stride, 0); + + dsb(ish); + __tlbi(vmalle1is); + dsb(ish); + isb(); + + /* See the comment in __kvm_tlb_flush_vmid_ipa() */ + if (icache_is_vpipt()) + icache_inval_all_pou(); + + __tlb_switch_to_host(&cxt); +} + void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu) { struct tlb_inv_context cxt; diff --git 
a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c index e69da550cdc5b..46bd43f61d76f 100644 --- a/arch/arm64/kvm/hyp/vhe/tlb.c +++ b/arch/arm64/kvm/hyp/vhe/tlb.c @@ -143,6 +143,34 @@ void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu, __tlb_switch_to_host(&cxt); } +void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu, + phys_addr_t start, unsigned long pages) +{ + struct tlb_inv_context cxt; + unsigned long stride; + + /* + * Since the range of addresses may not be mapped at + * the same level, assume the worst case as PAGE_SIZE + */ + stride = PAGE_SIZE; + start = round_down(start, stride); + + dsb(ishst); + + /* Switch to requested VMID */ + __tlb_switch_to_guest(mmu, &cxt); + + __flush_s2_tlb_range_op(ipas2e1is, start, pages, stride, 0); + + dsb(ish); + __tlbi(vmalle1is); + dsb(ish); + isb(); + + __tlb_switch_to_host(&cxt); +} + void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu) { struct tlb_inv_context cxt; From patchwork Fri Aug 11 04:51:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 13350127 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 45C63C41513 for ; Fri, 11 Aug 2023 04:52:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=3AXiIGPI1u3s2nMJAFyK5/q0+69tRB7cM2MLQtMEhwg=; b=43WfSkBlNyJOwDJaMHXUcjB+TM enZP5QugTkmU+Men9k7gLcLKQ55RHzSjq7qhtwQIUVNnOpeXHwwMO9X9M6QHyBX+zhA1vbn69Lvjd EuDTtDZA84GUuj3p+q1Ra9j6W3rrx4aLtd2AKWweb5jU4RG6eBwya5Sk1Oopjd9PjfYnPxwweP/JV 2hkW/xNpuG7iVv9lIOsUYwchQuDLqXHBqZ5maCI17piruOsjapouOvm87jT95IrANwiTmTdtXI6+c TXqfBrc4FVPgISMEU9n4TDM5s9l0Kmb+Rml/+sEBQWff+pBHlI6Sz0QAVcEL3ZKkPNT3Ulvye0X9g oE1OLgVg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qUK8Z-009QFp-0W; Fri, 11 Aug 2023 04:52:35 +0000 Received: from mail-yw1-x114a.google.com ([2607:f8b0:4864:20::114a]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qUK7m-009PZn-2P for linux-riscv@lists.infradead.org; Fri, 11 Aug 2023 04:51:52 +0000 Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-583d1d0de65so20507187b3.0 for ; Thu, 10 Aug 2023 21:51:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1691729506; x=1692334306; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=CwKTqZRHBF6wuN1C9cAW2HapO35gWwZnaXscRj6lwnc=; b=siKzlnyeJYxNxz3Le8APQ8uLcQDtQqwiSQdt6bti4OnKCrnHioFgjEot+EwjAKZlZk P9katyZ2JtDsI8FPdIPMQYuckWNVQ5RfjwpRdQhAMFEwhgD7E0eUrp3A0uStx/N0UzhL 4Ib4FS/HKhPlrslYEhf4bRkLeti0q5O+S43TqWtGnA5mbcXMOQ0sOHExR4e18KudVMIM +gsKAXATZIeVV6zWjmXpe2nel8pvZAoV6Cdm7wSDFmb+FgKWoTaCWMpAZi1va3S+D+BS 
4KbaNzlRFGZu0d060Mbd+mqHAW0BvsFI1Oql8h2q/mQt8DVk9UFWyigyB13jSZ+L3pM7 Nm/w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691729506; x=1692334306; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=CwKTqZRHBF6wuN1C9cAW2HapO35gWwZnaXscRj6lwnc=; b=LF1OG1rvgQEGIe/TiPIWCpD4YYv+FoMxr2e7dCTTVeIdqZSnUeSrcmx3RIg3k6CUPa AfaRgCCwQoeUjW0DuewHumreASMkZTRULTVzeEWN0qZFMzgAsega65W5SJLCOW9wyYYR PXwEVArU5uD4It3mxi44WNCtT5FAHaJc74ppXNCwsVICqkGTeSHo1ePkhcDXrQSmhwt2 2vDEVx1nKuvalWEB2Z0NVvQhW6XtZafqsukp2RUIGAIV05/JmCllG2dCgnbh9TR1gLQm 2MQoxlscMHXYvEFMum/3kD3sPHw935KIinmhbJ+DpZd0e8RUlWbrADodsJ3VefiPAQFM UJaQ== X-Gm-Message-State: AOJu0YyGXHsrhrN98W4BiTRvHZSXqGaRIwH4icfgyiaHUSk6qFQnVV0E uqTQe1gLV5iiaEdqlSQ/r5ai95hIDpEl X-Google-Smtp-Source: AGHT+IEoLRdyr3lsTK4vKyN6f7vXGgpqz2mVAAYTzW+KAu5EcGudZd07SchN8aesdD/7sDPFXHJ3RHFbNwel X-Received: from rananta-linux.c.googlers.com ([fda3:e722:ac3:cc00:2b:ff92:c0a8:22b5]) (user=rananta job=sendgmr) by 2002:a25:8185:0:b0:d63:1d2b:624e with SMTP id p5-20020a258185000000b00d631d2b624emr9342ybk.0.1691729505983; Thu, 10 Aug 2023 21:51:45 -0700 (PDT) Date: Fri, 11 Aug 2023 04:51:23 +0000 In-Reply-To: <20230811045127.3308641-1-rananta@google.com> Mime-Version: 1.0 References: <20230811045127.3308641-1-rananta@google.com> X-Mailer: git-send-email 2.41.0.640.ga95def55d0-goog Message-ID: <20230811045127.3308641-11-rananta@google.com> Subject: [PATCH v9 10/14] KVM: arm64: Define kvm_tlb_flush_vmid_range() From: Raghavendra Rao Ananta To: Oliver Upton , Marc Zyngier , James Morse , Suzuki K Poulose Cc: Paolo Bonzini , Sean Christopherson , Huacai Chen , Zenghui Yu , Anup Patel , Atish Patra , Jing Zhang , Reiji Watanabe , Colton Lewis , Raghavendra Rao Anata , David Matlack , Fuad Tabba , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Gavin Shan , Shaoqin Huang X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230810_215146_888267_FD44A1DA X-CRM114-Status: UNSURE ( 9.25 ) X-CRM114-Notice: Please train this message. X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Implement the helper kvm_tlb_flush_vmid_range() that acts as a wrapper for range-based TLB invalidations. For the given VMID, use the range-based TLBI instructions to do the job or fallback to invalidating all the TLB entries. Signed-off-by: Raghavendra Rao Ananta Reviewed-by: Gavin Shan Reviewed-by: Shaoqin Huang --- arch/arm64/include/asm/kvm_pgtable.h | 10 ++++++++++ arch/arm64/kvm/hyp/pgtable.c | 20 ++++++++++++++++++++ 2 files changed, 30 insertions(+) diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h index 8294a9a7e566d..5e8b1ff07854b 100644 --- a/arch/arm64/include/asm/kvm_pgtable.h +++ b/arch/arm64/include/asm/kvm_pgtable.h @@ -754,4 +754,14 @@ enum kvm_pgtable_prot kvm_pgtable_stage2_pte_prot(kvm_pte_t pte); * kvm_pgtable_prot format. 
*/ enum kvm_pgtable_prot kvm_pgtable_hyp_pte_prot(kvm_pte_t pte); + +/** + * kvm_tlb_flush_vmid_range() - Invalidate/flush a range of TLB entries + * + * @mmu: Stage-2 KVM MMU struct + * @addr: The base Intermediate physical address from which to invalidate + * @size: Size of the range from the base to invalidate + */ +void kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu, + phys_addr_t addr, size_t size); #endif /* __ARM64_KVM_PGTABLE_H__ */ diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index aa740a974e024..5d14d5d5819a1 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -670,6 +670,26 @@ static bool stage2_has_fwb(struct kvm_pgtable *pgt) return !(pgt->flags & KVM_PGTABLE_S2_NOFWB); } +void kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu, + phys_addr_t addr, size_t size) +{ + unsigned long pages, inval_pages; + + if (!system_supports_tlb_range()) { + kvm_call_hyp(__kvm_tlb_flush_vmid, mmu); + return; + } + + pages = size >> PAGE_SHIFT; + while (pages > 0) { + inval_pages = min(pages, MAX_TLBI_RANGE_PAGES); + kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, addr, inval_pages); + + addr += inval_pages << PAGE_SHIFT; + pages -= inval_pages; + } +} + #define KVM_S2_MEMATTR(pgt, attr) PAGE_S2_MEMATTR(attr, stage2_has_fwb(pgt)) static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot prot, From patchwork Fri Aug 11 04:51:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 13350128 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A943AC0015E for ; Fri, 11 Aug 2023 04:52:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=Oa6B327hMuMFxZ5ivyvDDxc+jTcSXJYXrYAB7OAYqQk=; b=yme0c2rA/TYr1LctWOt66hzXQ7 n+ONlJdCE+0ZzoCCyYzpV5XUMyXmsAP6LJWqxhfHIxOrfkKxzscTeYB2n39hKdjciEocvlRWm8cDT jskYCD+HDJOjbrGDSRi7eLunX2jRL0y4zNjk9GNHjGhDAxOb922dagVgLlKrGq6QjuyUuO4njJRH5 Au1T+XQGSAwpc3kmzCDDU5z77KbDQCOkY8T3BLNqxnxIOBKrgFwkZ4CGD1lhVyNjNjMUdoXxKeTJ+ Q73H5lxKJQ/AV2hXi+eE4vIsStbXgmjF5oeaw138omvZJ+N2cXZBJMmID61xhFJRG2PzE3LL4XUfN CEl1MsXA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qUK8h-009QNM-0y; Fri, 11 Aug 2023 04:52:43 +0000 Received: from mail-yw1-x114a.google.com ([2607:f8b0:4864:20::114a]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qUK7o-009Pbh-24 for linux-riscv@lists.infradead.org; Fri, 11 Aug 2023 04:51:52 +0000 Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-58667d06607so19956477b3.1 for ; Thu, 10 Aug 2023 21:51:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1691729507; x=1692334307; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to 
:date:from:to:cc:subject:date:message-id:reply-to; bh=oLpltBl1pXuOSkpdQMGZFCQJKBJp7Al35vzbWbo573Q=; b=ABgFIwVolq/6ag26A1K+kK7FXdPo3peqsgbFXc8qiwvbhPI3bZRUdLezxZ8tQik92l T+iaoKDZaDOv6IfoUeiWPTwxhMI85ie6LyPMFxNj8XDmursgFf3en60dEzF4dqf3h+RK WWx7VeXuvcWe37vd84r2WsEbYnf4b0bMiMo48syBpgvZj3Mksg0RwjMn5w1l5qhHm9a0 rzrR9Pv0NaqoQijOdbcWJVVU6XHeWI2Lcm9tCQY3XKwKjOaLQmbXEehuzJJrIwVg3DQc UD1yUHkb20HLZXzkBKX91nLMJ4OeAFKhFEZgnnRUJwp6sPgDeXUVCPtkFFCTN0I7E6Kc DCjg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691729507; x=1692334307; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=oLpltBl1pXuOSkpdQMGZFCQJKBJp7Al35vzbWbo573Q=; b=K8z8cmhcluGxo0T9Fncc9bMXbnUMnNAgyRq9HmRc5ATDZXl4jifFZQxp3LtHjEKDWS 11TEsklV2ztp6WbDZL6P0Klai9UOfE/pI1xGccVuWtlsmHyQSDX4QybdhXzhkppZP66J sM+C44VX7YXmlogFXSxppAjE0QXnmbyegMV1je8L5NSsofxnr1sk7GW1Ob/3tfVXEkWe JQpGMObFyOJN5lFuyXxj1Jw61C+HW9Abd648lpegiWxlwF7C999tJWyaVHTbnwbnPDEZ FhSkz2QVyZpSastBpErDYPz2Kd8d8jt0TK45nptQJFqmcscrqA4D916boFIFTuknXgu3 aK9w== X-Gm-Message-State: AOJu0Yyh0envm/enjd2aYXUS/uZiWelAw6gJHS3e7ATSZlQM2EgNCgBg EVnfoSNUTZYLkZ2HUOFwJ1MkkiTOwGPy X-Google-Smtp-Source: AGHT+IEO0sAyUJ6vzJq2dvhSW/1XhcAzkAPBREyGCgte4h6OEGCasNix0f17CE/S4vT/rvUZ3pDeQps2/zyR X-Received: from rananta-linux.c.googlers.com ([fda3:e722:ac3:cc00:2b:ff92:c0a8:22b5]) (user=rananta job=sendgmr) by 2002:a81:ad51:0:b0:583:a8dc:1165 with SMTP id l17-20020a81ad51000000b00583a8dc1165mr13228ywk.10.1691729507118; Thu, 10 Aug 2023 21:51:47 -0700 (PDT) Date: Fri, 11 Aug 2023 04:51:24 +0000 In-Reply-To: <20230811045127.3308641-1-rananta@google.com> Mime-Version: 1.0 References: <20230811045127.3308641-1-rananta@google.com> X-Mailer: git-send-email 2.41.0.640.ga95def55d0-goog Message-ID: <20230811045127.3308641-12-rananta@google.com> Subject: [PATCH v9 11/14] KVM: arm64: Implement kvm_arch_flush_remote_tlbs_range() From: Raghavendra Rao Ananta To: Oliver Upton , Marc Zyngier , James Morse , Suzuki K Poulose Cc: Paolo Bonzini , Sean Christopherson , Huacai Chen , Zenghui Yu , Anup Patel , Atish Patra , Jing Zhang , Reiji Watanabe , Colton Lewis , Raghavendra Rao Anata , David Matlack , Fuad Tabba , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Gavin Shan , Shaoqin Huang X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230810_215148_708688_4DDFC2A5 X-CRM114-Status: UNSURE ( 7.00 ) X-CRM114-Notice: Please train this message. X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Implement kvm_arch_flush_remote_tlbs_range() for arm64 to invalidate the given range in the TLB. 
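For orientation, here is a simplified sketch of the common-code path this arch hook slots into. The generic helper comes from an earlier patch in the series that is not quoted in this mail, so treat the exact body as an approximation rather than the series' verbatim code:

/* Approximate shape of the generic helper (defined earlier in the series). */
void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
{
	/* Try the range-based arch hook first... */
	if (!kvm_arch_flush_remote_tlbs_range(kvm, gfn, nr_pages))
		return;

	/* ...and fall back to a full TLB flush if it is unsupported or fails. */
	kvm_flush_remote_tlbs(kvm);
}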
Signed-off-by: Raghavendra Rao Ananta Reviewed-by: Gavin Shan Reviewed-by: Shaoqin Huang --- arch/arm64/include/asm/kvm_host.h | 2 ++ arch/arm64/kvm/mmu.c | 8 ++++++++ 2 files changed, 10 insertions(+) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 20f2ba149c70c..8f2d99eaab036 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -1113,6 +1113,8 @@ struct kvm *kvm_arch_alloc_vm(void); #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS +#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE + static inline bool kvm_vm_is_protected(struct kvm *kvm) { return false; diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 0ac721fa27f18..702f8715f9fe7 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -172,6 +172,14 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm) return 0; } +int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, + gfn_t gfn, u64 nr_pages) +{ + kvm_tlb_flush_vmid_range(&kvm->arch.mmu, + gfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT); + return 0; +} + static bool kvm_is_device_pfn(unsigned long pfn) { return !pfn_is_map_memory(pfn); From patchwork Fri Aug 11 04:51:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 13350126 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B8108C04FE0 for ; Fri, 11 Aug 2023 04:52:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=FiLgX7yc29VrXN/xRPi9oIbpmK+mpert/txpJLHDM64=; b=jXo44QwU1y/iMUUDpMXnlt9yNT 4o4RlA+pNMYb9dymNS3BW0dmpN+zxw/QP6OISVGsas9qpG6KGlzPhG9mHsEOyyGxjmD82HNhEGMGF iN2UIscxij8FhfOUgNV1qKmyK7AvkUoGDyDxPSWftxIjKGs3r1cL3vYtYWC36CDs8/yAkqopUo5RF Bipas+JaRreCSh91u07S4TO7yPk7brA++6afVkuL50sWzkY+7gRhNsjWH9up36UZJkWr54PiI3s9B tZltC0ZIbLRVXAYft1cARya25VlKrbNlRlLom35ooHH7XLfn8VBhZDTdrHuXGr/U1I3MUXkeiJ/kP HmCkceWA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qUK8U-009QBo-1r; Fri, 11 Aug 2023 04:52:30 +0000 Received: from mail-yw1-x1149.google.com ([2607:f8b0:4864:20::1149]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qUK7p-009PcQ-0R for linux-riscv@lists.infradead.org; Fri, 11 Aug 2023 04:51:52 +0000 Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-58419550c3aso20136417b3.0 for ; Thu, 10 Aug 2023 21:51:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1691729508; x=1692334308; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=P6qPiqYiGdEy+ALMWmUClgOJtf7MJwHf5G0engyYFuk=; b=qrcaMEcSpq+ULmZt9JgV/iXS5+x9ZwEslbJLOWWZHpe9tEaJ28kJ0dsZTFh+wHaZVY z/+PWbhe4kPyyucCymXKpbVkBUvDdebb4moSiMcKue6hrkGuTyTFlQ2SS1ZDAmF+7o2K 
JIqhRUzm+nxJK7sCky9lN7UwOaQ9GwOZduGzIbm/TwSEScjePA4FFoYCus4bBKN6ke8D wf+zv3/75Th+zNlJ44hzGeBSVubCWPJjE5lDfkpmjoMYIWdYXie5RHK61lA9sT3xtR4j o2HtqwDGpSxcDAOl6CF9D9EeEbTPBni26ZJIeebSWG7wMNG4nNsXFqjiD04Lfm8rlpNh SVOA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691729508; x=1692334308; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=P6qPiqYiGdEy+ALMWmUClgOJtf7MJwHf5G0engyYFuk=; b=XXPItjwPSoiidrer9Ap5uTNgREaV8POTEcr1BJzvl8rCv1GRJuMZ9rKptB4DQJxnVa 2HR8tQRRbBgpkzdAWUi5tBBhQQBcXRgY997EcWDSXDqFIuR3zb00TkDSZnwgQ/yKyPM+ iq6+NBwyrh1YQ8Yj8DkWwqWJGNUZvusSLvLxfSu6PchfF3tncY3uPEoOZ8tcMDoHj5S+ iAaUNI2RLeoJhMsgd/0J9nqwWeHAUPvs8ejT8BbizjQfkI2mSeWmw6nlTNX3Ysnd/LC+ wEKUYSqBdBYps0SM8/wjpMWTFrPPcsJCeaztxAzDLxIEHmf3bZRbT5e2suL2bjFz7ya3 FAzQ== X-Gm-Message-State: AOJu0YzYWh5DGjjebv57zbasoBceG5ZulBfcN3bcBPHEl1qx89L4a143 TcyxFn8OShLk0/AdIiOadh7yt+ajcKLs X-Google-Smtp-Source: AGHT+IGXV3TrUe7VdEmoyRMybbrBwXMmeig9Hy139aV3xPrm/bppO8Uhrj80L5wM2TEVNSyoFL3mzf/IgjNd X-Received: from rananta-linux.c.googlers.com ([fda3:e722:ac3:cc00:2b:ff92:c0a8:22b5]) (user=rananta job=sendgmr) by 2002:a81:b71b:0:b0:589:bfc5:d80b with SMTP id v27-20020a81b71b000000b00589bfc5d80bmr14055ywh.2.1691729508012; Thu, 10 Aug 2023 21:51:48 -0700 (PDT) Date: Fri, 11 Aug 2023 04:51:25 +0000 In-Reply-To: <20230811045127.3308641-1-rananta@google.com> Mime-Version: 1.0 References: <20230811045127.3308641-1-rananta@google.com> X-Mailer: git-send-email 2.41.0.640.ga95def55d0-goog Message-ID: <20230811045127.3308641-13-rananta@google.com> Subject: [PATCH v9 12/14] KVM: arm64: Flush only the memslot after write-protect From: Raghavendra Rao Ananta To: Oliver Upton , Marc Zyngier , James Morse , Suzuki K Poulose Cc: Paolo Bonzini , Sean Christopherson , Huacai Chen , Zenghui Yu , Anup Patel , Atish Patra , Jing Zhang , Reiji Watanabe , Colton Lewis , Raghavendra Rao Anata , David Matlack , Fuad Tabba , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Gavin Shan , Shaoqin Huang X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230810_215149_222734_FEBE7DFD X-CRM114-Status: UNSURE ( 9.44 ) X-CRM114-Notice: Please train this message. X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org After write-protecting the region, currently KVM invalidates the entire TLB entries using kvm_flush_remote_tlbs(). Instead, scope the invalidation only to the targeted memslot. If supported, the architecture would use the range-based TLBI instructions to flush the memslot or else fallback to flushing all of the TLBs. 
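To make the scope of the one-line change below concrete, this is the call chain the memslot-scoped flush takes, reconstructed from the other patches in this series and shown here purely as an orientation comment:

/*
 * kvm_flush_remote_tlbs_memslot(kvm, memslot)
 *   -> kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages)
 *     -> kvm_arch_flush_remote_tlbs_range(kvm, gfn, nr_pages)        [patch 11/14]
 *       -> kvm_tlb_flush_vmid_range(&kvm->arch.mmu,
 *                                   gfn << PAGE_SHIFT,
 *                                   nr_pages << PAGE_SHIFT)          [patch 10/14]
 *         -> __kvm_tlb_flush_vmid_range() hypercall(s), in chunks of
 *            MAX_TLBI_RANGE_PAGES                                    [patch 09/14]
 *
 * Without FEAT_TLBIRANGE, kvm_tlb_flush_vmid_range() falls back to the
 * full-VMID flush (__kvm_tlb_flush_vmid), i.e. the same behaviour the
 * old kvm_flush_remote_tlbs() call provided.
 */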
Signed-off-by: Raghavendra Rao Ananta Reviewed-by: Gavin Shan Reviewed-by: Shaoqin Huang --- arch/arm64/kvm/mmu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 702f8715f9fe7..6f44896936b47 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1083,7 +1083,7 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot) write_lock(&kvm->mmu_lock); stage2_wp_range(&kvm->arch.mmu, start, end); write_unlock(&kvm->mmu_lock); - kvm_flush_remote_tlbs(kvm); + kvm_flush_remote_tlbs_memslot(kvm, memslot); } /** From patchwork Fri Aug 11 04:51:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 13350129 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id BE74DEB64DD for ; Fri, 11 Aug 2023 04:53:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=j89bj2JxUJWKHaU2MyxGpXGIdpQJ9gB6xbVAyGhNDqQ=; b=gBo+nyFEmMXQ3AwqHcwlhP00Lq zXhifKEg7zVInhJV+ND/hEBZUt25y1UZmquMYZXYUkheDiY1BtrNu80TicAfYEAtL3fk57f3cRXNk aa9Yj1QAUkOLFHYzZ+1p6u8W0xm3F5gZ3l3p4yKZ0W/I7dvwLNaoG38xpDZAiMEHrs3bEaWyzjbn8 3Ic38w8US1ngAG5ofRuFQiMdJVb48wINezo7tWZP7P1CqhcJgmTcV+5VMMZCKF7Nx9SYFAz3bhBJH 7RiMLsp0rQi9ngPXk+2ZvsKhH4HVls2Fsf+RmXocRQTvTT8nd77bLWragCoHTE6FKBg6ayL2YcZZs qKxATZ+g==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qUK8r-009QWt-1D; Fri, 11 Aug 2023 04:52:53 +0000 Received: from mail-yb1-xb4a.google.com ([2607:f8b0:4864:20::b4a]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qUK7q-009Pdh-20 for linux-riscv@lists.infradead.org; Fri, 11 Aug 2023 04:51:54 +0000 Received: by mail-yb1-xb4a.google.com with SMTP id 3f1490d57ef6-d4ddbcbbaacso1644365276.1 for ; Thu, 10 Aug 2023 21:51:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1691729509; x=1692334309; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=ceh3/nJpRP3LCUTMqceYFYfxrY0H2MNGDIBSz6bT86E=; b=lHqWfp61VXzWhun3liSCo1CSCBHiEyU47FTCxsnQq8PbPrZVJoSsWVZ4fEX98C2DkW 5tek7XQP9mJy7NeheveiRehPOgCFv2062IYZFMuFBJd/9R8dr0dmve4e4xKZzUd6v89o VeSHW0hjn2DbsqdsgVscCVTcn0H4yM4N/0YEqjJ196nf8SdcTQtyOZZaVghqfh6h34Re E0oHw2Y6uHdjAkl15YMtMg9Z6P8rco3KWSpUtvl1TY8Fm49+r+1TuIERD17QjTEoKmLD O3fI87xp18xcDlB5HLBTUsrTm5rIbVZpwNSq52s4Abs8Cp3fKn2aFj26qWXruJf9eih2 dIFw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691729509; x=1692334309; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=ceh3/nJpRP3LCUTMqceYFYfxrY0H2MNGDIBSz6bT86E=; 
b=Ypmo2JgUwHmyUSBH6pUSNlprcMa+UorHAUoNFBHkD7F6nhghQnDgijYSH4lz69INQS Zy6OrAfU8U7PS7yBC4xvR5GiKfHfEPYMYapLAZXdd5U6wc7eyoIqeKdVtdtvhQnnCsR+ TACk/mzsPJUftyeou4cBeibK6llz5TOFCvT9exHPHLKI+nZAik4/9BoMK3PhHb9xiTfF kSpcosZoZNTFu+7/LFxWLd9y3W76pJ4V1nIDybiPUzy06r1SPUl5J1+Kq+o5fvl0TkLy 2BO6jfUCu8ylN+VmOoCe4yhz+WiyH4awNX9nR4sUwmHg2NOKrYCTtEU9+O1Wvita09cN gb1Q== X-Gm-Message-State: AOJu0YwBoYfnhqloHIV6sNPe3BEfwEzrYGKcWtU3hO9PYFgCyWULoscP jZz7EY9m0AEgU/L5axk9lA84DYEn3Zad X-Google-Smtp-Source: AGHT+IGOJteWPd/cnLBRhVwGynPlFvhmV8TWzRZj/7clA/O6mEkppvIui53pAv4SPGvWg3HXNvX6L5O7yhJy X-Received: from rananta-linux.c.googlers.com ([fda3:e722:ac3:cc00:2b:ff92:c0a8:22b5]) (user=rananta job=sendgmr) by 2002:a25:d814:0:b0:d48:c04:f256 with SMTP id p20-20020a25d814000000b00d480c04f256mr8024ybg.11.1691729509090; Thu, 10 Aug 2023 21:51:49 -0700 (PDT) Date: Fri, 11 Aug 2023 04:51:26 +0000 In-Reply-To: <20230811045127.3308641-1-rananta@google.com> Mime-Version: 1.0 References: <20230811045127.3308641-1-rananta@google.com> X-Mailer: git-send-email 2.41.0.640.ga95def55d0-goog Message-ID: <20230811045127.3308641-14-rananta@google.com> Subject: [PATCH v9 13/14] KVM: arm64: Invalidate the table entries upon a range From: Raghavendra Rao Ananta To: Oliver Upton , Marc Zyngier , James Morse , Suzuki K Poulose Cc: Paolo Bonzini , Sean Christopherson , Huacai Chen , Zenghui Yu , Anup Patel , Atish Patra , Jing Zhang , Reiji Watanabe , Colton Lewis , Raghavendra Rao Anata , David Matlack , Fuad Tabba , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Gavin Shan , Shaoqin Huang X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230810_215150_736945_392B6F86 X-CRM114-Status: UNSURE ( 8.95 ) X-CRM114-Notice: Please train this message. X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Currently, during the operations such as a hugepage collapse, KVM would flush the entire VM's context using 'vmalls12e1is' TLBI operation. Specifically, if the VM is faulting on many hugepages (say after dirty-logging), it creates a performance penalty for the guest whose pages have already been faulted earlier as they would have to refill their TLBs again. Instead, leverage kvm_tlb_flush_vmid_range() for table entries. If the system supports it, only the required range will be flushed. Else, it'll fallback to the previous mechanism. Signed-off-by: Raghavendra Rao Ananta Reviewed-by: Gavin Shan Reviewed-by: Shaoqin Huang --- arch/arm64/kvm/hyp/pgtable.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index 5d14d5d5819a1..5ef098af17362 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -806,7 +806,8 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx, * evicted pte value (if any). 
*/ if (kvm_pte_table(ctx->old, ctx->level)) - kvm_call_hyp(__kvm_tlb_flush_vmid, mmu); + kvm_tlb_flush_vmid_range(mmu, ctx->addr, + kvm_granule_size(ctx->level)); else if (kvm_pte_valid(ctx->old)) kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level); From patchwork Fri Aug 11 04:51:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 13350133 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 79A30C001DE for ; Fri, 11 Aug 2023 04:53:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=lfM8E1z/HOcm5F3hharerqBh7pZQfzNgIqhlIFmI/0k=; b=jdjgo/deuCZMBlGAdsh3vg/TXw 8+RwOm03jr+F2oxPh3RH8D8LhvJ1/WrV3VUesFjTg5iappsMmvszJiPlyjpkxL/LGCf1d7uuRywR+ ORSQ6zzSCDjXt8/AesyHrOaAlO5kSNbtXmBvu8Sv1TMBJG2n982hDq6qyt14SsGRr8PlSpQN4JCvB oJrGUEGoHgRwUfv7i92KZabCIKx6pdkIw14L/vYdl1HOOS3Ih7Rfl6Q6CrRLuXu9Edy6Cuua/0QZ3 qMR/cK7RTZJ6GhBXryUuxDt10xjvy3cUujd+g2UXcSvjZCAKEnq0FNAh2zdLQyTiSZ2A4Kb2cWj8d oZmRRG/w==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qUK9A-009QoF-1p; Fri, 11 Aug 2023 04:53:12 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qUK80-009Pl5-16 for linux-riscv@bombadil.infradead.org; Fri, 11 Aug 2023 04:52:00 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:Cc:To:From:Subject: Message-ID:References:Mime-Version:In-Reply-To:Date:Sender:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description; bh=G001bKqgoFEW2hLjB7/836JCEIDwO1wziDEr7SkyYCE=; b=jt4myudSgoyWrKvwh9QQFCgKXU VjmpBdwhUt6gu2u9OhHh+8tH28KeCx1Hm0XYivuC0//xEj8h0rXNM46wm2w//19ChevGS3xyVuR/4 BuK3upVYnOmG91RrWzy53768eaU9QGLuYD/22dw0OMqWokTRVPAycYvlq5aOSm/2TIozj+jrQWa/T I+5kcC6ycGvM3tDkT/rRftZn4o4Lp6ST1+KHZouaYEVIQedVGd0C+7tNqC2kKxi0CFgQtPeYdtP8F OostgwUJMo9KnffkozH3l9jroElsiZs8DCfaP2FQy6pyiwD16Yus9pgo5IQSqE32F5OZAU+3a8Ukv 82a3iTLg==; Received: from mail-yb1-xb4a.google.com ([2607:f8b0:4864:20::b4a]) by desiato.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qUK7t-007HYZ-2w for linux-riscv@lists.infradead.org; Fri, 11 Aug 2023 04:51:59 +0000 Received: by mail-yb1-xb4a.google.com with SMTP id 3f1490d57ef6-d0d27cd9db9so3197971276.0 for ; Thu, 10 Aug 2023 21:51:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1691729510; x=1692334310; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=G001bKqgoFEW2hLjB7/836JCEIDwO1wziDEr7SkyYCE=; b=W3WTn+szTpGRYJ4Nrsk4rxOWrftcekNscs22xwqlhuJB93Ar0BfqRGdEWyk/Q+jqhP 
ri5qHyFNgBLpKKH06LH+HsWb1MpZhFmzfXxeF/ORAT07Zun/zgnJLSX6I3leqAoY8QXR Cn/et64Ad2ndHQvQUM0T7mioUm2x1aprIMcTE/Orm1a2GcZucR8dO9nfLrJGDH4DDtBH Zqv+GVr8tvgyxrI1x+huGGPPptiOeQdCMsGwnp+KGtkceCK4aB9KNjMvLKxJMtaqApLO 0pB9A/SZQYJ4zNbgL0iGKFGckcIBEfW0AiexbMyF7GNuqQ22QUyKuY/29OVGxcSI10vu hGpg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691729510; x=1692334310; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=G001bKqgoFEW2hLjB7/836JCEIDwO1wziDEr7SkyYCE=; b=YoWnD0/DkX/LvBOLYFc5xkurN7JELLKUFL+PPSSnZN7b6ZP90XINUzt5nqsHFhcQkl DwQWLeRXJ+pWLWZhHJEsvQ+hwC2zKGIWkpHnX2O7vN10F1Vz4RfxWiWltvu/RrINwyUr +fbV1asINMZfm4VC7O0PXeWNjmzWDurTbkusemlyLVSGktE7c7eE9VeOwuLZu4AO5P0P 1zslT2cARpM3EN8sURHEapBK3CtYjS5Cyuj9unhvCQKZFIw7qyzyfB90Ad+sG2HkWq4M AaKsNpSQ1PCHgbOndEaBtgQu3WuJma+AunmW22oeuYTazk46eB2UBKwgqovgr5VwFNDX 6c3w== X-Gm-Message-State: AOJu0YxYH5qdWkCAchiPhI5xEKF5KqJmEQVGsx+G+CaNc/RKP0YY39xb DclIq+UBn5XSUiGVrCox4BRyV/lIKp0+ X-Google-Smtp-Source: AGHT+IGp8zLlvGKgsKPT+euh9i0KtpqBPibbJyYiSM4xY/zJavK9m1k0FyNfqcnUQJamiQjlUlH81wBUvAac X-Received: from rananta-linux.c.googlers.com ([fda3:e722:ac3:cc00:2b:ff92:c0a8:22b5]) (user=rananta job=sendgmr) by 2002:a05:6902:91b:b0:d15:53b5:509f with SMTP id bu27-20020a056902091b00b00d1553b5509fmr80327ybb.2.1691729510277; Thu, 10 Aug 2023 21:51:50 -0700 (PDT) Date: Fri, 11 Aug 2023 04:51:27 +0000 In-Reply-To: <20230811045127.3308641-1-rananta@google.com> Mime-Version: 1.0 References: <20230811045127.3308641-1-rananta@google.com> X-Mailer: git-send-email 2.41.0.640.ga95def55d0-goog Message-ID: <20230811045127.3308641-15-rananta@google.com> Subject: [PATCH v9 14/14] KVM: arm64: Use TLBI range-based instructions for unmap From: Raghavendra Rao Ananta To: Oliver Upton , Marc Zyngier , James Morse , Suzuki K Poulose Cc: Paolo Bonzini , Sean Christopherson , Huacai Chen , Zenghui Yu , Anup Patel , Atish Patra , Jing Zhang , Reiji Watanabe , Colton Lewis , Raghavendra Rao Anata , David Matlack , Fuad Tabba , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Shaoqin Huang X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230811_055154_034957_CBEAE351 X-CRM114-Status: GOOD ( 14.87 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org The current implementation of the stage-2 unmap walker traverses the given range and, as a part of break-before-make, performs TLB invalidations with a DSB for every PTE. A multitude of this combination could cause a performance bottleneck on some systems. Hence, if the system supports FEAT_TLBIRANGE, defer the TLB invalidations until the entire walk is finished, and then use range-based instructions to invalidate the TLBs in one go. Condition deferred TLB invalidation on the system supporting FWB, as the optimization is entirely pointless when the unmap walker needs to perform CMOs. Rename stage2_put_pte() to stage2_unmap_put_pte() as the function now serves the stage-2 unmap walker specifically, rather than acting generic. 
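A worked example of what the deferral buys, assuming 4KB pages and a system with both FEAT_TLBIRANGE and FWB (the numbers are illustrative, derived from the range encoding introduced earlier in the series):

/*
 * Unmapping a 2MB region mapped with 4KB PTEs, i.e. 512 entries:
 *
 *   Before: each cleared PTE triggers __kvm_tlb_flush_vmid_ipa(), so the
 *   walk issues 512 separate TLBI + DSB sequences.
 *
 *   After:  the walker only clears the 512 PTEs; kvm_pgtable_stage2_unmap()
 *   then calls kvm_tlb_flush_vmid_range(pgt->mmu, addr, SZ_2M) once, which
 *   __flush_tlb_range_op() encodes as a single TLBI RIPAS2E1IS with
 *   scale = 1, num = 7, covering (7 + 1) * 2^(5*1 + 1) = 512 pages.
 */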
Signed-off-by: Raghavendra Rao Ananta Reviewed-by: Shaoqin Huang --- arch/arm64/kvm/hyp/pgtable.c | 40 +++++++++++++++++++++++++++++------- 1 file changed, 33 insertions(+), 7 deletions(-) diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index 5ef098af17362..eaaae76481fa9 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -831,16 +831,36 @@ static void stage2_make_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t n smp_store_release(ctx->ptep, new); } -static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu, - struct kvm_pgtable_mm_ops *mm_ops) +static bool stage2_unmap_defer_tlb_flush(struct kvm_pgtable *pgt) { /* - * Clear the existing PTE, and perform break-before-make with - * TLB maintenance if it was valid. + * If FEAT_TLBIRANGE is implemented, defer the individual + * TLB invalidations until the entire walk is finished, and + * then use the range-based TLBI instructions to do the + * invalidations. Condition deferred TLB invalidation on the + * system supporting FWB as the optimization is entirely + * pointless when the unmap walker needs to perform CMOs. + */ + return system_supports_tlb_range() && stage2_has_fwb(pgt); +} + +static void stage2_unmap_put_pte(const struct kvm_pgtable_visit_ctx *ctx, + struct kvm_s2_mmu *mmu, + struct kvm_pgtable_mm_ops *mm_ops) +{ + struct kvm_pgtable *pgt = ctx->arg; + + /* + * Clear the existing PTE, and perform break-before-make if it was + * valid. Depending on the system support, defer the TLB maintenance + * for the same until the entire unmap walk is completed. */ if (kvm_pte_valid(ctx->old)) { kvm_clear_pte(ctx->ptep); - kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level); + + if (!stage2_unmap_defer_tlb_flush(pgt)) + kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, + ctx->addr, ctx->level); } mm_ops->put_page(ctx->ptep); @@ -1098,7 +1118,7 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx, * block entry and rely on the remaining portions being faulted * back lazily. */ - stage2_put_pte(ctx, mmu, mm_ops); + stage2_unmap_put_pte(ctx, mmu, mm_ops); if (need_flush && mm_ops->dcache_clean_inval_poc) mm_ops->dcache_clean_inval_poc(kvm_pte_follow(ctx->old, mm_ops), @@ -1112,13 +1132,19 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx, int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size) { + int ret; struct kvm_pgtable_walker walker = { .cb = stage2_unmap_walker, .arg = pgt, .flags = KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST, }; - return kvm_pgtable_walk(pgt, addr, size, &walker); + ret = kvm_pgtable_walk(pgt, addr, size, &walker); + if (stage2_unmap_defer_tlb_flush(pgt)) + /* Perform the deferred TLB invalidations */ + kvm_tlb_flush_vmid_range(pgt->mmu, addr, size); + + return ret; } struct stage2_attr_data {