From patchwork Thu Jun 25 13:14:13 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11625835
From: David Brazdil <dbrazdil@google.com>
To: Marc Zyngier, Will Deacon, Catalin Marinas, James Morse, Julien Thierry, Suzuki K Poulose
Cc: android-kvm@google.com, linux-kernel@vger.kernel.org, David Brazdil, kernel-team@android.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 08/15] arm64: kvm: Duplicate hyp/tlb.c for VHE/nVHE
Date: Thu, 25 Jun 2020 14:14:13 +0100
Message-Id: <20200625131420.71444-9-dbrazdil@google.com>
In-Reply-To: <20200625131420.71444-1-dbrazdil@google.com>
References: <20200625131420.71444-1-dbrazdil@google.com>

tlb.c contains code for flushing the TLB, with code shared between VHE/nVHE. Because the common code is small, duplicate tlb.c and specialize each copy for VHE/nVHE.
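For illustration only (this snippet is reproduced from the removed hunk below, not new code): before this patch, the shared hyp/tlb.c selected the VHE or nVHE path at runtime on every TLB operation:

    /* Removed by this patch: runtime dispatch in the shared copy. */
    static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm,
                                                 struct tlb_inv_context *cxt)
    {
            if (has_vhe())
                    __tlb_switch_to_guest_vhe(kvm, cxt);    /* VHE body */
            else
                    __tlb_switch_to_guest_nvhe(kvm, cxt);   /* nVHE body */
    }

After the split, vhe/tlb.c and nvhe/tlb.c each define __tlb_switch_to_guest() directly with only their own mode's body, so the VHE/nVHE decision is made at build time by the Makefile that compiles each copy.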
Signed-off-by: David Brazdil <dbrazdil@google.com>
---
 arch/arm64/kernel/image-vars.h      |  14 +--
 arch/arm64/kvm/hyp/Makefile         |   2 +-
 arch/arm64/kvm/hyp/nvhe/Makefile    |   2 +-
 arch/arm64/kvm/hyp/{ => nvhe}/tlb.c |  94 +---------------
 arch/arm64/kvm/hyp/vhe/Makefile     |   2 +-
 arch/arm64/kvm/hyp/vhe/tlb.c        | 162 ++++++++++++++++++++++++++++
 6 files changed, 178 insertions(+), 98 deletions(-)
 rename arch/arm64/kvm/hyp/{ => nvhe}/tlb.c (62%)
 create mode 100644 arch/arm64/kvm/hyp/vhe/tlb.c

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index c3643df22a9b..5d7cb61bfa9a 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -81,12 +81,6 @@ KVM_NVHE_ALIAS(__kvm_enable_ssbs);
 /* Symbols defined in timer-sr.c (not yet compiled with nVHE build rules). */
 KVM_NVHE_ALIAS(__kvm_timer_set_cntvoff);
 
-/* Symbols defined in tlb.c (not yet compiled with nVHE build rules). */
-KVM_NVHE_ALIAS(__kvm_flush_vm_context);
-KVM_NVHE_ALIAS(__kvm_tlb_flush_local_vmid);
-KVM_NVHE_ALIAS(__kvm_tlb_flush_vmid);
-KVM_NVHE_ALIAS(__kvm_tlb_flush_vmid_ipa);
-
 /* Symbols defined in vgic-v3-sr.c (not yet compiled with nVHE build rules). */
 KVM_NVHE_ALIAS(__vgic_v3_get_ich_vtr_el2);
 KVM_NVHE_ALIAS(__vgic_v3_init_lrs);
@@ -113,6 +107,14 @@ KVM_NVHE_ALIAS(panic);
 /* Vectors installed by hyp-init on reset HVC. */
 KVM_NVHE_ALIAS(__hyp_stub_vectors);
 
+/* Kernel symbol used by icache_is_vpipt(). */
+KVM_NVHE_ALIAS(__icache_flags);
+
+/* Kernel symbols needed for cpus_have_final/const_caps checks. */
+KVM_NVHE_ALIAS(arm64_const_caps_ready);
+KVM_NVHE_ALIAS(cpu_hwcap_keys);
+KVM_NVHE_ALIAS(cpu_hwcaps);
+
 #endif /* CONFIG_KVM */
 
 #endif /* __ARM64_KERNEL_IMAGE_VARS_H */
diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index 8b0cf85080b5..87d3cce2b26e 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -14,7 +14,7 @@ obj-$(CONFIG_KVM) += hyp.o vhe/ nvhe/
 obj-$(CONFIG_KVM_INDIRECT_VECTORS) += smccc_wa.o
 
 hyp-y := vgic-v3-sr.o timer-sr.o aarch32.o vgic-v2-cpuif-proxy.o sysreg-sr.o \
-	 debug-sr.o entry.o switch.o fpsimd.o tlb.o
+	 debug-sr.o entry.o switch.o fpsimd.o
 
 # KVM code is run at a different exception code with a different map, so
 # compiler instrumentation that inserts callbacks or checks into the code may
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index bf2d8dea5400..a5316e97d373 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -6,7 +6,7 @@
 asflags-y := -D__KVM_NVHE_HYPERVISOR__
 ccflags-y := -D__KVM_NVHE_HYPERVISOR__
 
-obj-y := hyp-init.o ../hyp-entry.o
+obj-y := tlb.o hyp-init.o ../hyp-entry.o
 
 obj-y := $(patsubst %.o,%.hyp.o,$(obj-y))
 extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y))
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
similarity index 62%
rename from arch/arm64/kvm/hyp/tlb.c
rename to arch/arm64/kvm/hyp/nvhe/tlb.c
index d063a576d511..9513ad41db9a 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -4,8 +4,6 @@
  * Author: Marc Zyngier <marc.zyngier@arm.com>
  */
 
-#include <linux/irqflags.h>
-
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
 #include <asm/tlbflush.h>
@@ -16,52 +14,8 @@ struct tlb_inv_context {
 	u64		sctlr;
 };
 
-static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm,
-						 struct tlb_inv_context *cxt)
-{
-	u64 val;
-
-	local_irq_save(cxt->flags);
-
-	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
-		/*
-		 * For CPUs that are affected by ARM errata 1165522 or 1530923,
-		 * we cannot trust stage-1 to be in a correct state at that
-		 * point. Since we do not want to force a full load of the
-		 * vcpu state, we prevent the EL1 page-table walker to
-		 * allocate new TLBs. This is done by setting the EPD bits
-		 * in the TCR_EL1 register. We also need to prevent it to
-		 * allocate IPA->PA walks, so we enable the S1 MMU...
-		 */
-		val = cxt->tcr = read_sysreg_el1(SYS_TCR);
-		val |= TCR_EPD1_MASK | TCR_EPD0_MASK;
-		write_sysreg_el1(val, SYS_TCR);
-		val = cxt->sctlr = read_sysreg_el1(SYS_SCTLR);
-		val |= SCTLR_ELx_M;
-		write_sysreg_el1(val, SYS_SCTLR);
-	}
-
-	/*
-	 * With VHE enabled, we have HCR_EL2.{E2H,TGE} = {1,1}, and
-	 * most TLB operations target EL2/EL0. In order to affect the
-	 * guest TLBs (EL1/EL0), we need to change one of these two
-	 * bits. Changing E2H is impossible (goodbye TTBR1_EL2), so
-	 * let's flip TGE before executing the TLB operation.
-	 *
-	 * ARM erratum 1165522 requires some special handling (again),
-	 * as we need to make sure both stages of translation are in
-	 * place before clearing TGE. __load_guest_stage2() already
-	 * has an ISB in order to deal with this.
-	 */
-	__load_guest_stage2(kvm);
-	val = read_sysreg(hcr_el2);
-	val &= ~HCR_TGE;
-	write_sysreg(val, hcr_el2);
-	isb();
-}
-
-static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm *kvm,
-						  struct tlb_inv_context *cxt)
+static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm,
+					     struct tlb_inv_context *cxt)
 {
 	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		u64 val;
@@ -84,37 +38,8 @@ static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm *kvm,
 	asm(ALTERNATIVE("isb", "nop", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
 
-static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm,
-					     struct tlb_inv_context *cxt)
-{
-	if (has_vhe())
-		__tlb_switch_to_guest_vhe(kvm, cxt);
-	else
-		__tlb_switch_to_guest_nvhe(kvm, cxt);
-}
-
-static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm,
-						struct tlb_inv_context *cxt)
-{
-	/*
-	 * We're done with the TLB operation, let's restore the host's
-	 * view of HCR_EL2.
-	 */
-	write_sysreg(0, vttbr_el2);
-	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
-	isb();
-
-	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
-		/* Restore the registers to what they were */
-		write_sysreg_el1(cxt->tcr, SYS_TCR);
-		write_sysreg_el1(cxt->sctlr, SYS_SCTLR);
-	}
-
-	local_irq_restore(cxt->flags);
-}
-
-static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm,
-						 struct tlb_inv_context *cxt)
+static void __hyp_text __tlb_switch_to_host(struct kvm *kvm,
+					    struct tlb_inv_context *cxt)
 {
 	write_sysreg(0, vttbr_el2);
 
@@ -126,15 +51,6 @@ static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm,
 	}
 }
 
-static void __hyp_text __tlb_switch_to_host(struct kvm *kvm,
-					    struct tlb_inv_context *cxt)
-{
-	if (has_vhe())
-		__tlb_switch_to_host_vhe(kvm, cxt);
-	else
-		__tlb_switch_to_host_nvhe(kvm, cxt);
-}
-
 void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
 {
 	struct tlb_inv_context cxt;
@@ -183,7 +99,7 @@ void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
 	 * The moral of this story is: if you have a VPIPT I-cache, then
 	 * you should be running with VHE enabled.
 	 */
-	if (!has_vhe() && icache_is_vpipt())
+	if (icache_is_vpipt())
 		__flush_icache_all();
 
 	__tlb_switch_to_host(kvm, &cxt);
diff --git a/arch/arm64/kvm/hyp/vhe/Makefile b/arch/arm64/kvm/hyp/vhe/Makefile
index 323029e02b4e..704140fc5d66 100644
--- a/arch/arm64/kvm/hyp/vhe/Makefile
+++ b/arch/arm64/kvm/hyp/vhe/Makefile
@@ -6,7 +6,7 @@
 asflags-y := -D__KVM_VHE_HYPERVISOR__
 ccflags-y := -D__KVM_VHE_HYPERVISOR__
 
-obj-y := ../hyp-entry.o
+obj-y := tlb.o ../hyp-entry.o
 
 # KVM code is run at a different exception code with a different map, so
 # compiler instrumentation that inserts callbacks or checks into the code may
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
new file mode 100644
index 000000000000..35e8e112ba28
--- /dev/null
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -0,0 +1,162 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2015 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ */
+
+#include <linux/irqflags.h>
+
+#include <asm/kvm_hyp.h>
+#include <asm/kvm_mmu.h>
+#include <asm/tlbflush.h>
+
+struct tlb_inv_context {
+	unsigned long	flags;
+	u64		tcr;
+	u64		sctlr;
+};
+
+static void __tlb_switch_to_guest(struct kvm *kvm, struct tlb_inv_context *cxt)
+{
+	u64 val;
+
+	local_irq_save(cxt->flags);
+
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
+		/*
+		 * For CPUs that are affected by ARM errata 1165522 or 1530923,
+		 * we cannot trust stage-1 to be in a correct state at that
+		 * point. Since we do not want to force a full load of the
+		 * vcpu state, we prevent the EL1 page-table walker to
+		 * allocate new TLBs. This is done by setting the EPD bits
+		 * in the TCR_EL1 register. We also need to prevent it to
+		 * allocate IPA->PA walks, so we enable the S1 MMU...
+		 */
+		val = cxt->tcr = read_sysreg_el1(SYS_TCR);
+		val |= TCR_EPD1_MASK | TCR_EPD0_MASK;
+		write_sysreg_el1(val, SYS_TCR);
+		val = cxt->sctlr = read_sysreg_el1(SYS_SCTLR);
+		val |= SCTLR_ELx_M;
+		write_sysreg_el1(val, SYS_SCTLR);
+	}
+
+	/*
+	 * With VHE enabled, we have HCR_EL2.{E2H,TGE} = {1,1}, and
+	 * most TLB operations target EL2/EL0. In order to affect the
+	 * guest TLBs (EL1/EL0), we need to change one of these two
+	 * bits. Changing E2H is impossible (goodbye TTBR1_EL2), so
+	 * let's flip TGE before executing the TLB operation.
+	 *
+	 * ARM erratum 1165522 requires some special handling (again),
+	 * as we need to make sure both stages of translation are in
+	 * place before clearing TGE. __load_guest_stage2() already
+	 * has an ISB in order to deal with this.
+	 */
+	__load_guest_stage2(kvm);
+	val = read_sysreg(hcr_el2);
+	val &= ~HCR_TGE;
+	write_sysreg(val, hcr_el2);
+	isb();
+}
+
+static void __tlb_switch_to_host(struct kvm *kvm, struct tlb_inv_context *cxt)
+{
+	/*
+	 * We're done with the TLB operation, let's restore the host's
+	 * view of HCR_EL2.
+	 */
+	write_sysreg(0, vttbr_el2);
+	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
+	isb();
+
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
+		/* Restore the registers to what they were */
+		write_sysreg_el1(cxt->tcr, SYS_TCR);
+		write_sysreg_el1(cxt->sctlr, SYS_SCTLR);
+	}
+
+	local_irq_restore(cxt->flags);
+}
+
+void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
+{
+	struct tlb_inv_context cxt;
+
+	dsb(ishst);
+
+	/* Switch to requested VMID */
+	kvm = kern_hyp_va(kvm);
+	__tlb_switch_to_guest(kvm, &cxt);
+
+	/*
+	 * We could do so much better if we had the VA as well.
+	 * Instead, we invalidate Stage-2 for this IPA, and the
+	 * whole of Stage-1. Weep...
+	 */
+	ipa >>= 12;
+	__tlbi(ipas2e1is, ipa);
+
+	/*
+	 * We have to ensure completion of the invalidation at Stage-2,
+	 * since a table walk on another CPU could refill a TLB with a
+	 * complete (S1 + S2) walk based on the old Stage-2 mapping if
+	 * the Stage-1 invalidation happened first.
+	 */
+	dsb(ish);
+	__tlbi(vmalle1is);
+	dsb(ish);
+	isb();
+
+	__tlb_switch_to_host(kvm, &cxt);
+}
+
+void __kvm_tlb_flush_vmid(struct kvm *kvm)
+{
+	struct tlb_inv_context cxt;
+
+	dsb(ishst);
+
+	/* Switch to requested VMID */
+	__tlb_switch_to_guest(kvm, &cxt);
+
+	__tlbi(vmalls12e1is);
+	dsb(ish);
+	isb();
+
+	__tlb_switch_to_host(kvm, &cxt);
+}
+
+void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct tlb_inv_context cxt;
+
+	/* Switch to requested VMID */
+	__tlb_switch_to_guest(kvm, &cxt);
+
+	__tlbi(vmalle1);
+	dsb(nsh);
+	isb();
+
+	__tlb_switch_to_host(kvm, &cxt);
+}
+
+void __kvm_flush_vm_context(void)
+{
+	dsb(ishst);
+	__tlbi(alle1is);
+
+	/*
+	 * VIPT and PIPT caches are not affected by VMID, so no maintenance
+	 * is necessary across a VMID rollover.
+	 *
+	 * VPIPT caches constrain lookup and maintenance to the active VMID,
+	 * so we need to invalidate lines with a stale VMID to avoid an ABA
+	 * race after multiple rollovers.
+	 *
+	 */
+	if (icache_is_vpipt())
+		asm volatile("ic ialluis");
+
+	dsb(ish);
+}