From patchwork Wed Aug 14 12:34:29 2024
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 13763444
From: Will Deacon
To: linux-arm-kernel@lists.infradead.org
Cc: Will Deacon, Marc Zyngier, Oliver Upton, Fuad Tabba, kvmarm@lists.linux.dev
Subject: [PATCH 2/2] KVM: arm64: Ensure TLBI uses correct VMID after changing context
Date: Wed, 14 Aug 2024 13:34:29 +0100
Message-Id: <20240814123429.20457-3-will@kernel.org>
In-Reply-To: <20240814123429.20457-1-will@kernel.org>
References: <20240814123429.20457-1-will@kernel.org>

When the target context passed to enter_vmid_context() matches the
current running context, the function returns early without manipulating
the registers of the stage-2 MMU. This can result in a stale VMID due to
the lack of an ISB instruction in exit_vmid_context() after writing the
VTTBR when ARM64_WORKAROUND_SPECULATIVE_AT is not enabled.
For example, with pKVM enabled:

	// Initially running in host context
	enter_vmid_context(guest);
		-> __load_stage2(guest);
		   isb	// Writes VTCR & VTTBR

	exit_vmid_context(guest);
		-> __load_stage2(host);	// Restores VTCR & VTTBR

	enter_vmid_context(host);
		-> Returns early as we're already in host context

	tlbi vmalls12e1is	// !!! Can use the stale VMID as we
				// haven't performed context
				// synchronisation since restoring
				// VTTBR.VMID

Add an unconditional ISB instruction to exit_vmid_context() after
restoring the VTTBR. This already existed for the
ARM64_WORKAROUND_SPECULATIVE_AT path, so we can simply hoist that onto
the common path.

Cc: Marc Zyngier
Cc: Oliver Upton
Cc: Fuad Tabba
Fixes: 58f3b0fc3b87 ("KVM: arm64: Support TLB invalidation in guest context")
Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/nvhe/tlb.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index ca3c09df8d7c..48da9ca9763f 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -132,10 +132,10 @@ static void exit_vmid_context(struct tlb_inv_context *cxt)
 	else
 		__load_host_stage2();
 
-	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
-		/* Ensure write of the old VMID */
-		isb();
+	/* Ensure write of the old VMID */
+	isb();
 
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		if (!(cxt->sctlr & SCTLR_ELx_M)) {
 			write_sysreg_el1(cxt->sctlr, SYS_SCTLR);
 			isb();