From patchwork Fri Aug 6 11:31:07 2021
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12423297
From: Will Deacon
To: linux-arm-kernel@lists.infradead.org
Cc: kernel-team@android.com, Will Deacon, Catalin Marinas, Marc Zyngier,
 Jade Alglave, Shameer Kolothum, kvmarm@lists.cs.columbia.edu,
 linux-arch@vger.kernel.org
Subject: [PATCH 3/4] KVM: arm64: Convert the host S2 over to __load_guest_stage2()
Date: Fri, 6 Aug 2021 12:31:07 +0100
Message-Id: <20210806113109.2475-5-will@kernel.org>
In-Reply-To: <20210806113109.2475-1-will@kernel.org>
References: <20210806113109.2475-1-will@kernel.org>
From: Marc Zyngier

The protected mode relies on a separate helper to load the S2 context.
Move over to the __load_guest_stage2() helper instead.

Cc: Catalin Marinas
Cc: Jade Alglave
Cc: Shameer Kolothum
Signed-off-by: Marc Zyngier
Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/kvm_mmu.h              | 11 +++--------
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         |  2 +-
 3 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 05e089653a1a..934ef0deff9f 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -267,9 +267,10 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
  * Must be called from hyp code running at EL2 with an updated VTTBR
  * and interrupts disabled.
  */
-static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned long vtcr)
+static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu,
+						struct kvm_arch *arch)
 {
-	write_sysreg(vtcr, vtcr_el2);
+	write_sysreg(arch->vtcr, vtcr_el2);
 	write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);

 	/*
@@ -280,12 +281,6 @@ static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned long
 	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 }

-static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu,
-						struct kvm_arch *arch)
-{
-	__load_stage2(mmu, arch->vtcr);
-}
-
 static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
 {
 	return container_of(mmu->arch, struct kvm, arch);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 9c227d87c36d..a910648bc71b 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -29,7 +29,7 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
 static __always_inline void __load_host_stage2(void)
 {
 	if (static_branch_likely(&kvm_protected_mode_initialized))
-		__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
+		__load_guest_stage2(&host_kvm.arch.mmu, &host_kvm.arch);
 	else
 		write_sysreg(0, vttbr_el2);
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index d938ce95d3bd..d4e74ca7f876 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -126,7 +126,7 @@ int __pkvm_prot_finalize(void)
 	kvm_flush_dcache_to_poc(params, sizeof(*params));

 	write_sysreg(params->hcr_el2, hcr_el2);
-	__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
+	__load_guest_stage2(&host_kvm.arch.mmu, &host_kvm.arch);

 	/*
	 * Make sure to have an ISB before the TLB maintenance below but only