From patchwork Thu May 19 13:41:18 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855140
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
	Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng,
	Quentin Perret, Suzuki K Poulose, Michael Roth, Mark Rutland,
	Fuad Tabba, Oliver Upton, Marc Zyngier, kernel-team@android.com,
	kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 43/89] KVM: arm64: Add the {flush,sync}_vgic_state() primitives
Date: Thu, 19 May 2022 14:41:18 +0100
Message-Id: <20220519134204.5379-44-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
References: <20220519134204.5379-1-will@kernel.org>

From: Marc Zyngier

Rather than blindly copying the vGIC
state to/from the host at EL2, introduce a couple of helpers to copy
only what is needed and to sanitise untrusted data passed by the host
kernel.

Signed-off-by: Marc Zyngier
---
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 50 +++++++++++++++++++++++++-----
 1 file changed, 43 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 5b46742d9f9b..58515e5d24ec 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -18,10 +18,51 @@
 #include
 #include

+#include
+
 DEFINE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);

 void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt);

+static void flush_vgic_state(struct kvm_vcpu *host_vcpu,
+			     struct kvm_vcpu *shadow_vcpu)
+{
+	struct vgic_v3_cpu_if *host_cpu_if, *shadow_cpu_if;
+	unsigned int used_lrs, max_lrs, i;
+
+	host_cpu_if = &host_vcpu->arch.vgic_cpu.vgic_v3;
+	shadow_cpu_if = &shadow_vcpu->arch.vgic_cpu.vgic_v3;
+
+	max_lrs = (read_gicreg(ICH_VTR_EL2) & 0xf) + 1;
+	used_lrs = READ_ONCE(host_cpu_if->used_lrs);
+	used_lrs = min(used_lrs, max_lrs);
+
+	shadow_cpu_if->vgic_hcr = READ_ONCE(host_cpu_if->vgic_hcr);
+	/* Should be a one-off */
+	shadow_cpu_if->vgic_sre = (ICC_SRE_EL1_DIB |
+				   ICC_SRE_EL1_DFB |
+				   ICC_SRE_EL1_SRE);
+	shadow_cpu_if->used_lrs = used_lrs;
+
+	for (i = 0; i < used_lrs; i++)
+		shadow_cpu_if->vgic_lr[i] = READ_ONCE(host_cpu_if->vgic_lr[i]);
+}
+
+static void sync_vgic_state(struct kvm_vcpu *host_vcpu,
+			    struct kvm_vcpu *shadow_vcpu)
+{
+	struct vgic_v3_cpu_if *host_cpu_if, *shadow_cpu_if;
+	unsigned int i;
+
+	host_cpu_if = &host_vcpu->arch.vgic_cpu.vgic_v3;
+	shadow_cpu_if = &shadow_vcpu->arch.vgic_cpu.vgic_v3;
+
+	WRITE_ONCE(host_cpu_if->vgic_hcr, shadow_cpu_if->vgic_hcr);
+
+	for (i = 0; i < shadow_cpu_if->used_lrs; i++)
+		WRITE_ONCE(host_cpu_if->vgic_lr[i], shadow_cpu_if->vgic_lr[i]);
+}
+
 static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state)
 {
 	struct kvm_vcpu *shadow_vcpu = &shadow_state->shadow_vcpu;
@@ -43,16 +84,13 @@ static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state)

 	shadow_vcpu->arch.vsesr_el2 = host_vcpu->arch.vsesr_el2;

-	shadow_vcpu->arch.vgic_cpu.vgic_v3 = host_vcpu->arch.vgic_cpu.vgic_v3;
+	flush_vgic_state(host_vcpu, shadow_vcpu);
 }

 static void sync_shadow_state(struct kvm_shadow_vcpu_state *shadow_state)
 {
 	struct kvm_vcpu *shadow_vcpu = &shadow_state->shadow_vcpu;
 	struct kvm_vcpu *host_vcpu = shadow_state->host_vcpu;
-	struct vgic_v3_cpu_if *shadow_cpu_if = &shadow_vcpu->arch.vgic_cpu.vgic_v3;
-	struct vgic_v3_cpu_if *host_cpu_if = &host_vcpu->arch.vgic_cpu.vgic_v3;
-	unsigned int i;

 	host_vcpu->arch.ctxt = shadow_vcpu->arch.ctxt;

@@ -63,9 +101,7 @@ static void sync_shadow_state(struct kvm_shadow_vcpu_state *shadow_state)

 	host_vcpu->arch.flags = shadow_vcpu->arch.flags;

-	host_cpu_if->vgic_hcr = shadow_cpu_if->vgic_hcr;
-	for (i = 0; i < shadow_cpu_if->used_lrs; ++i)
-		host_cpu_if->vgic_lr[i] = shadow_cpu_if->vgic_lr[i];
+	sync_vgic_state(host_vcpu, shadow_vcpu);
 }

 static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)