From patchwork Thu May 19 13:41:47 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855191
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
	Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng,
	Quentin Perret, Suzuki K Poulose, Michael Roth, Mark Rutland,
	Fuad Tabba, Oliver Upton, Marc Zyngier, kernel-team@android.com,
	kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 72/89] KVM: arm64: Track the SVE state in the shadow vcpu
Date: Thu, 19 May 2022 14:41:47 +0100
Message-Id: <20220519134204.5379-73-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
References: <20220519134204.5379-1-will@kernel.org>
From: Marc Zyngier

When dealing with a guest with SVE enabled, make sure the host SVE
state is pinned at EL2 S1, and that the shadow state is correctly
initialised (and then unpinned on teardown).

Signed-off-by: Marc Zyngier
---
 arch/arm64/kvm/hyp/nvhe/hyp-main.c |  9 +++++----
 arch/arm64/kvm/hyp/nvhe/pkvm.c     | 33 ++++++++++++++++++++++++++++++
 2 files changed, 38 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 5d6cee7436f4..1e39dc7eab4d 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -416,8 +416,7 @@ static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state)
 	if (host_flags & KVM_ARM64_PKVM_STATE_DIRTY)
 		__flush_vcpu_state(shadow_state);
 
-	shadow_vcpu->arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state);
-	shadow_vcpu->arch.sve_max_vl = host_vcpu->arch.sve_max_vl;
+	shadow_vcpu->arch.flags = host_flags;
 
 	shadow_vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS & ~(HCR_RW | HCR_TWI | HCR_TWE);
 	shadow_vcpu->arch.hcr_el2 |= READ_ONCE(host_vcpu->arch.hcr_el2);
@@ -488,8 +487,10 @@ static void sync_shadow_state(struct kvm_shadow_vcpu_state *shadow_state,
 		BUG();
 	}
 
-	host_flags = READ_ONCE(host_vcpu->arch.flags) &
-		~(KVM_ARM64_PENDING_EXCEPTION | KVM_ARM64_INCREMENT_PC);
+	host_flags = shadow_vcpu->arch.flags;
+	if (shadow_state_is_protected(shadow_state))
+		host_flags &= ~(KVM_ARM64_PENDING_EXCEPTION | KVM_ARM64_INCREMENT_PC);
+
 	WRITE_ONCE(host_vcpu->arch.flags, host_flags);
 	shadow_state->exit_code = exit_reason;
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 51da5c1d7e0d..9feeb0b5433a 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -372,7 +372,19 @@ static void unpin_host_vcpus(struct kvm_shadow_vcpu_state *shadow_vcpu_states,
 	for (i = 0; i < nr_vcpus; i++) {
 		struct kvm_vcpu *host_vcpu = shadow_vcpu_states[i].host_vcpu;
+		struct kvm_vcpu *shadow_vcpu = &shadow_vcpu_states[i].shadow_vcpu;
+		size_t sve_state_size;
+		void *sve_state;
+
 		hyp_unpin_shared_mem(host_vcpu, host_vcpu + 1);
+
+		if (!test_bit(KVM_ARM_VCPU_SVE, shadow_vcpu->arch.features))
+			continue;
+
+		sve_state = shadow_vcpu->arch.sve_state;
+		sve_state = kern_hyp_va(sve_state);
+		sve_state_size = vcpu_sve_state_size(shadow_vcpu);
+		hyp_unpin_shared_mem(sve_state, sve_state + sve_state_size);
 	}
 }
@@ -448,6 +460,27 @@ static int init_shadow_structs(struct kvm *kvm, struct kvm_shadow_vm *vm,
 		if (ret)
 			return ret;
 
+		if (test_bit(KVM_ARM_VCPU_SVE, shadow_vcpu->arch.features)) {
+			size_t sve_state_size;
+			void *sve_state;
+
+			shadow_vcpu->arch.sve_state = READ_ONCE(host_vcpu->arch.sve_state);
+			shadow_vcpu->arch.sve_max_vl = READ_ONCE(host_vcpu->arch.sve_max_vl);
+
+			sve_state = kern_hyp_va(shadow_vcpu->arch.sve_state);
+			sve_state_size = vcpu_sve_state_size(shadow_vcpu);
+
+			if (!shadow_vcpu->arch.sve_state || !sve_state_size ||
+			    hyp_pin_shared_mem(sve_state,
+					       sve_state + sve_state_size)) {
+				clear_bit(KVM_ARM_VCPU_SVE,
+					  shadow_vcpu->arch.features);
+				shadow_vcpu->arch.sve_state = NULL;
+				shadow_vcpu->arch.sve_max_vl = 0;
+				return -EINVAL;
+			}
+		}
+
 		pkvm_vcpu_init_traps(shadow_vcpu, host_vcpu);
 		kvm_reset_pvm_sys_regs(shadow_vcpu);
 	}
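
For context, the lifecycle this patch implements can be summarised as:
validate the host-supplied SVE buffer, pin it before the hypervisor
relies on it, degrade cleanly (clear the SVE feature and return
-EINVAL) if pinning is impossible, and unpin exactly what was pinned on
teardown. Below is a minimal standalone C sketch of that pattern. It is
not kernel code: all names (toy_vcpu, pin_shared_mem, and so on) are
stand-ins invented for illustration, and a simple refcount is assumed
to be a fair model of what hyp_pin_shared_mem()/hyp_unpin_shared_mem()
do with page references.

/*
 * Standalone sketch (not pKVM code) of the pin/unpin lifecycle this
 * patch gives the shadow vcpu's SVE buffer. All identifiers here are
 * hypothetical stand-ins for the kernel APIs used in the diff above.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct toy_vcpu {
	bool has_sve;		/* models test_bit(KVM_ARM_VCPU_SVE, ...) */
	void *sve_state;	/* models shadow_vcpu->arch.sve_state */
	size_t sve_state_size;	/* models vcpu_sve_state_size() */
	int pin_count;		/* toy model of the hyp page refcount */
};

/* Models hyp_pin_shared_mem(): take a reference so the host cannot
 * reclaim the range while the "hypervisor" is using it. */
static int pin_shared_mem(struct toy_vcpu *vcpu)
{
	vcpu->pin_count++;
	return 0;
}

/* Models hyp_unpin_shared_mem(): drop the reference taken above. */
static void unpin_shared_mem(struct toy_vcpu *vcpu)
{
	vcpu->pin_count--;
}

/* Mirrors the init_shadow_structs() hunk: on any failure, clear the
 * SVE feature, reset the SVE fields and return -EINVAL. */
static int init_shadow_sve(struct toy_vcpu *vcpu, void *host_sve_state,
			   size_t size)
{
	if (!vcpu->has_sve)
		return 0;

	vcpu->sve_state = host_sve_state;
	vcpu->sve_state_size = size;

	if (!vcpu->sve_state || !vcpu->sve_state_size ||
	    pin_shared_mem(vcpu)) {
		vcpu->has_sve = false;
		vcpu->sve_state = NULL;
		vcpu->sve_state_size = 0;
		return -EINVAL;
	}
	return 0;
}

/* Mirrors the unpin_host_vcpus() hunk: only unpin if the SVE feature
 * survived init, i.e. only unpin what was actually pinned. */
static void teardown_shadow_sve(struct toy_vcpu *vcpu)
{
	if (!vcpu->has_sve)
		return;
	unpin_shared_mem(vcpu);
}

int main(void)
{
	struct toy_vcpu vcpu = { .has_sve = true };
	void *buf = calloc(1, 4096);

	if (init_shadow_sve(&vcpu, buf, 4096) == 0)
		printf("pinned, pin_count=%d\n", vcpu.pin_count);

	teardown_shadow_sve(&vcpu);
	printf("torn down, pin_count=%d\n", vcpu.pin_count);

	free(buf);
	return 0;
}

The design point the sketch mirrors from the patch is that clearing the
feature bit before returning the error keeps teardown symmetric:
unpin_host_vcpus() only unpins when the SVE bit is still set, so a
buffer whose pin failed is never spuriously unpinned.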