From patchwork Thu May 19 13:41:29 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12855157
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon, Ard Biesheuvel, Sean Christopherson, Alexandru Elisei,
	Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng,
	Quentin Perret, Suzuki K Poulose, Michael Roth, Mark Rutland,
	Fuad Tabba, Oliver Upton, Marc Zyngier, kernel-team@android.com,
	kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 54/89] KVM: arm64: Reduce host/shadow vcpu state copying
Date: Thu, 19 May 2022 14:41:29 +0100
Message-Id: <20220519134204.5379-55-will@kernel.org>
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
References: <20220519134204.5379-1-will@kernel.org>

From: Marc Zyngier

When running with pKVM enabled, protected guests run with a fixed CPU
configuration, so features such as hardware debug and SVE are
unavailable and their state does not need to be copied from the host
structures on each flush operation.

Although non-protected guests do require the host and shadow structures
to be kept in sync with each other, we can defer writing back to the
host to an explicit sync hypercall, rather than doing it after every
vCPU run.

Signed-off-by: Marc Zyngier
---
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 228736a9ab40..e82c0faf6c81 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -196,17 +196,18 @@ static void flush_shadow_state(struct kvm_shadow_vcpu_state *shadow_state)
 		if (host_flags & KVM_ARM64_PKVM_STATE_DIRTY)
 			__flush_vcpu_state(shadow_state);
-	}
 
-	shadow_vcpu->arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state);
-	shadow_vcpu->arch.sve_max_vl = host_vcpu->arch.sve_max_vl;
+		shadow_vcpu->arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state);
+		shadow_vcpu->arch.sve_max_vl = host_vcpu->arch.sve_max_vl;
 
-	shadow_vcpu->arch.hcr_el2 = host_vcpu->arch.hcr_el2;
-	shadow_vcpu->arch.mdcr_el2 = host_vcpu->arch.mdcr_el2;
+		shadow_vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS & ~(HCR_RW | HCR_TWI | HCR_TWE);
+		shadow_vcpu->arch.hcr_el2 |= READ_ONCE(host_vcpu->arch.hcr_el2);
 
-	shadow_vcpu->arch.debug_ptr = kern_hyp_va(host_vcpu->arch.debug_ptr);
+		shadow_vcpu->arch.mdcr_el2 = host_vcpu->arch.mdcr_el2;
+		shadow_vcpu->arch.debug_ptr = kern_hyp_va(host_vcpu->arch.debug_ptr);
+	}
 
-	shadow_vcpu->arch.vsesr_el2 = host_vcpu->arch.vsesr_el2;
+	shadow_vcpu->arch.vsesr_el2 = host_vcpu->arch.vsesr_el2;
 
 	flush_vgic_state(host_vcpu, shadow_vcpu);
 	flush_timer_state(shadow_state);
@@ -238,10 +239,10 @@ static void sync_shadow_state(struct kvm_shadow_vcpu_state *shadow_state,
 	unsigned long host_flags;
 	u8 esr_ec;
 
-	host_vcpu->arch.ctxt = shadow_vcpu->arch.ctxt;
-
-	host_vcpu->arch.hcr_el2 = shadow_vcpu->arch.hcr_el2;
-
+	/*
+	 * Don't sync the vcpu GPR/sysreg state after a run. Instead,
+	 * leave it in the shadow until someone actually requires it.
+	 */
 	sync_vgic_state(host_vcpu, shadow_vcpu);
 	sync_timer_state(shadow_state);