From patchwork Wed Aug 25 16:18:02 2021
X-Patchwork-Submitter: Alexandru Elisei
X-Patchwork-Id: 12458321
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	will@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v4 26/39] KVM: arm64: VHE: Change MDCR_EL2 at world
	switch if VCPU has SPE
Date: Wed, 25 Aug 2021 17:18:02 +0100
Message-Id: <20210825161815.266051-27-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>

When a VCPU has the SPE feature,
MDCR_EL2 sets the buffer owning regime to EL1&0. Write the guest's MDCR_EL2
value as late as possible and restore the host's value as soon as possible at
each world switch to make the profiling blackout window as small as possible
for the host.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/include/asm/kvm_hyp.h   |  2 +-
 arch/arm64/kvm/debug.c             | 14 +++++++++++--
 arch/arm64/kvm/hyp/vhe/switch.c    | 33 +++++++++++++++++++++++-------
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c |  2 +-
 4 files changed, 40 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 9d60b3006efc..657d0c94cf82 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -95,7 +95,7 @@ void __sve_restore_state(void *sve_pffr, u32 *fpsr);
 
 #ifndef __KVM_NVHE_HYPERVISOR__
 void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
-void deactivate_traps_vhe_put(void);
+void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
 #endif
 
 u64 __guest_enter(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 64e8211366b6..70712cd85f32 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -249,9 +249,19 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 		vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
 
 	/* Write mdcr_el2 changes since vcpu_load on VHE systems */
-	if (has_vhe() && orig_mdcr_el2 != vcpu->arch.mdcr_el2)
-		write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
+	if (has_vhe()) {
+		/*
+		 * MDCR_EL2 can modify the SPE buffer owning regime, defer the
+		 * write until the VCPU is run.
+		 */
+		if (kvm_vcpu_has_spe(vcpu))
+			goto out;
+
+		if (orig_mdcr_el2 != vcpu->arch.mdcr_el2)
+			write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
+	}
+
+out:
 	trace_kvm_arm_set_dreg32("MDSCR_EL1", vcpu_read_sys_reg(vcpu, MDSCR_EL1));
 }
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 983ba1570d72..ec4e179d56ae 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -31,12 +31,29 @@ DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
 DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 
+static void __restore_host_mdcr_el2(struct kvm_vcpu *vcpu)
+{
+	u64 mdcr_el2;
+
+	mdcr_el2 = read_sysreg(mdcr_el2);
+	mdcr_el2 &= MDCR_EL2_HPMN_MASK | MDCR_EL2_TPMS;
+	write_sysreg(mdcr_el2, mdcr_el2);
+}
+
+static void __restore_guest_mdcr_el2(struct kvm_vcpu *vcpu)
+{
+	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
+}
+
 static void __activate_traps(struct kvm_vcpu *vcpu)
 {
 	u64 val;
 
 	___activate_traps(vcpu);
 
+	if (kvm_vcpu_has_spe(vcpu))
+		__restore_guest_mdcr_el2(vcpu);
+
 	val = read_sysreg(cpacr_el1);
 	val |= CPACR_EL1_TTA;
 	val &= ~CPACR_EL1_ZEN;
@@ -81,7 +98,11 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 	 */
 	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 
+	if (kvm_vcpu_has_spe(vcpu))
+		__restore_host_mdcr_el2(vcpu);
+
 	write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1);
+
 	write_sysreg(vectors, vbar_el1);
 }
 NOKPROBE_SYMBOL(__deactivate_traps);
@@ -90,16 +111,14 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
 {
 	__activate_traps_common(vcpu);
 
-	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
+	if (!kvm_vcpu_has_spe(vcpu))
+		__restore_guest_mdcr_el2(vcpu);
 }
 
-void deactivate_traps_vhe_put(void)
+void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
 {
-	u64 mdcr_el2 = read_sysreg(mdcr_el2);
-
-	mdcr_el2 &= MDCR_EL2_HPMN_MASK | MDCR_EL2_TPMS;
-
-	write_sysreg(mdcr_el2, mdcr_el2);
+	if (!kvm_vcpu_has_spe(vcpu))
+		__restore_host_mdcr_el2(vcpu);
 
 	__deactivate_traps_common();
 }
diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
index 2a0b8c88d74f..007a12dd4351 100644
--- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
@@ -101,7 +101,7 @@ void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu)
 	struct kvm_cpu_context *host_ctxt;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	deactivate_traps_vhe_put();
+	deactivate_traps_vhe_put(vcpu);
 
 	__sysreg_save_el1_state(guest_ctxt);
 	__sysreg_save_user_state(guest_ctxt);