From patchwork Thu Jan 4 16:27:02 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: James Clark <james.clark@arm.com>
X-Patchwork-Id: 13511322
From: James Clark <james.clark@arm.com>
To: coresight@lists.linaro.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, broonie@kernel.org, maz@kernel.org,
 suzuki.poulose@arm.com, acme@kernel.org
Cc: James Clark, Oliver Upton, James Morse, Zenghui Yu, Catalin Marinas,
 Will Deacon, Mike Leach, Leo Yan, Alexander Shishkin, Anshuman Khandual,
 Rob Herring, Miguel Luis, Jintack Lim, Ard Biesheuvel, Mark Rutland,
 Arnd Bergmann, Vincent Donnefort, Kristina Martsenko, Fuad Tabba,
 Joey Gouly, Akihiko Odaki, Jing Zhang, linux-kernel@vger.kernel.org
Subject: [PATCH v4 2/7] arm64: KVM: Use shared area to pass PMU event state to hypervisor
Date: Thu, 4 Jan 2024 16:27:02 +0000
Message-Id: <20240104162714.1062610-3-james.clark@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240104162714.1062610-1-james.clark@arm.com>
References: <20240104162714.1062610-1-james.clark@arm.com>

Currently the state of the PMU events is copied into the VCPU struct
before every VCPU run. This doesn't scale if more data for other
features needs to be added as well, so make a writable area that's
shared between the host and the hypervisor to store this state.

Normal per-cpu constructs can't be used here: although a framework
exists for the host to write to the hypervisor's per-cpu structs, it
only works until the protection is enabled. In the other direction, no
framework exists for the hypervisor to discover the size and layout of
the host's per-cpu data.

Instead of making a new framework for the hypervisor to access the
host's per-cpu data that would only be used once, just define the new
shared area as an array with NR_CPUS elements. This also reduces the
amount of sharing that needs to be done: unlike the per-cpu data, the
array is contiguous, so a single mapping covers every CPU's entry.
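To illustrate the indexing scheme (a sketch assembled from snippets of
the diff below, not additional code): both sides index the shared array
directly, with no knowledge of the other side's per-cpu layout:

    /* Host side: index by the current physical CPU. */
    struct kvm_pmu_events *pmu =
            &kvm_host_global_state[smp_processor_id()].pmu_events;

    /* Hypervisor side: index by the CPU the VCPU is running on. */
    struct kvm_pmu_events *pmu =
            &kvm_host_global_state[vcpu->cpu].pmu_events;

A this_cpu_ptr() lookup, by contrast, needs the owner's per-cpu base
offsets (kvm_arm_hyp_percpu_base), which stop being accessible across
the host/hyp boundary once the hypervisor protection is initialised.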
Signed-off-by: James Clark <james.clark@arm.com>
---
 arch/arm64/include/asm/kvm_host.h |  8 ++++++++
 arch/arm64/kernel/image-vars.h    |  1 +
 arch/arm64/kvm/arm.c              | 16 ++++++++++++++--
 arch/arm64/kvm/hyp/nvhe/setup.c   | 11 +++++++++++
 arch/arm64/kvm/hyp/nvhe/switch.c  |  9 +++++++--
 arch/arm64/kvm/pmu.c              |  4 +---
 include/kvm/arm_pmu.h             | 17 -----------------
 7 files changed, 42 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 824f29f04916..93d38ad257ed 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -466,6 +466,14 @@ struct kvm_cpu_context {
 	struct kvm_vcpu *__hyp_running_vcpu;
 };
 
+struct kvm_host_global_state {
+	struct kvm_pmu_events {
+		u32 events_host;
+		u32 events_guest;
+	} pmu_events;
+} ____cacheline_aligned;
+extern struct kvm_host_global_state kvm_host_global_state[NR_CPUS];
+
 struct kvm_host_data {
 	struct kvm_cpu_context host_ctxt;
 };

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 119ca121b5f8..1a9dbb02bb4a 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -59,6 +59,7 @@ KVM_NVHE_ALIAS(alt_cb_patch_nops);
 
 /* Global kernel state accessed by nVHE hyp code. */
 KVM_NVHE_ALIAS(kvm_vgic_global_state);
+KVM_NVHE_ALIAS(kvm_host_global_state);
 
 /* Kernel symbols used to call panic() from nVHE hyp code (via ERET). */
 KVM_NVHE_ALIAS(nvhe_hyp_panic_handler);

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 4796104c4471..bd6b2eda5f4f 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -47,6 +47,20 @@
 
 static enum kvm_mode kvm_mode = KVM_MODE_DEFAULT;
 
+/*
+ * Host state that isn't associated with any VCPU, but will affect any VCPU
+ * running on a host CPU in the future. This remains writable from the host and
+ * readable in the hyp.
+ *
+ * PER_CPU constructs aren't compatible between the hypervisor and the host so
+ * just define it as a NR_CPUS array. DECLARE_KVM_NVHE_PER_CPU works in both
+ * places, but not after the hypervisor protection is initialised. After that,
+ * kvm_arm_hyp_percpu_base isn't accessible from the host, so even if the
+ * kvm_host_global_state struct was shared with the host, the per-cpu offset
+ * can't be calculated without sharing even more data with the host.
+ */
+struct kvm_host_global_state kvm_host_global_state[NR_CPUS];
+
 DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);
 
 DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
@@ -1016,8 +1030,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 
 		kvm_vgic_flush_hwstate(vcpu);
 
-		kvm_pmu_update_vcpu_events(vcpu);
-
 		/*
 		 * Ensure we set mode to IN_GUEST_MODE after we disable
 		 * interrupts and before the final VCPU requests check.

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index b5452e58c49a..3e45cc10ba96 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -159,6 +159,17 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
 	if (ret)
 		return ret;
 
+	/*
+	 * Similar to kvm_vgic_global_state, but this one remains writable by
+	 * the host rather than read-only. Used to store per-cpu state about
+	 * the host that isn't associated with any particular VCPU.
+	 */
+	prot = pkvm_mkstate(KVM_PGTABLE_PROT_RW, PKVM_PAGE_SHARED_OWNED);
+	ret = pkvm_create_mappings(&kvm_host_global_state,
+				   &kvm_host_global_state + 1, prot);
+	if (ret)
+		return ret;
+
 	ret = create_hyp_debug_uart_mapping();
 	if (ret)
 		return ret;

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index c50f8459e4fc..89147a9dc38c 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -130,13 +130,18 @@ static void __hyp_vgic_restore_state(struct kvm_vcpu *vcpu)
 	}
 }
 
+static struct kvm_pmu_events *kvm_nvhe_get_pmu_events(struct kvm_vcpu *vcpu)
+{
+	return &kvm_host_global_state[vcpu->cpu].pmu_events;
+}
+
 /*
  * Disable host events, enable guest events
  */
 #ifdef CONFIG_HW_PERF_EVENTS
 static bool __pmu_switch_to_guest(struct kvm_vcpu *vcpu)
 {
-	struct kvm_pmu_events *pmu = &vcpu->arch.pmu.events;
+	struct kvm_pmu_events *pmu = kvm_nvhe_get_pmu_events(vcpu);
 
 	if (pmu->events_host)
 		write_sysreg(pmu->events_host, pmcntenclr_el0);
@@ -152,7 +157,7 @@ static bool __pmu_switch_to_guest(struct kvm_vcpu *vcpu)
  */
 static void __pmu_switch_to_host(struct kvm_vcpu *vcpu)
 {
-	struct kvm_pmu_events *pmu = &vcpu->arch.pmu.events;
+	struct kvm_pmu_events *pmu = kvm_nvhe_get_pmu_events(vcpu);
 
 	if (pmu->events_guest)
 		write_sysreg(pmu->events_guest, pmcntenclr_el0);

diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index a243934c5568..136d5c6c1916 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -6,8 +6,6 @@
 #include <linux/kvm_host.h>
 #include <linux/perf_event.h>
 
-static DEFINE_PER_CPU(struct kvm_pmu_events, kvm_pmu_events);
-
 /*
  * Given the perf event attributes and system type, determine
  * if we are going to need to switch counters at guest entry/exit.
  */
@@ -28,7 +26,7 @@ static bool kvm_pmu_switch_needed(struct perf_event_attr *attr)
 
 struct kvm_pmu_events *kvm_get_pmu_events(void)
 {
-	return this_cpu_ptr(&kvm_pmu_events);
+	return &kvm_host_global_state[smp_processor_id()].pmu_events;
 }
 
 /*

diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 4b9d8fb393a8..71a835970ab5 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -18,14 +18,8 @@ struct kvm_pmc {
 	struct perf_event *perf_event;
 };
 
-struct kvm_pmu_events {
-	u32 events_host;
-	u32 events_guest;
-};
-
 struct kvm_pmu {
 	struct irq_work overflow_work;
-	struct kvm_pmu_events events;
 	struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS];
 	int irq_num;
 	bool created;
@@ -79,17 +73,6 @@ void kvm_vcpu_pmu_resync_el0(void);
 #define kvm_vcpu_has_pmu(vcpu)	\
 	(vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
 
-/*
- * Updates the vcpu's view of the pmu events for this cpu.
- * Must be called before every vcpu run after disabling interrupts, to ensure
- * that an interrupt cannot fire and update the structure.
- */
-#define kvm_pmu_update_vcpu_events(vcpu)	\
-	do {	\
-		if (!has_vhe() && kvm_vcpu_has_pmu(vcpu))	\
-			vcpu->arch.pmu.events = *kvm_get_pmu_events();	\
-	} while (0)
-
 /*
  * Evaluates as true when emulating PMUv3p5, and false otherwise.
  */
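
For context, the existing host-side writers pick up the new backing
store without modification, because they already go through
kvm_get_pmu_events(). Approximately as in arch/arm64/kvm/pmu.c at the
time of this series (unchanged by this patch, shown here only for
illustration; see the tree for the exact definition):

    void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr)
    {
            struct kvm_pmu_events *pmu = kvm_get_pmu_events();

            /* Nothing to do unless a counter switch is needed at entry/exit. */
            if (!kvm_arm_support_pmu_v3() || !kvm_pmu_switch_needed(attr))
                    return;

            if (!attr->exclude_host)
                    pmu->events_host |= set;
            if (!attr->exclude_guest)
                    pmu->events_guest |= set;
    }

Because the array element is shared and writable by the host, these
updates are visible to the hypervisor at the next VCPU run without the
per-run copy that kvm_pmu_update_vcpu_events() used to perform.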