From patchwork Tue Aug 8 06:30:59 2023
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini, Peter Zijlstra, Arnaldo Carvalho de Melo, Kan Liang, Like Xu, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter
Cc: kvm@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan, Yang Weijiang, Dapeng Mi
Subject: [PATCH RFV v2 01/13] KVM: x86/pmu: Add Intel CPUID-hinted TopDown slots event
Date: Tue, 8 Aug 2023 14:30:59 +0800
Message-Id: <20230808063111.1870070-2-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com>
References: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com>

Add support for the architectural topdown slots event, which is hinted by
CPUID.0AH.EBX. The topdown slots event counts the total number of available
slots for an unhalted logical processor. Software can use this event as the
denominator for the top-level metrics of the Topdown Microarchitecture
Analysis method.
Although the PERF_METRICS MSR required for topdown events is not currently
available to the guest, relying only on the data provided by the slots event
is already sufficient for PMU users to perceive differences in CPU pipeline
machine-width across microarchitectures.

The standalone slots event, like the instructions-retired event, can be
counted with a GP counter or with fixed counter 3 (if present). Its
availability is likewise controlled by CPUID.0AH.EBX. On Linux, perf users
may encode "-e cpu/event=0xa4,umask=0x01/" or "-e cpu/slots/" to count slots
events.

This patch only enables the slots event on GP counters; enabling it on fixed
counter 3 is supported in subsequent patches.

Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Dapeng Mi
---
 arch/x86/kvm/vmx/pmu_intel.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index f2efa0bf7ae8..7322f0c18565 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -34,6 +34,7 @@ enum intel_pmu_architectural_events {
 	INTEL_ARCH_LLC_MISSES,
 	INTEL_ARCH_BRANCHES_RETIRED,
 	INTEL_ARCH_BRANCHES_MISPREDICTED,
+	INTEL_ARCH_TOPDOWN_SLOTS,

 	NR_REAL_INTEL_ARCH_EVENTS,

@@ -58,6 +59,7 @@ static struct {
 	[INTEL_ARCH_LLC_MISSES]			= { 0x2e, 0x41 },
 	[INTEL_ARCH_BRANCHES_RETIRED]		= { 0xc4, 0x00 },
 	[INTEL_ARCH_BRANCHES_MISPREDICTED]	= { 0xc5, 0x00 },
+	[INTEL_ARCH_TOPDOWN_SLOTS]		= { 0xa4, 0x01 },
 	[PSEUDO_ARCH_REFERENCE_CYCLES]		= { 0x00, 0x03 },
 };
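[For illustration only, not part of the patch: since slots is an architectural
event, counting it from user space needs nothing beyond the raw event/umask
pair shown above. A minimal sketch, assuming a Linux host whose PMU enumerates
the slots event in CPUID.0AH.EBX:]

	#include <linux/perf_event.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	int main(void)
	{
		struct perf_event_attr attr;
		uint64_t slots;
		int fd;

		memset(&attr, 0, sizeof(attr));
		attr.type = PERF_TYPE_RAW;
		attr.size = sizeof(attr);
		/* raw config = umask << 8 | event: slots is event=0xa4, umask=0x01 */
		attr.config = 0x01a4;
		attr.disabled = 1;

		/* count slots for the current task, on any CPU */
		fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
		if (fd < 0)
			return 1;

		ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
		/* ... workload under measurement ... */
		ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

		if (read(fd, &slots, sizeof(slots)) == sizeof(slots))
			printf("slots: %llu\n", (unsigned long long)slots);
		close(fd);
		return 0;
	}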
From patchwork Tue Aug 8 06:31:00 2023
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini, Peter Zijlstra, Arnaldo Carvalho de Melo, Kan Liang, Like Xu, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter
Cc: kvm@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan, Yang Weijiang, Dapeng Mi
Subject: [PATCH RFV v2 02/13] KVM: x86/pmu: Support PMU fixed counter 3
Date: Tue, 8 Aug 2023 14:31:00 +0800
Message-Id: <20230808063111.1870070-3-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com>
References: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com>

The topdown slots event can be enabled on a GP counter or on fixed counter 3,
and it does not differ from the other fixed counters in its use of counting
and sampling modes (except for the hardware logic for event accumulation).

According to commit 6017608936c1 ("perf/x86/intel: Add Icelake support"),
KVM, like any in-kernel perf user, needs to reprogram fixed counter 3 via
the kernel-defined topdown slots event in order to use the real fixed
counter 3 on the host.

Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Dapeng Mi
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/vmx/pmu_intel.c    | 10 ++++++++++
 arch/x86/kvm/x86.c              |  4 ++--
 3 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 60d430b4650f..51fb7728c407 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -509,7 +509,7 @@ struct kvm_pmc {
 #define KVM_INTEL_PMC_MAX_GENERIC	8
 #define MSR_ARCH_PERFMON_PERFCTR_MAX	(MSR_ARCH_PERFMON_PERFCTR0 + KVM_INTEL_PMC_MAX_GENERIC - 1)
 #define MSR_ARCH_PERFMON_EVENTSEL_MAX	(MSR_ARCH_PERFMON_EVENTSEL0 + KVM_INTEL_PMC_MAX_GENERIC - 1)
-#define KVM_PMC_MAX_FIXED	3
+#define KVM_PMC_MAX_FIXED	4
 #define MSR_ARCH_PERFMON_FIXED_CTR_MAX	(MSR_ARCH_PERFMON_FIXED_CTR0 + KVM_PMC_MAX_FIXED - 1)
 #define KVM_AMD_PMC_MAX_GENERIC	6
 struct kvm_pmu {

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 7322f0c18565..044d61aa63dc 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -45,6 +45,14 @@ enum intel_pmu_architectural_events {
 	 * core crystal clock or the bus clock (yeah, "architectural").
 	 */
 	PSEUDO_ARCH_REFERENCE_CYCLES = NR_REAL_INTEL_ARCH_EVENTS,
+	/*
+	 * Pseudo-architectural event used to implement IA32_FIXED_CTR3, a.k.a.
+	 * topdown slots. The topdown slots event counts the total number of
+	 * available slots for an unhalted logical processor. The topdown slots
+	 * event together with the PERF_METRICS MSR provides support for the
+	 * topdown micro-architecture analysis method.
+	 */
+	PSEUDO_ARCH_TOPDOWN_SLOTS,

 	NR_INTEL_ARCH_EVENTS,
 };

@@ -61,6 +69,7 @@ static struct {
 	[INTEL_ARCH_BRANCHES_MISPREDICTED]	= { 0xc5, 0x00 },
 	[INTEL_ARCH_TOPDOWN_SLOTS]		= { 0xa4, 0x01 },
 	[PSEUDO_ARCH_REFERENCE_CYCLES]		= { 0x00, 0x03 },
+	[PSEUDO_ARCH_TOPDOWN_SLOTS]		= { 0x00, 0x04 },
 };

 /* mapping between fixed pmc index and intel_arch_events array */
@@ -68,6 +77,7 @@ static int fixed_pmc_events[] = {
 	[0] = INTEL_ARCH_INSTRUCTIONS_RETIRED,
 	[1] = INTEL_ARCH_CPU_CYCLES,
 	[2] = PSEUDO_ARCH_REFERENCE_CYCLES,
+	[3] = PSEUDO_ARCH_TOPDOWN_SLOTS,
 };

 static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 34945c7dba38..73db594f855b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1459,7 +1459,7 @@ static const u32 msrs_to_save_base[] = {
 static const u32 msrs_to_save_pmu[] = {
 	MSR_ARCH_PERFMON_FIXED_CTR0, MSR_ARCH_PERFMON_FIXED_CTR1,
-	MSR_ARCH_PERFMON_FIXED_CTR0 + 2,
+	MSR_ARCH_PERFMON_FIXED_CTR2, MSR_ARCH_PERFMON_FIXED_CTR3,
 	MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_CORE_PERF_GLOBAL_STATUS,
 	MSR_CORE_PERF_GLOBAL_CTRL, MSR_CORE_PERF_GLOBAL_OVF_CTRL,
 	MSR_IA32_PEBS_ENABLE, MSR_IA32_DS_AREA, MSR_PEBS_DATA_CFG,
@@ -7181,7 +7181,7 @@ static void kvm_init_msr_lists(void)
 {
 	unsigned i;

-	BUILD_BUG_ON_MSG(KVM_PMC_MAX_FIXED != 3,
+	BUILD_BUG_ON_MSG(KVM_PMC_MAX_FIXED != 4,
			 "Please update the fixed PMCs in msrs_to_save_pmu[]");

	num_msrs_to_save = 0;
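[For orientation, not from the series: with this patch applied, reprogramming
fixed counter 3 reduces to the two table lookups shown in the diff. The
standalone sketch below replays that mapping; the instructions (0xc0) and
cycles (0x3c) encodings are the well-known architectural ones, not quoted from
this patch, and the names here are illustrative.]

	#include <stdio.h>

	/* mirrors the {eventsel, unit_mask} pairs wired up above */
	struct arch_event { unsigned eventsel, unit_mask; };

	enum { INSTRUCTIONS_RETIRED, CPU_CYCLES, REF_CYCLES, TOPDOWN_SLOTS };

	static const struct arch_event events[] = {
		[INSTRUCTIONS_RETIRED]	= { 0xc0, 0x00 },
		[CPU_CYCLES]		= { 0x3c, 0x00 },
		[REF_CYCLES]		= { 0x00, 0x03 },	/* pseudo-encoded */
		[TOPDOWN_SLOTS]		= { 0x00, 0x04 },	/* pseudo-encoded */
	};

	/* fixed PMC index -> architectural event, as in fixed_pmc_events[] */
	static const int fixed_pmc_events[] = {
		INSTRUCTIONS_RETIRED, CPU_CYCLES, REF_CYCLES, TOPDOWN_SLOTS,
	};

	int main(void)
	{
		for (int i = 0; i < 4; i++)
			printf("fixed ctr %d -> event=0x%02x umask=0x%02x\n", i,
			       events[fixed_pmc_events[i]].eventsel,
			       events[fixed_pmc_events[i]].unit_mask);
		return 0;
	}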
From patchwork Tue Aug 8 06:31:01 2023
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini, Peter Zijlstra, Arnaldo Carvalho de Melo, Kan Liang, Like Xu, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter
Cc: kvm@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan, Yang Weijiang, Dapeng Mi
Subject: [PATCH RFV v2 03/13] perf/core: Add function perf_event_group_leader_check()
Date: Tue, 8 Aug 2023 14:31:01 +0800
Message-Id: <20230808063111.1870070-4-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com>
References: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com>

Extract the group-leader checking code from sys_perf_event_open() into a new
function, perf_event_group_leader_check(). A subsequent change adds
perf_event_create_group_kernel_counters(), which creates group events from
kernel space and needs to perform the same checks on the group leader event
that sys_perf_event_open() does. Extracting the checking code into a separate
function avoids duplicating it.

Signed-off-by: Dapeng Mi
---
 kernel/events/core.c | 143 +++++++++++++++++++++++--------------------
 1 file changed, 78 insertions(+), 65 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 78ae7b6f90fd..616391158d7c 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -12324,6 +12324,81 @@ perf_check_permission(struct perf_event_attr *attr, struct task_struct *task)
 	return is_capable || ptrace_may_access(task, ptrace_mode);
 }

+static int perf_event_group_leader_check(struct perf_event *group_leader,
+					 struct perf_event *event,
+					 struct perf_event_attr *attr,
+					 struct perf_event_context *ctx,
+					 struct pmu **pmu,
+					 int *move_group)
+{
+	if (!group_leader)
+		return 0;
+
+	/*
+	 * Do not allow a recursive hierarchy (this new sibling
+	 * becoming part of another group-sibling):
+	 */
+	if (group_leader->group_leader != group_leader)
+		return -EINVAL;
+
+	/* All events in a group should have the same clock */
+	if (group_leader->clock != event->clock)
+		return -EINVAL;
+
+	/*
+	 * Make sure we're both events for the same CPU;
+	 * grouping events for different CPUs is broken; since
+	 * you can never concurrently schedule them anyhow.
+	 */
+	if (group_leader->cpu != event->cpu)
+		return -EINVAL;
+
+	/*
+	 * Make sure we're both on the same context; either task or cpu.
+	 */
+	if (group_leader->ctx != ctx)
+		return -EINVAL;
+
+	/*
+	 * Only a group leader can be exclusive or pinned
+	 */
+	if (attr->exclusive || attr->pinned)
+		return -EINVAL;
+
+	if (is_software_event(event) &&
+	    !in_software_context(group_leader)) {
+		/*
+		 * If the event is a sw event, but the group_leader
+		 * is on hw context.
+		 *
+		 * Allow the addition of software events to hw
+		 * groups, this is safe because software events
+		 * never fail to schedule.
+		 *
+		 * Note the comment that goes with struct
+		 * perf_event_pmu_context.
+		 */
+		*pmu = group_leader->pmu_ctx->pmu;
+	} else if (!is_software_event(event)) {
+		if (is_software_event(group_leader) &&
+		    (group_leader->group_caps & PERF_EV_CAP_SOFTWARE)) {
+			/*
+			 * In case the group is a pure software group, and we
+			 * try to add a hardware event, move the whole group to
+			 * the hardware context.
+			 */
+			*move_group = 1;
+		}
+
+		/* Don't allow group of multiple hw events from different pmus */
+		if (!in_software_context(group_leader) &&
+		    group_leader->pmu_ctx->pmu != *pmu)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
 /**
  * sys_perf_event_open - open a performance event, associate it to a task/cpu
  *
@@ -12518,71 +12593,9 @@ SYSCALL_DEFINE5(perf_event_open,
 		}
 	}

-	if (group_leader) {
-		err = -EINVAL;
-
-		/*
-		 * Do not allow a recursive hierarchy (this new sibling
-		 * becoming part of another group-sibling):
-		 */
-		if (group_leader->group_leader != group_leader)
-			goto err_locked;
-
-		/* All events in a group should have the same clock */
-		if (group_leader->clock != event->clock)
-			goto err_locked;
-
-		/*
-		 * Make sure we're both events for the same CPU;
-		 * grouping events for different CPUs is broken; since
-		 * you can never concurrently schedule them anyhow.
-		 */
-		if (group_leader->cpu != event->cpu)
-			goto err_locked;
-
-		/*
-		 * Make sure we're both on the same context; either task or cpu.
-		 */
-		if (group_leader->ctx != ctx)
-			goto err_locked;
-
-		/*
-		 * Only a group leader can be exclusive or pinned
-		 */
-		if (attr.exclusive || attr.pinned)
-			goto err_locked;
-
-		if (is_software_event(event) &&
-		    !in_software_context(group_leader)) {
-			/*
-			 * If the event is a sw event, but the group_leader
-			 * is on hw context.
-			 *
-			 * Allow the addition of software events to hw
-			 * groups, this is safe because software events
-			 * never fail to schedule.
-			 *
-			 * Note the comment that goes with struct
-			 * perf_event_pmu_context.
-			 */
-			pmu = group_leader->pmu_ctx->pmu;
-		} else if (!is_software_event(event)) {
-			if (is_software_event(group_leader) &&
-			    (group_leader->group_caps & PERF_EV_CAP_SOFTWARE)) {
-				/*
-				 * In case the group is a pure software group, and we
-				 * try to add a hardware event, move the whole group to
-				 * the hardware context.
-				 */
-				move_group = 1;
-			}
-
-			/* Don't allow group of multiple hw events from different pmus */
-			if (!in_software_context(group_leader) &&
-			    group_leader->pmu_ctx->pmu != pmu)
-				goto err_locked;
-		}
-	}
+	err = perf_event_group_leader_check(group_leader, event, &attr, ctx,
+					    &pmu, &move_group);
+	if (err)
+		goto err_locked;

 	/*
	 * Now that we're certain of the pmu; find the pmu_ctx.
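[For illustration, not part of the patch: the refactored helper lets any
in-kernel caller validate a prospective group before touching contexts. A
minimal fragment, assuming the caller has already resolved event,
group_leader and ctx and holds ctx->mutex, as sys_perf_event_open() does:]

	struct pmu *pmu = event->pmu;
	int move_group = 0;
	int err;

	err = perf_event_group_leader_check(group_leader, event, &event->attr,
					    ctx, &pmu, &move_group);
	if (err)
		return err;	/* mismatched clock, cpu, ctx, or caps */

	/* on success, pmu/move_group may have been adjusted for sw groups */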
From patchwork Tue Aug 8 06:31:02 2023
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini, Peter Zijlstra, Arnaldo Carvalho de Melo, Kan Liang, Like Xu, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter
Cc: kvm@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan, Yang Weijiang, Dapeng Mi
Subject: [PATCH RFV v2 04/13] perf/core: Add function perf_event_move_group()
Date: Tue, 8 Aug 2023 14:31:02 +0800
Message-Id: <20230808063111.1870070-5-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com>
References: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com>

Extract the group-moving code from sys_perf_event_open() into a new function,
perf_event_move_group(). A subsequent change adds
perf_event_create_group_kernel_counters(), which creates group events from
kernel space and needs to perform the same group move for the group leader
event that sys_perf_event_open() does. Extracting the moving code into a
separate function avoids duplicating it.
Signed-off-by: Dapeng Mi
---
 kernel/events/core.c | 82 ++++++++++++++++++++++++--------------------
 1 file changed, 45 insertions(+), 37 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 616391158d7c..15eb82d1a010 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -12399,6 +12399,48 @@ static int perf_event_group_leader_check(struct perf_event *group_leader,
 	return 0;
 }

+static void perf_event_move_group(struct perf_event *group_leader,
+				  struct perf_event_pmu_context *pmu_ctx,
+				  struct perf_event_context *ctx)
+{
+	struct perf_event *sibling;
+
+	perf_remove_from_context(group_leader, 0);
+	put_pmu_ctx(group_leader->pmu_ctx);
+
+	for_each_sibling_event(sibling, group_leader) {
+		perf_remove_from_context(sibling, 0);
+		put_pmu_ctx(sibling->pmu_ctx);
+	}
+
+	/*
+	 * Install the group siblings before the group leader.
+	 *
+	 * Because a group leader will try and install the entire group
+	 * (through the sibling list, which is still in-tact), we can
+	 * end up with siblings installed in the wrong context.
+	 *
+	 * By installing siblings first we NO-OP because they're not
+	 * reachable through the group lists.
+	 */
+	for_each_sibling_event(sibling, group_leader) {
+		sibling->pmu_ctx = pmu_ctx;
+		get_pmu_ctx(pmu_ctx);
+		perf_event__state_init(sibling);
+		perf_install_in_context(ctx, sibling, sibling->cpu);
+	}
+
+	/*
+	 * Removing from the context ends up with disabled
+	 * event. What we want here is event in the initial
+	 * startup state, ready to be add into new context.
+	 */
+	group_leader->pmu_ctx = pmu_ctx;
+	get_pmu_ctx(pmu_ctx);
+	perf_event__state_init(group_leader);
+	perf_install_in_context(ctx, group_leader, group_leader->cpu);
+}
+
 /**
  * sys_perf_event_open - open a performance event, associate it to a task/cpu
  *
@@ -12414,7 +12456,7 @@ SYSCALL_DEFINE5(perf_event_open,
 {
 	struct perf_event *group_leader = NULL, *output_event = NULL;
 	struct perf_event_pmu_context *pmu_ctx;
-	struct perf_event *event, *sibling;
+	struct perf_event *event;
 	struct perf_event_attr attr;
 	struct perf_event_context *ctx;
 	struct file *event_file = NULL;
@@ -12646,42 +12688,8 @@ SYSCALL_DEFINE5(perf_event_open,
	 * where we start modifying current state.
	 */

-	if (move_group) {
-		perf_remove_from_context(group_leader, 0);
-		put_pmu_ctx(group_leader->pmu_ctx);
-
-		for_each_sibling_event(sibling, group_leader) {
-			perf_remove_from_context(sibling, 0);
-			put_pmu_ctx(sibling->pmu_ctx);
-		}
-
-		/*
-		 * Install the group siblings before the group leader.
-		 *
-		 * Because a group leader will try and install the entire group
-		 * (through the sibling list, which is still in-tact), we can
-		 * end up with siblings installed in the wrong context.
-		 *
-		 * By installing siblings first we NO-OP because they're not
-		 * reachable through the group lists.
-		 */
-		for_each_sibling_event(sibling, group_leader) {
-			sibling->pmu_ctx = pmu_ctx;
-			get_pmu_ctx(pmu_ctx);
-			perf_event__state_init(sibling);
-			perf_install_in_context(ctx, sibling, sibling->cpu);
-		}
-
-		/*
-		 * Removing from the context ends up with disabled
-		 * event. What we want here is event in the initial
-		 * startup state, ready to be add into new context.
-		 */
-		group_leader->pmu_ctx = pmu_ctx;
-		get_pmu_ctx(pmu_ctx);
-		perf_event__state_init(group_leader);
-		perf_install_in_context(ctx, group_leader, group_leader->cpu);
-	}
+	if (move_group)
+		perf_event_move_group(group_leader, pmu_ctx, ctx);

 	/*
	 * Precalculate sample_data sizes; do while holding ctx::mutex such
From patchwork Tue Aug 8 06:31:03 2023
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini, Peter Zijlstra, Arnaldo Carvalho de Melo, Kan Liang, Like Xu, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter
Cc: kvm@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan, Yang Weijiang, Dapeng Mi, Marc Zyngier
Subject: [PATCH RFV v2 05/13] perf/core: Add function perf_event_create_group_kernel_counters()
Date: Tue, 8 Aug 2023 14:31:03 +0800
Message-Id: <20230808063111.1870070-6-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com>
References: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com>

Add a function, perf_event_create_group_kernel_counters(), which can be used
to create group perf events from kernel space.

Compared with modifying perf_event_create_kernel_counter() directly to
support creating group events, adding a new function is the better approach:
perf_event_create_kernel_counter() is called from many places in the kernel,
and modifying it directly would require a large number of changes.

Kernel space may want to create group events just as the user-space perf tool
does. One example is supporting the topdown metrics feature in KVM: current
perf logic requires creating a perf event group to handle topdown metrics
profiling, coupling one slots event acting as group leader with multiple
metric events. To support topdown metrics, KVM has to follow this requirement
and create the event group from kernel space; hence this new function.

Suggested-by: Marc Zyngier
Signed-off-by: Dapeng Mi
---
 include/linux/perf_event.h |  6 ++++++
 kernel/events/core.c       | 39 ++++++++++++++++++++++++++++++++++++--
 2 files changed, 43 insertions(+), 2 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 2166a69e3bf2..e95152531f4c 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1104,6 +1104,12 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr,
				struct task_struct *task,
				perf_overflow_handler_t callback,
				void *context);
+extern struct perf_event *
+perf_event_create_group_kernel_counters(struct perf_event_attr *attr,
+					int cpu, struct task_struct *task,
+					struct perf_event *group_leader,
+					perf_overflow_handler_t overflow_handler,
+					void *context);
 extern void perf_pmu_migrate_context(struct pmu *pmu,
				int src_cpu, int dst_cpu);
 int perf_event_read_local(struct perf_event *event, u64 *value,

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 15eb82d1a010..1877171e9590 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -12762,11 +12762,34 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
				 struct task_struct *task,
				 perf_overflow_handler_t overflow_handler,
				 void *context)
+{
+	return perf_event_create_group_kernel_counters(attr, cpu, task,
+				NULL, overflow_handler, context);
+}
+EXPORT_SYMBOL_GPL(perf_event_create_kernel_counter);
+
+/**
+ * perf_event_create_group_kernel_counters
+ *
+ * @attr: attributes of the counter to create
+ * @cpu: cpu in which the counter is bound
+ * @task: task to profile (NULL for percpu)
+ * @group_leader: the group leader event of the created event
+ * @overflow_handler: callback to trigger when we hit the event
+ * @context: context data could be used in overflow_handler callback
+ */
+struct perf_event *
+perf_event_create_group_kernel_counters(struct perf_event_attr *attr,
+					int cpu, struct task_struct *task,
+					struct perf_event *group_leader,
+					perf_overflow_handler_t overflow_handler,
+					void *context)
 {
 	struct perf_event_pmu_context *pmu_ctx;
 	struct perf_event_context *ctx;
 	struct perf_event *event;
 	struct pmu *pmu;
+	int move_group = 0;
 	int err;

 	/*
@@ -12776,7 +12799,11 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
 	if (attr->aux_output)
 		return ERR_PTR(-EINVAL);

-	event = perf_event_alloc(attr, cpu, task, NULL, NULL,
+	if (task && group_leader &&
+	    group_leader->attr.inherit != attr->inherit)
+		return ERR_PTR(-EINVAL);
+
+	event = perf_event_alloc(attr, cpu, task, group_leader, NULL,
				 overflow_handler, context, -1);
 	if (IS_ERR(event)) {
 		err = PTR_ERR(event);
@@ -12806,6 +12833,11 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
 		goto err_unlock;
 	}
+	err = perf_event_group_leader_check(group_leader, event, attr, ctx,
+					    &pmu, &move_group);
+	if (err)
+		goto err_unlock;
+
 	pmu_ctx = find_get_pmu_context(pmu, ctx, event);
 	if (IS_ERR(pmu_ctx)) {
 		err = PTR_ERR(pmu_ctx);
@@ -12833,6 +12865,9 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
 		goto err_pmu_ctx;
 	}

+	if (move_group)
+		perf_event_move_group(group_leader, pmu_ctx, ctx);
+
 	perf_install_in_context(ctx, event, event->cpu);
 	perf_unpin_context(ctx);
 	mutex_unlock(&ctx->mutex);
@@ -12851,7 +12886,7 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
 err:
 	return ERR_PTR(err);
 }
-EXPORT_SYMBOL_GPL(perf_event_create_kernel_counter);
+EXPORT_SYMBOL_GPL(perf_event_create_group_kernel_counters);

 static void __perf_pmu_remove(struct perf_event_context *ctx, int cpu,
			      struct pmu *pmu,
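[To make the intended call pattern concrete, here is a minimal sketch, not
from the series, of how an in-kernel user such as KVM might build the topdown
group with the new API. The attribute configs follow the slots encoding
(event=0xa4, umask=0x01) and the vmetrics pseudo-encoding (0x1100) introduced
in patch 07; the function name and error handling are illustrative.]

	/* sketch only: create a slots leader, then attach a sibling to it */
	static struct perf_event *create_topdown_group(struct task_struct *task)
	{
		struct perf_event_attr slots = {
			.type	= PERF_TYPE_RAW,
			.size	= sizeof(slots),
			.config	= 0x01a4,	/* slots: event=0xa4, umask=0x01 */
			.pinned	= 1,
		};
		struct perf_event_attr sibling = {
			.type	= PERF_TYPE_RAW,
			.size	= sizeof(sibling),
			.config	= 0x1100,	/* INTEL_FIXED_VMETRICS_EVENT, patch 07 */
		};
		struct perf_event *leader, *event;

		/* the leader is created the usual way (group_leader == NULL) */
		leader = perf_event_create_kernel_counter(&slots, -1, task,
							  NULL, NULL);
		if (IS_ERR(leader))
			return leader;

		/* siblings go through the new group-aware variant */
		event = perf_event_create_group_kernel_counters(&sibling, -1, task,
								leader, NULL, NULL);
		if (IS_ERR(event)) {
			perf_event_release_kernel(leader);
			return (struct perf_event *)event;
		}
		return leader;
	}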
From patchwork Tue Aug 8 06:31:04 2023
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini, Peter Zijlstra, Arnaldo Carvalho de Melo, Kan Liang, Like Xu, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter
Cc: kvm@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan, Yang Weijiang, Dapeng Mi
Subject: [PATCH RFV v2 06/13] perf/x86: Fix typos and inconsistent indents in perf_event header
Date: Tue, 8 Aug 2023 14:31:04 +0800
Message-Id: <20230808063111.1870070-7-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com>
References: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com>

There are a typo and some inconsistent indents in the perf_event.h header
file. Fix them.

Signed-off-by: Dapeng Mi
---
 arch/x86/include/asm/perf_event.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 85a9fd5a3ec3..63e1ce1f4b27 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -386,15 +386,15 @@ static inline bool is_topdown_idx(int idx)
 *
 * With this fake counter assigned, the guest LBR event user (such as KVM),
 * can program the LBR registers on its own, and we don't actually do anything
- * with then in the host context.
+ * with them in the host context.
 */
-#define INTEL_PMC_IDX_FIXED_VLBR (GLOBAL_STATUS_LBRS_FROZEN_BIT)
+#define INTEL_PMC_IDX_FIXED_VLBR	(GLOBAL_STATUS_LBRS_FROZEN_BIT)

 /*
 * Pseudo-encoding the guest LBR event as event=0x00,umask=0x1b,
 * since it would claim bit 58 which is effectively Fixed26.
 */
-#define INTEL_FIXED_VLBR_EVENT 0x1b00
+#define INTEL_FIXED_VLBR_EVENT	0x1b00

 /*
 * Adaptive PEBS v4
From patchwork Tue Aug 8 06:31:05 2023
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini, Peter Zijlstra, Arnaldo Carvalho de Melo, Kan Liang, Like Xu, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter
Cc: kvm@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan, Yang Weijiang, Dapeng Mi
Subject: [PATCH RFV v2 07/13] perf/x86: Add constraint for guest perf metrics event
Date: Tue, 8 Aug 2023 14:31:05 +0800
Message-Id: <20230808063111.1870070-8-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com>
References: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com>

When a guest wants to use the PERF_METRICS MSR, a virtual metrics event needs
to be created in the perf subsystem so that the guest can take exclusive
ownership of the PERF_METRICS MSR. Introduce a new vmetrics constraint, so
that this virtual metrics event can be coupled with the slots event into an
event group that participates in host perf scheduling.

Since guest metric events are always recognized as events of the vCPU process
on the host, they are time-sharing multiplexed with other host metric events;
bit 48 (INTEL_PMC_IDX_METRIC_BASE) is therefore chosen as the index of this
virtual metrics event.

Co-developed-by: Yang Weijiang
Signed-off-by: Yang Weijiang
Signed-off-by: Dapeng Mi
---
 arch/x86/events/intel/core.c      | 28 +++++++++++++++++++++-------
 arch/x86/events/perf_event.h      |  1 +
 arch/x86/include/asm/perf_event.h | 15 +++++++++++++++
 3 files changed, 37 insertions(+), 7 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 2a284ba951b7..60a2384cd936 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3147,17 +3147,26 @@ intel_bts_constraints(struct perf_event *event)
 	return NULL;
 }

+static struct event_constraint *intel_virt_event_constraints[] __read_mostly = {
+	&vlbr_constraint,
+	&vmetrics_constraint,
+};
+
 /*
- * Note: matches a fake event, like Fixed2.
+ * Note: matches a virtual event, like vmetrics.
 */
 static struct event_constraint *
-intel_vlbr_constraints(struct perf_event *event)
+intel_virt_constraints(struct perf_event *event)
 {
-	struct event_constraint *c = &vlbr_constraint;
+	int i;
+	struct event_constraint *c;

-	if (unlikely(constraint_match(c, event->hw.config))) {
-		event->hw.flags |= c->flags;
-		return c;
+	for (i = 0; i < ARRAY_SIZE(intel_virt_event_constraints); i++) {
+		c = intel_virt_event_constraints[i];
+		if (unlikely(constraint_match(c, event->hw.config))) {
+			event->hw.flags |= c->flags;
+			return c;
+		}
 	}

 	return NULL;
@@ -3357,7 +3366,7 @@ __intel_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
 {
 	struct event_constraint *c;

-	c = intel_vlbr_constraints(event);
+	c = intel_virt_constraints(event);
 	if (c)
 		return c;

@@ -5349,6 +5358,11 @@ static struct attribute *spr_tsx_events_attrs[] = {
 	NULL,
 };

+struct event_constraint vmetrics_constraint =
+	__EVENT_CONSTRAINT(INTEL_FIXED_VMETRICS_EVENT,
+			   (1ULL << INTEL_PMC_IDX_FIXED_VMETRICS),
+			   FIXED_EVENT_FLAGS, 1, 0, 0);
+
 static ssize_t freeze_on_smi_show(struct device *cdev,
				  struct device_attribute *attr,
				  char *buf)

diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index d6de4487348c..895c572f379c 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1482,6 +1482,7 @@ void reserve_lbr_buffers(void);

 extern struct event_constraint bts_constraint;
 extern struct event_constraint vlbr_constraint;
+extern struct event_constraint vmetrics_constraint;

 void intel_pmu_enable_bts(u64 config);

diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 63e1ce1f4b27..d767807aae91 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -390,6 +390,21 @@ static inline bool is_topdown_idx(int idx)
 */
 #define INTEL_PMC_IDX_FIXED_VLBR	(GLOBAL_STATUS_LBRS_FROZEN_BIT)

+/*
+ * We model guest TopDown metrics event tracing similarly.
+ *
+ * Guest metric events are recognized as vCPU process's events on host, they
+ * would be time-sharing multiplexed with other host metric events, so that
+ * we choose bit 48 (INTEL_PMC_IDX_METRIC_BASE) as the index of virtual
+ * metrics event.
+ */
+#define INTEL_PMC_IDX_FIXED_VMETRICS	(INTEL_PMC_IDX_METRIC_BASE)
+
+/*
+ * Pseudo-encoding the guest metrics event as event=0x00,umask=0x11,
+ * since it would claim bit 48 which is effectively Fixed16.
+ */
+#define INTEL_FIXED_VMETRICS_EVENT	0x1100
+
 /*
 * Pseudo-encoding the guest LBR event as event=0x00,umask=0x1b,
 * since it would claim bit 58 which is effectively Fixed26.
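[A small standalone sketch, not from the patch, showing why 0x1100 decodes to
event=0x00/umask=0x11 and why bit 48 is "effectively Fixed16": the fixed
counter index space starts at bit 32 (INTEL_PMC_IDX_FIXED), so metric base 48
lands 16 slots into it. The base value 32 is stated here as a known kernel
constant, not quoted from this diff.]

	#include <stdio.h>

	#define INTEL_PMC_IDX_FIXED		32	/* known kernel constant */
	#define INTEL_PMC_IDX_METRIC_BASE	48
	#define INTEL_FIXED_VMETRICS_EVENT	0x1100

	int main(void)
	{
		unsigned cfg = INTEL_FIXED_VMETRICS_EVENT;

		/* x86 raw config: eventsel in bits 0-7, umask in bits 8-15 */
		printf("eventsel=0x%02x umask=0x%02x\n",
		       cfg & 0xff, (cfg >> 8) & 0xff);
		printf("counter bit %d -> Fixed%d\n", INTEL_PMC_IDX_METRIC_BASE,
		       INTEL_PMC_IDX_METRIC_BASE - INTEL_PMC_IDX_FIXED);
		return 0;
	}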
From patchwork Tue Aug 8 06:31:06 2023
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini, Peter Zijlstra, Arnaldo Carvalho de Melo, Kan Liang, Like Xu, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter
Cc: kvm@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan, Yang Weijiang, Dapeng Mi
Subject: [PATCH RFV v2 08/13] perf/core: Add new function perf_event_topdown_metrics()
Date: Tue, 8 Aug 2023 14:31:06 +0800
Message-Id: <20230808063111.1870070-9-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com>
References: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com>

Add a new function, perf_event_topdown_metrics(). It is quite similar to
perf_event_period(), but it writes a slots count and raw metrics data into
the perf subsystem instead of a sample period. When the guest restores the
FIXED_CTR3 and PERF_METRICS MSRs during sched-in, KVM needs to trap the MSR
writes and propagate the guest values into the corresponding perf events,
just as perf_event_period() does for periods.
Initially we tried to reuse perf_event_period() to set the slots/metrics
values, but that proved difficult: perf_event_period() only works on sampling
events, whereas the slots event and the metric events in topdown mode are all
non-sampling events, and its call chain is full of sampling-event checks and
sampling-period handling. Reusing it would have meant threading if-else
changes through the whole call chain and arguably renaming the function,
thoroughly messing up perf_event_period(). We therefore create a new
function, perf_event_topdown_metrics(), to set the slots/metrics values;
this keeps both the logic and the code clearer.

Signed-off-by: Dapeng Mi
---
 include/linux/perf_event.h | 13 ++++++++
 kernel/events/core.c       | 62 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 75 insertions(+)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index e95152531f4c..fe12a2ea10d9 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1661,6 +1661,11 @@ perf_event_addr_filters(struct perf_event *event)
 	return ifh;
 }

+struct td_metrics {
+	u64 slots;
+	u64 metric;
+};
+
 extern void perf_event_addr_filters_sync(struct perf_event *event);
 extern void perf_report_aux_output_id(struct perf_event *event, u64 hw_id);

@@ -1695,6 +1700,8 @@ extern void perf_event_task_tick(void);
 extern int perf_event_account_interrupt(struct perf_event *event);
 extern int perf_event_period(struct perf_event *event, u64 value);
 extern u64 perf_event_pause(struct perf_event *event, bool reset);
+extern int perf_event_topdown_metrics(struct perf_event *event,
+				      struct td_metrics *value);
 #else /* !CONFIG_PERF_EVENTS: */
 static inline void *
 perf_aux_output_begin(struct perf_output_handle *handle,
@@ -1781,6 +1788,12 @@ static inline u64 perf_event_pause(struct perf_event *event, bool reset)
 {
 	return 0;
 }
+
+static inline int perf_event_topdown_metrics(struct perf_event *event,
+					     struct td_metrics *value)
+{
+	return 0;
+}
 #endif

 #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 1877171e9590..6fa86e8bfb89 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5776,6 +5776,68 @@ int perf_event_period(struct perf_event *event, u64 value)
 }
 EXPORT_SYMBOL_GPL(perf_event_period);

+static void __perf_event_topdown_metrics(struct perf_event *event,
+					 struct perf_cpu_context *cpuctx,
+					 struct perf_event_context *ctx,
+					 void *info)
+{
+	struct td_metrics *td_metrics = (struct td_metrics *)info;
+	bool active;
+
+	active = (event->state == PERF_EVENT_STATE_ACTIVE);
+	if (active) {
+		perf_pmu_disable(event->pmu);
+		/*
+		 * We could be throttled; unthrottle now to avoid the tick
+		 * trying to unthrottle while we already re-started the event.
+		 */
+		if (event->hw.interrupts == MAX_INTERRUPTS) {
+			event->hw.interrupts = 0;
+			perf_log_throttle(event, 1);
+		}
+		event->pmu->stop(event, PERF_EF_UPDATE);
+	}
+
+	event->hw.saved_slots = td_metrics->slots;
+	event->hw.saved_metric = td_metrics->metric;
+
+	if (active) {
+		event->pmu->start(event, PERF_EF_RELOAD);
+		perf_pmu_enable(event->pmu);
+	}
+}
+
+static int _perf_event_topdown_metrics(struct perf_event *event,
+				       struct td_metrics *value)
+{
+	/*
+	 * Slots event in topdown metrics scenario
+	 * must be non-sampling event.
+	 */
+	if (is_sampling_event(event))
+		return -EINVAL;
+
+	if (!value)
+		return -EINVAL;
+
+	event_function_call(event, __perf_event_topdown_metrics, value);
+
+	return 0;
+}
+
+int perf_event_topdown_metrics(struct perf_event *event, struct td_metrics *value)
+{
+	struct perf_event_context *ctx;
+	int ret;
+
+	ctx = perf_event_ctx_lock(event);
+	ret = _perf_event_topdown_metrics(event, value);
+	perf_event_ctx_unlock(event, ctx);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(perf_event_topdown_metrics);
+
 static const struct file_operations perf_fops;

 static inline int perf_fget_light(int fd, struct fd *p)
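[A sketch, not part of this patch, of the intended caller: on a trapped guest
write to IA32_FIXED_CTR3 or PERF_METRICS, KVM would push both saved values
into the slots event. The function name and its KVM-side parameters are
illustrative.]

	/* sketch: propagate trapped guest MSR values into the slots event */
	static int sketch_set_guest_topdown(struct perf_event *slots_event,
					    u64 guest_slots, u64 guest_metrics)
	{
		struct td_metrics value = {
			.slots	= guest_slots,
			.metric	= guest_metrics,
		};

		/* rejects sampling events; stops/restarts the event as needed */
		return perf_event_topdown_metrics(slots_event, &value);
	}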
From patchwork Tue Aug 8 06:31:07 2023
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini, Peter Zijlstra, Arnaldo Carvalho de Melo, Kan Liang, Like Xu, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter
Cc: kvm@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan, Yang Weijiang, Dapeng Mi
Subject: [PATCH RFV v2 09/13] perf/x86/intel: Handle KVM virtual metrics event in perf system
Date: Tue, 8 Aug 2023 14:31:07 +0800
Message-Id: <20230808063111.1870070-10-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com>
References: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com>

KVM creates a virtual metrics event to claim the PERF_METRICS MSR, but the
perf subsystem cannot recognize this virtual metrics event because it uses a
different event code from the known metrics events. Modify the perf code so
that the KVM virtual metrics event is recognized and processed by the perf
subsystem.

The counter of the virtual metrics event does not hold a real count value
like other events; instead it stores the raw data of the PERF_METRICS MSR,
so that KVM can retrieve the raw PERF_METRICS data after the virtual metrics
event is disabled.

Signed-off-by: Dapeng Mi
---
 arch/x86/events/intel/core.c | 37 ++++++++++++++++++++++++++++--------
 arch/x86/events/perf_event.h |  9 ++++++++-
 2 files changed, 37 insertions(+), 9 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 60a2384cd936..564f602b81f1 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2608,6 +2608,15 @@ static void __icl_update_topdown_event(struct perf_event *event,
 	}
 }

+static inline void __icl_update_vmetrics_event(struct perf_event *event, u64 metrics)
+{
+	/*
+	 * For the guest metrics event, the count would be used to save
+	 * the raw data of PERF_METRICS MSR.
+	 */
+	local64_set(&event->count, metrics);
+}
+
 static void update_saved_topdown_regs(struct perf_event *event, u64 slots,
				      u64 metrics, int metric_end)
 {
@@ -2627,6 +2636,17 @@ static void update_saved_topdown_regs(struct perf_event *event, u64 slots,
 	}
 }

+static inline void _intel_update_topdown_event(struct perf_event *event,
+					       u64 slots, u64 metrics,
+					       u64 last_slots, u64 last_metrics)
+{
+	if (is_vmetrics_event(event))
+		__icl_update_vmetrics_event(event, metrics);
+	else
+		__icl_update_topdown_event(event, slots, metrics,
+					   last_slots, last_metrics);
+}
+
 /*
 * Update all active Topdown events.
 *
@@ -2654,9 +2674,9 @@ static u64 intel_update_topdown_event(struct perf_event *event, int metric_end)
 		if (!is_topdown_idx(idx))
 			continue;
 		other = cpuc->events[idx];
-		__icl_update_topdown_event(other, slots, metrics,
-					   event ? event->hw.saved_slots : 0,
-					   event ? event->hw.saved_metric : 0);
+		_intel_update_topdown_event(other, slots, metrics,
+					    event ? event->hw.saved_slots : 0,
+					    event ? event->hw.saved_metric : 0);
 	}

 	/*
@@ -2664,9 +2684,9 @@ static u64 intel_update_topdown_event(struct perf_event *event, int metric_end)
	 * in active_mask e.g. x86_pmu_stop()
 	 * x86_pmu_stop()
 	 */
 	if (event && !test_bit(event->hw.idx, cpuc->active_mask)) {
-		__icl_update_topdown_event(event, slots, metrics,
-					   event->hw.saved_slots,
-					   event->hw.saved_metric);
+		_intel_update_topdown_event(event, slots, metrics,
+					    event->hw.saved_slots,
+					    event->hw.saved_metric);

 		/*
 		 * In x86_pmu_stop(), the event is cleared in active_mask first,
@@ -3847,8 +3867,9 @@ static int core_pmu_hw_config(struct perf_event *event)

 static bool is_available_metric_event(struct perf_event *event)
 {
-	return is_metric_event(event) &&
-		event->attr.config <= INTEL_TD_METRIC_AVAILABLE_MAX;
+	return (is_metric_event(event) &&
+		event->attr.config <= INTEL_TD_METRIC_AVAILABLE_MAX) ||
+		is_vmetrics_event(event);
 }

 static inline bool is_mem_loads_event(struct perf_event *event)
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 895c572f379c..e0703f743713 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -105,9 +105,16 @@ static inline bool is_slots_event(struct perf_event *event)
 	return (event->attr.config & INTEL_ARCH_EVENT_MASK) == INTEL_TD_SLOTS;
 }

+static inline bool is_vmetrics_event(struct perf_event *event)
+{
+	return (event->attr.config & INTEL_ARCH_EVENT_MASK) ==
+				INTEL_FIXED_VMETRICS_EVENT;
+}
+
 static inline bool is_topdown_event(struct perf_event *event)
 {
-	return is_metric_event(event) || is_slots_event(event);
+	return is_metric_event(event) || is_slots_event(event) ||
+	       is_vmetrics_event(event);
 }

 struct amd_nb {
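As an aside on what the stored raw value means: the count of the vmetrics event holds the PERF_METRICS MSR image verbatim. A standalone sketch of decoding such an image, assuming the SDM's level-1 field layout (one 8-bit fraction per topdown metric, scaled by 255); the sample value is made up:

/* Decode a raw PERF_METRICS image into level-1 topdown percentages. */
#include <stdint.h>
#include <stdio.h>

static double td_fraction(uint64_t metrics, unsigned int field)
{
	/* Field order per the SDM: retiring, bad spec, FE bound, BE bound. */
	return (double)((metrics >> (field * 8)) & 0xff) / 0xff;
}

int main(void)
{
	uint64_t raw = 0x4d331966;	/* made-up raw PERF_METRICS value */

	printf("retiring:        %4.1f%%\n", td_fraction(raw, 0) * 100.0);
	printf("bad speculation: %4.1f%%\n", td_fraction(raw, 1) * 100.0);
	printf("frontend bound:  %4.1f%%\n", td_fraction(raw, 2) * 100.0);
	printf("backend bound:   %4.1f%%\n", td_fraction(raw, 3) * 100.0);
	return 0;
}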
d="scan'208";a="734377849" Received: from dmi-pnp-i7.sh.intel.com ([10.239.159.155]) by fmsmga007.fm.intel.com with ESMTP; 07 Aug 2023 23:27:29 -0700 From: Dapeng Mi To: Sean Christopherson , Paolo Bonzini , Peter Zijlstra , Arnaldo Carvalho de Melo , Kan Liang , Like Xu , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Ian Rogers , Adrian Hunter Cc: kvm@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, Zhenyu Wang , Zhang Xiong , Lv Zhiyuan , Yang Weijiang , Dapeng Mi , Dapeng Mi Subject: [PATCH RFV v2 10/13] KVM: x86/pmu: Extend pmc_reprogram_counter() to create group events Date: Tue, 8 Aug 2023 14:31:08 +0800 Message-Id: <20230808063111.1870070-11-dapeng1.mi@linux.intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com> References: <20230808063111.1870070-1-dapeng1.mi@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Current perf code creates a events group which contains a slots event that acts as group leader and multiple metric events to support the topdown perf metrics feature. To support the topdown metrics feature in KVM and reduce the changes for perf system at the same time, we follow this mature mechanism and create a events group in KVM. The events group contains a slots event which claims the fixed counter 3 and act as group leader as perf system requires, and a virtual metrics event which claims PERF_METRICS MSR. This events group would be scheduled as a whole by the perf system. Unfortunately the function pmc_reprogram_counter() can only create a single event for every counter, so this change extends the function and makes it have the capability to create a events group. Co-developed-by: Like Xu Signed-off-by: Like Xu Signed-off-by: Dapeng Mi --- arch/x86/include/asm/kvm_host.h | 11 +++++- arch/x86/kvm/pmu.c | 65 ++++++++++++++++++++++++++------- arch/x86/kvm/pmu.h | 22 ++++++++--- arch/x86/kvm/svm/pmu.c | 2 + arch/x86/kvm/vmx/pmu_intel.c | 4 ++ 5 files changed, 84 insertions(+), 20 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 51fb7728c407..9216ac912728 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -490,12 +490,12 @@ enum pmc_type { struct kvm_pmc { enum pmc_type type; u8 idx; + u8 max_nr_events; bool is_paused; bool intr; u64 counter; u64 prev_counter; u64 eventsel; - struct perf_event *perf_event; struct kvm_vcpu *vcpu; /* * only for creating or reusing perf_event, @@ -503,6 +503,15 @@ struct kvm_pmc { * ctrl value for fixed counters. */ u64 current_config; + /* + * Non-leader events may need some extra information, + * this field can be used to store this information. 
+	 */
+	u64 extra_config;
+	union {
+		struct perf_event *perf_event;
+		DECLARE_FLEX_ARRAY(struct perf_event *, perf_events);
+	};
 };

 /* More counters may conflict with other existing Architectural MSRs */
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index edb89b51b383..21dd7a4e5658 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -187,7 +187,7 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 				 bool intr)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
-	struct perf_event *event;
+	struct perf_event *event, *group_leader;
 	struct perf_event_attr attr = {
 		.type = type,
 		.size = sizeof(attr),
@@ -199,6 +199,7 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 		.config = config,
 	};
 	bool pebs = test_bit(pmc->idx, (unsigned long *)&pmu->pebs_enable);
+	unsigned int i, j;

 	attr.sample_period = get_sample_period(pmc, pmc->counter);

@@ -221,36 +222,74 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 		attr.precise_ip = pmc_get_pebs_precise_level(pmc);
 	}

-	event = perf_event_create_kernel_counter(&attr, -1, current,
-						 kvm_perf_overflow, pmc);
-	if (IS_ERR(event)) {
-		pr_debug_ratelimited("kvm_pmu: event creation failed %ld for pmc->idx = %d\n",
-				     PTR_ERR(event), pmc->idx);
-		return PTR_ERR(event);
+	/*
+	 * When creating grouped events, the first created perf_event doesn't
+	 * know it will be the group leader and may take an unexpected
+	 * enabling path, so delay all enablement until after creation.
+	 * Non-grouped events are not affected, which saves one perf
+	 * interface call.
+	 */
+	if (pmc->max_nr_events > 1)
+		attr.disabled = 1;
+
+	for (i = 0; i < pmc->max_nr_events; i++) {
+		group_leader = i ? pmc->perf_event : NULL;
+		event = perf_event_create_group_kernel_counters(&attr, -1,
+						current, group_leader,
+						kvm_perf_overflow, pmc);
+		if (IS_ERR(event)) {
+			pr_err_ratelimited("kvm_pmu: event %u of pmc %u creation failed %ld\n",
+					   i, pmc->idx, PTR_ERR(event));

+			for (j = 0; j < i; j++) {
+				perf_event_release_kernel(pmc->perf_events[j]);
+				pmc->perf_events[j] = NULL;
+				pmc_to_pmu(pmc)->event_count--;
+			}
+
+			return PTR_ERR(event);
+		}
+
+		pmc->perf_events[i] = event;
+		pmc_to_pmu(pmc)->event_count++;
 	}

-	pmc->perf_event = event;
-	pmc_to_pmu(pmc)->event_count++;
 	pmc->is_paused = false;
 	pmc->intr = intr || pebs;
+
+	if (!attr.disabled)
+		return 0;
+
+	for (i = 0; i < pmc->max_nr_events; i++)
+		perf_event_enable(pmc->perf_events[i]);
+
 	return 0;
 }

 static void pmc_pause_counter(struct kvm_pmc *pmc)
 {
 	u64 counter = pmc->counter;
+	unsigned int i;

 	if (!pmc->perf_event || pmc->is_paused)
 		return;

-	/* update counter, reset event value to avoid redundant accumulation */
+	/*
+	 * Update counter, reset event value to avoid redundant
+	 * accumulation. Disable the group leader event first and
+	 * then disable the non-group-leader events.
+	 */
 	counter += perf_event_pause(pmc->perf_event, true);
+	for (i = 1; i < pmc->max_nr_events; i++)
+		perf_event_pause(pmc->perf_events[i], true);
+
 	pmc->counter = counter & pmc_bitmask(pmc);
 	pmc->is_paused = true;
 }

 static bool pmc_resume_counter(struct kvm_pmc *pmc)
 {
+	unsigned int i;
+
 	if (!pmc->perf_event)
 		return false;

@@ -264,8 +303,8 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
 	    (!!pmc->perf_event->attr.precise_ip))
 		return false;

-	/* reuse perf_event to serve as pmc_reprogram_counter() does*/
-	perf_event_enable(pmc->perf_event);
+	for (i = 0; i < pmc->max_nr_events; i++)
+		perf_event_enable(pmc->perf_events[i]);

 	pmc->is_paused = false;

 	return true;
@@ -432,7 +471,7 @@ static void reprogram_counter(struct kvm_pmc *pmc)
 	if (pmc->current_config == new_config && pmc_resume_counter(pmc))
 		goto reprogram_complete;

-	pmc_release_perf_event(pmc);
+	pmc_release_perf_event(pmc, false);

 	pmc->current_config = new_config;
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 7d9ba301c090..5eea630b959c 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -74,21 +74,31 @@ static inline u64 pmc_read_counter(struct kvm_pmc *pmc)
 	return counter & pmc_bitmask(pmc);
 }

-static inline void pmc_release_perf_event(struct kvm_pmc *pmc)
+static inline void pmc_release_perf_event(struct kvm_pmc *pmc, bool reset)
 {
-	if (pmc->perf_event) {
-		perf_event_release_kernel(pmc->perf_event);
-		pmc->perf_event = NULL;
-		pmc->current_config = 0;
+	unsigned int i;
+
+	if (!pmc->perf_event)
+		return;
+
+	for (i = 0; i < pmc->max_nr_events; i++) {
+		perf_event_release_kernel(pmc->perf_events[i]);
+		pmc->perf_events[i] = NULL;
 		pmc_to_pmu(pmc)->event_count--;
 	}
+
+	if (reset) {
+		pmc->current_config = 0;
+		pmc->extra_config = 0;
+		pmc->max_nr_events = 1;
+	}
 }

 static inline void pmc_stop_counter(struct kvm_pmc *pmc)
 {
 	if (pmc->perf_event) {
 		pmc->counter = pmc_read_counter(pmc);
-		pmc_release_perf_event(pmc);
+		pmc_release_perf_event(pmc, true);
 	}
 }
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index cef5a3d0abd0..861ff79ac614 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -230,6 +230,8 @@ static void amd_pmu_init(struct kvm_vcpu *vcpu)
 		pmu->gp_counters[i].vcpu = vcpu;
 		pmu->gp_counters[i].idx = i;
 		pmu->gp_counters[i].current_config = 0;
+		pmu->gp_counters[i].extra_config = 0;
+		pmu->gp_counters[i].max_nr_events = 1;
 	}
 }

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 044d61aa63dc..5393ef206255 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -628,6 +628,8 @@ static void intel_pmu_init(struct kvm_vcpu *vcpu)
 		pmu->gp_counters[i].vcpu = vcpu;
 		pmu->gp_counters[i].idx = i;
 		pmu->gp_counters[i].current_config = 0;
+		pmu->gp_counters[i].extra_config = 0;
+		pmu->gp_counters[i].max_nr_events = 1;
 	}

 	for (i = 0; i < KVM_PMC_MAX_FIXED; i++) {
@@ -635,6 +637,8 @@ static void intel_pmu_init(struct kvm_vcpu *vcpu)
 		pmu->fixed_counters[i].vcpu = vcpu;
 		pmu->fixed_counters[i].idx = i + INTEL_PMC_IDX_FIXED;
 		pmu->fixed_counters[i].current_config = 0;
+		pmu->fixed_counters[i].extra_config = 0;
+		pmu->fixed_counters[i].max_nr_events = 1;
 	}

 	lbr_desc->records.nr = 0;
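A self-contained model of the create/unwind/enable pattern used in pmc_reprogram_counter() above may help. fake_event and fake_create() are stand-ins, not kernel APIs; the point is the ordering: create the leader first with enablement deferred, attach members to it, unwind everything on failure, and enable only once the whole group exists:

#include <stdio.h>
#include <stdlib.h>

#define MAX_EVENTS 2

struct fake_event { int enabled; };

static struct fake_event *fake_create(struct fake_event *leader, int fail)
{
	(void)leader;	/* a real implementation would link to the leader */
	if (fail)
		return NULL;
	return calloc(1, sizeof(struct fake_event));
}

static int create_group(struct fake_event *ev[], int nr, int fail_at)
{
	int i, j;

	for (i = 0; i < nr; i++) {
		ev[i] = fake_create(i ? ev[0] : NULL, i == fail_at);
		if (!ev[i]) {
			for (j = 0; j < i; j++) {	/* unwind partial group */
				free(ev[j]);
				ev[j] = NULL;
			}
			return -1;
		}
	}
	if (nr > 1)				/* deferred enable */
		for (i = 0; i < nr; i++)
			ev[i]->enabled = 1;
	return 0;
}

int main(void)
{
	struct fake_event *ev[MAX_EVENTS] = { NULL };
	int i;

	printf("full group:   %d\n", create_group(ev, MAX_EVENTS, -1));
	for (i = 0; i < MAX_EVENTS; i++)
		free(ev[i]);
	printf("member fails: %d\n", create_group(ev, MAX_EVENTS, 1));
	return 0;
}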
From patchwork Tue Aug 8 06:31:09 2023
From: Dapeng Mi
Subject: [PATCH RFV v2 11/13] KVM: x86/pmu: Support topdown perf metrics feature
Date: Tue, 8 Aug 2023 14:31:09 +0800
Message-Id: <20230808063111.1870070-12-dapeng1.mi@linux.intel.com>

Add topdown perf metrics support to KVM. Topdown perf metrics is a
feature on Intel CPUs that supports the TopDown Microarchitecture
Analysis (TMA) method, a structured methodology for identifying
critical performance bottlenecks in out-of-order processors. Details
about topdown metrics support on Intel processors can be found in the
"Performance Metrics" section of Intel's SDM.

Intel chips use fixed counter 3 and the PERF_METRICS MSR together to
support the topdown metrics feature. Fixed counter 3 counts the elapsed
CPU slots, while PERF_METRICS reports the topdown metrics percentages.
If KVM only observes fixed counter 3, it generally has no way to know
whether the guest is counting or sampling a standalone slots event, or
profiling the topdown perf metrics.
Fortunately, topdown metrics profiling always manipulates fixed counter
3 and the PERF_METRICS MSR together, in a fixed sequence: the
FIXED_CTR3 MSR is written first and PERF_METRICS follows. So KVM can
assume topdown metrics profiling is running in the guest whenever it
observes a PERF_METRICS write.

In current perf logic, an event group is required to handle topdown
metrics profiling; the group couples a slots event, which acts as group
leader, with multiple metric events. To coordinate with the perf
topdown metrics handling logic and reduce code changes in KVM, follow
the current mature vPMU PMC emulation framework. The only difference is
that for fixed counter 3 KVM creates an event group and manipulates the
FIXED_CTR3 and PERF_METRICS MSRs together, instead of creating a single
event and manipulating only the FIXED_CTR3 MSR.

When the guest first writes the PERF_METRICS MSR, KVM creates an event
group that couples a slots event with a virtual metrics event. In this
group, the slots event claims the fixed counter 3 hardware resource and
acts as the group leader, as the perf subsystem requires; the virtual
metrics event claims the PERF_METRICS MSR. The group mirrors the perf
metrics event group on the host and is scheduled by the host perf
subsystem.

In this proposal, the count of the slots event is calculated and
emulated on the host and returned to the guest just like any other
counter, but the metrics event is processed differently. KVM does not
calculate a real topdown metrics count; it simply stores the raw
PERF_METRICS MSR data and returns that raw data to the guest. The guest
therefore sees real hardware PERF_METRICS data, which preserves the
accuracy of the topdown metrics calculation.

The whole procedure can be summarized as below; a sketch of the
resulting MSR-write flow follows the list.

1. KVM intercepts PERF_METRICS MSR writes and, if it has not already
   done so, marks fixed counter 3 as having entered topdown profiling
   mode (sets the counter's max_nr_events to 2).
2. If the topdown metrics event group does not exist yet, create it
   first, then update the saved slots count and metrics data of the
   group's events with the guest values. Finally, enable the events so
   the guest values are loaded into the hardware FIXED_CTR3 and
   PERF_METRICS MSRs.
3. Modify kvm_pmu_rdpmc() to return the raw PERF_METRICS MSR data
   directly to the guest.
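A compact model of that flow, with hypothetical model_* names standing in for the KVM handlers described above (the real logic lives in the diff below):

#include <stdint.h>
#include <stdio.h>

#define KVM_TD_EVENTS_MAX 2

struct model_pmc {
	uint64_t counter;	/* guest FIXED_CTR3 value (slots) */
	uint64_t extra_config;	/* raw guest PERF_METRICS value */
	unsigned max_nr_events;	/* 1 = plain slots, 2 = slots + vmetrics */
	int group_exists;
};

/* Guest writes FIXED_CTR3 first ... */
static void model_write_fixed_ctr3(struct model_pmc *pmc, uint64_t val)
{
	pmc->counter = val;
}

/* ... then PERF_METRICS, which is what flags topdown profiling. */
static void model_write_perf_metrics(struct model_pmc *pmc, uint64_t val)
{
	pmc->extra_config = val;
	if (pmc->group_exists)
		printf("push slots=%llu metrics=%#llx into live group\n",
		       (unsigned long long)pmc->counter,
		       (unsigned long long)pmc->extra_config);
	else
		pmc->max_nr_events = KVM_TD_EVENTS_MAX; /* group created later */
}

int main(void)
{
	struct model_pmc pmc = { .max_nr_events = 1 };

	model_write_fixed_ctr3(&pmc, 1000);
	model_write_perf_metrics(&pmc, 0x4d331966);	/* first write: arm */
	printf("max_nr_events=%u\n", pmc.max_nr_events);

	pmc.group_exists = 1;			/* group created by reprogram */
	model_write_perf_metrics(&pmc, 0x4d331966);	/* later write: push */
	return 0;
}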
Signed-off-by: Dapeng Mi
---
 arch/x86/include/asm/kvm_host.h |  6 ++++
 arch/x86/kvm/pmu.c              | 62 +++++++++++++++++++++++++++++++--
 arch/x86/kvm/pmu.h              | 28 +++++++++++++++
 arch/x86/kvm/vmx/capabilities.h |  1 +
 arch/x86/kvm/vmx/pmu_intel.c    | 48 +++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.h          |  5 +++
 arch/x86/kvm/x86.c              |  1 +
 7 files changed, 149 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9216ac912728..d233ab4c027f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -487,6 +487,12 @@ enum pmc_type {
 	KVM_PMC_FIXED,
 };

+enum topdown_events {
+	KVM_TD_SLOTS = 0,
+	KVM_TD_METRICS = 1,
+	KVM_TD_EVENTS_MAX = 2,
+};
+
 struct kvm_pmc {
 	enum pmc_type type;
 	u8 idx;
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 21dd7a4e5658..c766f2041479 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -182,6 +182,30 @@ static u64 pmc_get_pebs_precise_level(struct kvm_pmc *pmc)
 	return 1;
 }

+static void pmc_setup_td_metrics_events_attr(struct kvm_pmc *pmc,
+					     struct perf_event_attr *attr,
+					     unsigned int event_idx)
+{
+	if (!pmc_is_topdown_metrics_used(pmc))
+		return;
+
+	/*
+	 * Set up the slots event attribute. When the slots event is
+	 * created for guest topdown metrics profiling, the sample
+	 * period must be 0.
+	 */
+	if (event_idx == KVM_TD_SLOTS)
+		attr->sample_period = 0;
+
+	/* Set up the vmetrics event attribute. */
+	if (event_idx == KVM_TD_METRICS) {
+		attr->config = INTEL_FIXED_VMETRICS_EVENT;
+		attr->sample_period = 0;
+		/* Only the group leader event can be pinned. */
+		attr->pinned = false;
+	}
+}
+
 static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 				 bool exclude_user, bool exclude_kernel,
 				 bool intr)
@@ -233,6 +257,8 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 	for (i = 0; i < pmc->max_nr_events; i++) {
 		group_leader = i ? pmc->perf_event : NULL;

+		pmc_setup_td_metrics_events_attr(pmc, &attr, i);
+
 		event = perf_event_create_group_kernel_counters(&attr, -1,
 						current, group_leader,
 						kvm_perf_overflow, pmc);
@@ -256,6 +282,12 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 	pmc->is_paused = false;
 	pmc->intr = intr || pebs;

+	if (pmc_is_topdown_metrics_active(pmc)) {
+		pmc_update_topdown_metrics(pmc);
+		/* KVM needs to inject a PMI for PERF_METRICS overflow. */
+		pmc->intr = true;
+	}
+
 	if (!attr.disabled)
 		return 0;

@@ -269,6 +301,7 @@ static void pmc_pause_counter(struct kvm_pmc *pmc)
 {
 	u64 counter = pmc->counter;
 	unsigned int i;
+	u64 data;

 	if (!pmc->perf_event || pmc->is_paused)
 		return;
@@ -279,8 +312,15 @@ static void pmc_pause_counter(struct kvm_pmc *pmc)
 	 * then disable non-group leader events.
 	 */
 	counter += perf_event_pause(pmc->perf_event, true);
-	for (i = 1; i < pmc->max_nr_events; i++)
-		perf_event_pause(pmc->perf_events[i], true);
+	for (i = 1; i < pmc->max_nr_events; i++) {
+		data = perf_event_pause(pmc->perf_events[i], true);
+		/*
+		 * The count of the vmetrics event actually stores the raw
+		 * data of PERF_METRICS; save it to extra_config.
+		 */
+		if (pmc->idx == INTEL_PMC_IDX_FIXED_SLOTS || i == KVM_TD_METRICS)
+			pmc->extra_config = data;
+	}

 	pmc->counter = counter & pmc_bitmask(pmc);
 	pmc->is_paused = true;
@@ -558,6 +598,21 @@ static int kvm_pmu_rdpmc_vmware(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 	return 0;
 }

+static inline int kvm_pmu_read_perf_metrics(struct kvm_vcpu *vcpu,
+					    unsigned int idx, u64 *data)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct kvm_pmc *pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR3);
+
+	if (!pmc) {
+		*data = 0;
+		return 1;
+	}
+
+	*data = pmc->extra_config;
+	return 0;
+}
+
 int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 {
 	bool fast_mode = idx & (1u << 31);
@@ -571,6 +626,9 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 	if (is_vmware_backdoor_pmc(idx))
 		return kvm_pmu_rdpmc_vmware(vcpu, idx, data);

+	if (idx & INTEL_PMC_FIXED_RDPMC_METRICS)
+		return kvm_pmu_read_perf_metrics(vcpu, idx, data);
+
 	pmc = static_call(kvm_x86_pmu_rdpmc_ecx_to_pmc)(vcpu, idx, &mask);
 	if (!pmc)
 		return 1;
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 5eea630b959c..36622316756b 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -257,6 +257,34 @@ static inline bool pmc_is_globally_enabled(struct kvm_pmc *pmc)
 	return test_bit(pmc->idx, (unsigned long *)&pmu->global_ctrl);
 }

+static inline int pmc_is_topdown_metrics_used(struct kvm_pmc *pmc)
+{
+	return (pmc->idx == INTEL_PMC_IDX_FIXED_SLOTS) &&
+	       (pmc->max_nr_events == KVM_TD_EVENTS_MAX);
+}
+
+static inline int pmc_is_topdown_metrics_active(struct kvm_pmc *pmc)
+{
+	return pmc_is_topdown_metrics_used(pmc) &&
+	       pmc->perf_events[KVM_TD_METRICS];
+}
+
+static inline void pmc_update_topdown_metrics(struct kvm_pmc *pmc)
+{
+	struct perf_event *event;
+	int i;
+
+	struct td_metrics td_metrics = {
+		.slots = pmc->counter,
+		.metric = pmc->extra_config,
+	};
+
+	for (i = 0; i < pmc->max_nr_events; i++) {
+		event = pmc->perf_events[i];
+		perf_event_topdown_metrics(event, &td_metrics);
+	}
+}
+
 void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu);
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu);
 int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned pmc, u64 *data);
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 41a4533f9989..d8317552b634 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -22,6 +22,7 @@ extern int __read_mostly pt_mode;
 #define PT_MODE_HOST_GUEST	1

 #define PMU_CAP_FW_WRITES	(1ULL << 13)
+#define PMU_CAP_PERF_METRICS	BIT_ULL(15)
 #define PMU_CAP_LBR_FMT		0x3f

 struct nested_vmx_msrs {
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 5393ef206255..d4870e92c9d3 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -229,6 +229,9 @@ static bool intel_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
 		ret = (perf_capabilities & PERF_CAP_PEBS_BASELINE) &&
 			((perf_capabilities & PERF_CAP_PEBS_FORMAT) > 3);
 		break;
+	case MSR_PERF_METRICS:
+		ret = intel_pmu_metrics_is_enabled(vcpu) && (pmu->version > 1);
+		break;
 	default:
 		ret = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0) ||
 			get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0) ||
@@ -357,6 +360,43 @@ static bool intel_pmu_handle_lbr_msrs_access(struct kvm_vcpu *vcpu,
 	return true;
 }

+static int intel_pmu_handle_perf_metrics_access(struct kvm_vcpu *vcpu,
+						struct msr_data *msr_info, bool read)
+{
+	u32 index = msr_info->index;
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct kvm_pmc *pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR3);
+
+	if (!pmc || index != MSR_PERF_METRICS)
+		return 1;
+
+	if (read) {
+		msr_info->data = pmc->extra_config;
+	} else {
+		/*
+		 * Save the guest PERF_METRICS data into extra_config; it
+		 * is read back and written into the PERF_METRICS MSR when
+		 * the event group is created later.
+		 */
+		pmc->extra_config = msr_info->data;
+		if (pmc_is_topdown_metrics_active(pmc)) {
+			pmc_update_topdown_metrics(pmc);
+		} else {
+			/*
+			 * If the slots/vmetrics event group has not been
+			 * created yet, set max_nr_events to 2 (slots event
+			 * + vmetrics event), so KVM knows topdown metrics
+			 * profiling is running in the guest and the group
+			 * will be created later.
+			 */
+			pmc->max_nr_events = KVM_TD_EVENTS_MAX;
+		}
+	}
+
+	return 0;
+}
+
 static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -376,6 +416,10 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_PEBS_DATA_CFG:
 		msr_info->data = pmu->pebs_data_cfg;
 		break;
+	case MSR_PERF_METRICS:
+		if (intel_pmu_handle_perf_metrics_access(vcpu, msr_info, true))
+			return 1;
+		break;
 	default:
 		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
 		    (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
@@ -438,6 +482,10 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		pmu->pebs_data_cfg = data;
 		break;
+	case MSR_PERF_METRICS:
+		if (intel_pmu_handle_perf_metrics_access(vcpu, msr_info, false))
+			return 1;
+		break;
 	default:
 		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
 		    (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index c2130d2c8e24..63b6dcc360c2 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -670,6 +670,11 @@ static inline bool intel_pmu_lbr_is_enabled(struct kvm_vcpu *vcpu)
 	return !!vcpu_to_lbr_records(vcpu)->nr;
 }

+static inline bool intel_pmu_metrics_is_enabled(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.perf_capabilities & PMU_CAP_PERF_METRICS;
+}
+
 void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu);
 int intel_pmu_create_guest_lbr_event(struct kvm_vcpu *vcpu);
 void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 73db594f855b..79bb07a8e074 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1463,6 +1463,7 @@ static const u32 msrs_to_save_pmu[] = {
 	MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_CORE_PERF_GLOBAL_STATUS,
 	MSR_CORE_PERF_GLOBAL_CTRL, MSR_CORE_PERF_GLOBAL_OVF_CTRL,
 	MSR_IA32_PEBS_ENABLE, MSR_IA32_DS_AREA, MSR_PEBS_DATA_CFG,
+	MSR_PERF_METRICS,

 	/* This part of MSRs should match KVM_INTEL_PMC_MAX_GENERIC. */
 	MSR_ARCH_PERFMON_PERFCTR0, MSR_ARCH_PERFMON_PERFCTR1,
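Between patches, a guest-side sketch of the consumer of this path: reading the raw PERF_METRICS image via RDPMC with the metrics type bit set in ECX (bit 29, the same bit INTEL_PMC_FIXED_RDPMC_METRICS tests above). This assumes ring-0 execution (or CR4.PCE set) in the guest and a GCC/Clang x86 build; rdpmc_metrics() is a hypothetical helper:

#include <stdint.h>

#define RDPMC_METRICS_TYPE (1u << 29)	/* matches INTEL_PMC_FIXED_RDPMC_METRICS */

static inline uint64_t rdpmc_metrics(void)
{
	uint32_t lo, hi;

	/* KVM routes this read to kvm_pmu_read_perf_metrics() above. */
	__asm__ volatile("rdpmc" : "=a"(lo), "=d"(hi) : "c"(RDPMC_METRICS_TYPE));
	return ((uint64_t)hi << 32) | lo;
}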
From patchwork Tue Aug 8 06:31:10 2023
From: Dapeng Mi
Subject: [PATCH RFV v2 12/13] KVM: x86/pmu: Handle PERF_METRICS overflow
Date: Tue, 8 Aug 2023 14:31:10 +0800
Message-Id: <20230808063111.1870070-13-dapeng1.mi@linux.intel.com>

When fixed counter 3 overflows, the PMU subsequently triggers a
PERF_METRICS overflow as well. Handle the PERF_METRICS overflow case:
after detecting a PERF_METRICS overflow on the host, inject a PMI into
the guest and set the PERF_METRICS overflow bit in the guest's
PERF_GLOBAL_STATUS MSR.
Signed-off-by: Dapeng Mi
---
 arch/x86/events/intel/core.c |  7 ++++++-
 arch/x86/kvm/pmu.c           | 19 +++++++++++++++----
 2 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 564f602b81f1..c45220804f8a 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3042,8 +3042,13 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
 	 * Intel Perf metrics
 	 */
 	if (__test_and_clear_bit(GLOBAL_STATUS_PERF_METRICS_OVF_BIT, (unsigned long *)&status)) {
+		struct perf_event *event = cpuc->events[GLOBAL_STATUS_PERF_METRICS_OVF_BIT];
+
 		handled++;
-		static_call(intel_pmu_update_topdown_event)(NULL);
+		if (event && is_vmetrics_event(event))
+			READ_ONCE(event->overflow_handler)(event, &data, regs);
+		else
+			static_call(intel_pmu_update_topdown_event)(NULL);
 	}

 	/*
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index c766f2041479..b3179931e9b2 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -101,7 +101,7 @@ static void kvm_pmi_trigger_fn(struct irq_work *irq_work)
 	kvm_pmu_deliver_pmi(vcpu);
 }

-static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi)
+static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi, bool metrics_of)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
 	bool skip_pmi = false;
@@ -121,7 +121,11 @@ static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi)
 				      (unsigned long *)&pmu->global_status);
 		}
 	} else {
-		__set_bit(pmc->idx, (unsigned long *)&pmu->global_status);
+		if (metrics_of)
+			__set_bit(GLOBAL_STATUS_PERF_METRICS_OVF_BIT,
+				  (unsigned long *)&pmu->global_status);
+		else
+			__set_bit(pmc->idx, (unsigned long *)&pmu->global_status);
 	}

 	if (!pmc->intr || skip_pmi)
@@ -141,11 +145,18 @@ static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi)
 	kvm_make_request(KVM_REQ_PMI, pmc->vcpu);
 }

+static inline bool is_vmetrics_event(struct perf_event *event)
+{
+	return (event->attr.config & INTEL_ARCH_EVENT_MASK) ==
+				INTEL_FIXED_VMETRICS_EVENT;
+}
+
 static void kvm_perf_overflow(struct perf_event *perf_event,
 			      struct perf_sample_data *data,
 			      struct pt_regs *regs)
 {
 	struct kvm_pmc *pmc = perf_event->overflow_handler_context;
+	bool metrics_of = is_vmetrics_event(perf_event);

 	/*
 	 * Ignore overflow events for counters that are scheduled to be
@@ -155,7 +166,7 @@ static void kvm_perf_overflow(struct perf_event *perf_event,
 	if (test_and_set_bit(pmc->idx, pmc_to_pmu(pmc)->reprogram_pmi))
 		return;

-	__kvm_perf_overflow(pmc, true);
+	__kvm_perf_overflow(pmc, true, metrics_of);

 	kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
 }
@@ -491,7 +502,7 @@ static void reprogram_counter(struct kvm_pmc *pmc)
 		goto reprogram_complete;

 	if (pmc->counter < pmc->prev_counter)
-		__kvm_perf_overflow(pmc, false);
+		__kvm_perf_overflow(pmc, false, false);

 	if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL)
 		printk_once("kvm pmu: pin control bit is ignored\n");
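To make the status-bit routing concrete, a minimal model of the choice __kvm_perf_overflow() now makes: a normal counter overflow sets the bit matching the counter index in GLOBAL_STATUS, while a vmetrics overflow sets the dedicated PERF_METRICS overflow bit (bit 48 architecturally; fixed counter 3 itself maps to bit 35, i.e. 32 + 3):

#include <stdint.h>
#include <stdio.h>

#define GLOBAL_STATUS_PERF_METRICS_OVF_BIT 48

static uint64_t mark_overflow(uint64_t global_status, unsigned int pmc_idx,
			      int metrics_of)
{
	unsigned int bit = metrics_of ? GLOBAL_STATUS_PERF_METRICS_OVF_BIT
				      : pmc_idx;

	return global_status | (1ULL << bit);
}

int main(void)
{
	printf("gp0 overflow:     %#llx\n",
	       (unsigned long long)mark_overflow(0, 0, 0));
	printf("metrics overflow: %#llx\n",
	       (unsigned long long)mark_overflow(0, 35, 1));
	return 0;
}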
From patchwork Tue Aug 8 06:31:11 2023
From: Dapeng Mi
Subject: [PATCH RFV v2 13/13] KVM: x86/pmu: Expose Topdown in MSR_IA32_PERF_CAPABILITIES
Date: Tue, 8 Aug 2023 14:31:11 +0800
Message-Id: <20230808063111.1870070-14-dapeng1.mi@linux.intel.com>

Topdown support is enumerated via IA32_PERF_CAPABILITIES[bit 15].
Enable this bit for the guest when the feature is available on the
host.

Co-developed-by: Yang Weijiang
Signed-off-by: Yang Weijiang
Signed-off-by: Dapeng Mi
---
 arch/x86/kvm/vmx/pmu_intel.c | 3 +++
 arch/x86/kvm/vmx/vmx.c       | 2 ++
 2 files changed, 5 insertions(+)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index d4870e92c9d3..39afc79b0981 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -614,6 +614,9 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 		(((1ull << pmu->nr_arch_fixed_counters) - 1) << INTEL_PMC_IDX_FIXED));
 	pmu->global_ctrl_mask = counter_mask;

+	if (intel_pmu_metrics_is_enabled(vcpu))
+		pmu->global_ctrl_mask &= ~(1ULL << GLOBAL_CTRL_EN_PERF_METRICS);
+
 	/*
 	 * GLOBAL_STATUS and GLOBAL_OVF_CONTROL (a.k.a. GLOBAL_STATUS_RESET)
 	 * share reserved bit definitions.
 	 * The kernel just happens to use
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 8cf1c00d9352..f9a8509f4ca8 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7831,6 +7831,8 @@ static u64 vmx_get_perf_capabilities(void)
 		perf_cap &= ~PERF_CAP_PEBS_BASELINE;
 	}

+	perf_cap |= host_perf_cap & PMU_CAP_PERF_METRICS;
+
 	return perf_cap;
 }
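Finally, a guest-side sketch of how the newly exposed capability would be probed, assuming ring-0 RDMSR access in the guest; rdmsr64() and guest_has_perf_metrics() are hypothetical helpers, with the bit value matching PMU_CAP_PERF_METRICS above:

#include <stdbool.h>
#include <stdint.h>

#define MSR_IA32_PERF_CAPABILITIES 0x345
#define PERF_CAP_PERF_METRICS      (1ULL << 15)

/* Hypothetical rdmsr wrapper; a real guest kernel would use its own. */
static inline uint64_t rdmsr64(uint32_t msr)
{
	uint32_t lo, hi;

	__asm__ volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
	return ((uint64_t)hi << 32) | lo;
}

static bool guest_has_perf_metrics(void)
{
	/* Bit 15 is set by the vmx_get_perf_capabilities() change above. */
	return rdmsr64(MSR_IA32_PERF_CAPABILITIES) & PERF_CAP_PERF_METRICS;
}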