From patchwork Tue Aug 22 05:11:28 2023
X-Patchwork-Submitter: "Mi, Dapeng"
X-Patchwork-Id: 13360154
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini, Peter Zijlstra,
    Arnaldo Carvalho de Melo, Kan Liang, Like Xu, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter
Cc: kvm@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-kernel@vger.kernel.org, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan,
    Yang Weijiang, Dapeng Mi, Dapeng Mi
Subject: [PATCH RFC v3 01/13] KVM: x86/pmu: Add Intel CPUID-hinted TopDown slots event
Date: Tue, 22 Aug 2023 13:11:28 +0800
Message-Id: <20230822051140.512879-2-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230822051140.512879-1-dapeng1.mi@linux.intel.com>
References: <20230822051140.512879-1-dapeng1.mi@linux.intel.com>

Add support for the architectural TopDown slots event, which is hinted by
CPUID.0AH.EBX. The TopDown slots event counts the total number of available
slots for an unhalted logical processor. Software can use this event as the
denominator for the top-level metrics of the TopDown Microarchitecture
Analysis (TMA) method.
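For context, an illustrative sketch (not part of this patch) of how software
uses slots as that denominator: on parts with PERF_METRICS, each level-1 TMA
metric is reported as an unsigned 8-bit fraction (value/255) of the slots
count, assuming the SDM-described field layout.

/*
 * Illustrative sketch only: derive the level-1 TMA breakdown from a
 * slots count and a PERF_METRICS snapshot. Assumed layout: retiring in
 * bits 0-7, bad speculation in bits 8-15, frontend bound in bits 16-23,
 * backend bound in bits 24-31, each a fraction of slots (value/255).
 */
#include <stdint.h>

struct tma_level1 {
	double retiring, bad_spec, fe_bound, be_bound;
};

static struct tma_level1 decode_perf_metrics(uint64_t metrics, uint64_t slots)
{
	struct tma_level1 m;

	m.retiring = (double)slots * ((metrics >>  0) & 0xff) / 255.0;
	m.bad_spec = (double)slots * ((metrics >>  8) & 0xff) / 255.0;
	m.fe_bound = (double)slots * ((metrics >> 16) & 0xff) / 255.0;
	m.be_bound = (double)slots * ((metrics >> 24) & 0xff) / 255.0;
	return m;
}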
Although the PERF_METRICS MSR required for TopDown events is not currently
available in the guest, relying only on the data provided by the slots event
is sufficient for PMU users to perceive differences in CPU pipeline
machine-width across micro-architectures.

The standalone slots event, like the instructions event, can be counted with
a GP counter or with fixed counter 3 (if any). Its availability is also
controlled by CPUID.0AH.EBX. On Linux, a perf user may encode
"-e cpu/event=0xa4,umask=0x01/" or "-e cpu/slots/" to count slots events.

This patch only enables the slots event on GP counters; enabling it on fixed
counter 3 is supported by subsequent patches.

Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Dapeng Mi
---
 arch/x86/kvm/vmx/pmu_intel.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index f2efa0bf7ae8..7322f0c18565 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -34,6 +34,7 @@ enum intel_pmu_architectural_events {
 	INTEL_ARCH_LLC_MISSES,
 	INTEL_ARCH_BRANCHES_RETIRED,
 	INTEL_ARCH_BRANCHES_MISPREDICTED,
+	INTEL_ARCH_TOPDOWN_SLOTS,

 	NR_REAL_INTEL_ARCH_EVENTS,

@@ -58,6 +59,7 @@ static struct {
 	[INTEL_ARCH_LLC_MISSES]			= { 0x2e, 0x41 },
 	[INTEL_ARCH_BRANCHES_RETIRED]		= { 0xc4, 0x00 },
 	[INTEL_ARCH_BRANCHES_MISPREDICTED]	= { 0xc5, 0x00 },
+	[INTEL_ARCH_TOPDOWN_SLOTS]		= { 0xa4, 0x01 },
 	[PSEUDO_ARCH_REFERENCE_CYCLES]		= { 0x00, 0x03 },
 };
From patchwork Tue Aug 22 05:11:29 2023
X-Patchwork-Id: 13360155
From: Dapeng Mi
Subject: [PATCH RFC v3 02/13] KVM: x86/pmu: Support PMU fixed counter 3
Date: Tue, 22 Aug 2023 13:11:29 +0800
Message-Id: <20230822051140.512879-3-dapeng1.mi@linux.intel.com>

The TopDown slots event can be enabled on a GP counter or on fixed counter 3,
and it does not differ from the other fixed counters in terms of counting and
sampling modes (except for the hardware logic for event accumulation).

According to commit 6017608936c1 ("perf/x86/intel: Add Icelake support"), KVM
or any other perf in-kernel user needs to reprogram fixed counter 3 via the
kernel-defined TopDown slots event in order to use the real fixed counter 3
on the host.

Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Dapeng Mi
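Before the diff, a rough sketch (not part of the patch) of what "reprogram
fixed counter 3 via the kernel-defined TopDown slots event" looks like for an
in-kernel user, using the same event=0x00, umask=0x04 encoding that the new
PSEUDO_ARCH_TOPDOWN_SLOTS entry below maps to:

/*
 * Illustrative sketch only: count slots on fixed counter 3 from kernel
 * space by requesting the kernel-defined TopDown slots event, raw
 * config 0x0400 (event=0x00, umask=0x04), with the pre-series
 * five-argument perf_event_create_kernel_counter() signature.
 */
static struct perf_event *count_slots_on_fixed_ctr3(void)
{
	struct perf_event_attr attr = {
		.type	= PERF_TYPE_RAW,
		.size	= sizeof(attr),
		.config	= 0x0400,
		.pinned	= 1,
	};

	return perf_event_create_kernel_counter(&attr, -1, current,
						NULL, NULL);
}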
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/vmx/pmu_intel.c    | 10 ++++++++++
 arch/x86/kvm/x86.c              |  4 ++--
 3 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9f57aa33798b..057382249d39 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -509,7 +509,7 @@ struct kvm_pmc {
 #define KVM_INTEL_PMC_MAX_GENERIC	8
 #define MSR_ARCH_PERFMON_PERFCTR_MAX	(MSR_ARCH_PERFMON_PERFCTR0 + KVM_INTEL_PMC_MAX_GENERIC - 1)
 #define MSR_ARCH_PERFMON_EVENTSEL_MAX	(MSR_ARCH_PERFMON_EVENTSEL0 + KVM_INTEL_PMC_MAX_GENERIC - 1)
-#define KVM_PMC_MAX_FIXED	3
+#define KVM_PMC_MAX_FIXED	4
 #define MSR_ARCH_PERFMON_FIXED_CTR_MAX	(MSR_ARCH_PERFMON_FIXED_CTR0 + KVM_PMC_MAX_FIXED - 1)
 #define KVM_AMD_PMC_MAX_GENERIC	6
 struct kvm_pmu {
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 7322f0c18565..044d61aa63dc 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -45,6 +45,14 @@ enum intel_pmu_architectural_events {
 	 * core crystal clock or the bus clock (yeah, "architectural").
 	 */
 	PSEUDO_ARCH_REFERENCE_CYCLES = NR_REAL_INTEL_ARCH_EVENTS,
+	/*
+	 * Pseudo-architectural event used to implement IA32_FIXED_CTR3, a.k.a.
+	 * TopDown slots. The topdown slots event counts the total number of
+	 * available slots for an unhalted logical processor. The topdown slots
+	 * event together with the PERF_METRICS MSR provides support for the
+	 * topdown micro-architecture analysis method.
+	 */
+	PSEUDO_ARCH_TOPDOWN_SLOTS,

 	NR_INTEL_ARCH_EVENTS,
 };

@@ -61,6 +69,7 @@ static struct {
 	[INTEL_ARCH_BRANCHES_MISPREDICTED]	= { 0xc5, 0x00 },
 	[INTEL_ARCH_TOPDOWN_SLOTS]		= { 0xa4, 0x01 },
 	[PSEUDO_ARCH_REFERENCE_CYCLES]		= { 0x00, 0x03 },
+	[PSEUDO_ARCH_TOPDOWN_SLOTS]		= { 0x00, 0x04 },
 };

 /* mapping between fixed pmc index and intel_arch_events array */
@@ -68,6 +77,7 @@ static int fixed_pmc_events[] = {
 	[0] = INTEL_ARCH_INSTRUCTIONS_RETIRED,
 	[1] = INTEL_ARCH_CPU_CYCLES,
 	[2] = PSEUDO_ARCH_REFERENCE_CYCLES,
+	[3] = PSEUDO_ARCH_TOPDOWN_SLOTS,
 };

 static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e4a939471df1..95b1ac3bc0b6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1459,7 +1459,7 @@ static const u32 msrs_to_save_base[] = {
 static const u32 msrs_to_save_pmu[] = {
 	MSR_ARCH_PERFMON_FIXED_CTR0, MSR_ARCH_PERFMON_FIXED_CTR1,
-	MSR_ARCH_PERFMON_FIXED_CTR0 + 2,
+	MSR_ARCH_PERFMON_FIXED_CTR2, MSR_ARCH_PERFMON_FIXED_CTR3,
 	MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_CORE_PERF_GLOBAL_STATUS,
 	MSR_CORE_PERF_GLOBAL_CTRL, MSR_CORE_PERF_GLOBAL_OVF_CTRL,
 	MSR_IA32_PEBS_ENABLE, MSR_IA32_DS_AREA, MSR_PEBS_DATA_CFG,
@@ -7180,7 +7180,7 @@ static void kvm_init_msr_lists(void)
 {
 	unsigned i;

-	BUILD_BUG_ON_MSG(KVM_PMC_MAX_FIXED != 3,
+	BUILD_BUG_ON_MSG(KVM_PMC_MAX_FIXED != 4,
 			 "Please update the fixed PMCs in msrs_to_save_pmu[]");

 	num_msrs_to_save = 0;
From patchwork Tue Aug 22 05:11:30 2023
X-Patchwork-Id: 13360156
From: Dapeng Mi
Subject: [PATCH RFC v3 03/13] perf/core: Add function perf_event_group_leader_check()
Date: Tue, 22 Aug 2023 13:11:30 +0800
Message-Id: <20230822051140.512879-4-dapeng1.mi@linux.intel.com>

Extract the group-leader checking code from sys_perf_event_open() into a new
function, perf_event_group_leader_check(). A subsequent change adds a new
function, perf_event_create_group_kernel_counters(), which is used to create
group events in kernel space; that function needs to perform the same checks
on the group-leader event that sys_perf_event_open() does. Extracting the
checking code into a separate function avoids duplicating it.

Signed-off-by: Dapeng Mi
---
 kernel/events/core.c | 143 +++++++++++++++++++++++--------------------
 1 file changed, 78 insertions(+), 65 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 78ae7b6f90fd..616391158d7c 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -12324,6 +12324,81 @@ perf_check_permission(struct perf_event_attr *attr, struct task_struct *task)
 	return is_capable || ptrace_may_access(task, ptrace_mode);
 }

+static int perf_event_group_leader_check(struct perf_event *group_leader,
+					 struct perf_event *event,
+					 struct perf_event_attr *attr,
+					 struct perf_event_context *ctx,
+					 struct pmu **pmu,
+					 int *move_group)
+{
+	if (!group_leader)
+		return 0;
+
+	/*
+	 * Do not allow a recursive hierarchy (this new sibling
+	 * becoming part of another group-sibling):
+	 */
+	if (group_leader->group_leader != group_leader)
+		return -EINVAL;
+
+	/* All events in a group should have the same clock */
+	if (group_leader->clock != event->clock)
+		return -EINVAL;
+
+	/*
+	 * Make sure we're both events for the same CPU;
+	 * grouping events for different CPUs is broken; since
+	 * you can never concurrently schedule them anyhow.
+	 */
+	if (group_leader->cpu != event->cpu)
+		return -EINVAL;
+
+	/*
+	 * Make sure we're both on the same context; either task or cpu.
+	 */
+	if (group_leader->ctx != ctx)
+		return -EINVAL;
+
+	/*
+	 * Only a group leader can be exclusive or pinned
+	 */
+	if (attr->exclusive || attr->pinned)
+		return -EINVAL;
+
+	if (is_software_event(event) &&
+	    !in_software_context(group_leader)) {
+		/*
+		 * If the event is a sw event, but the group_leader
+		 * is on hw context.
+		 *
+		 * Allow the addition of software events to hw
+		 * groups, this is safe because software events
+		 * never fail to schedule.
+		 *
+		 * Note the comment that goes with struct
+		 * perf_event_pmu_context.
+		 */
+		*pmu = group_leader->pmu_ctx->pmu;
+	} else if (!is_software_event(event)) {
+		if (is_software_event(group_leader) &&
+		    (group_leader->group_caps & PERF_EV_CAP_SOFTWARE)) {
+			/*
+			 * In case the group is a pure software group, and we
+			 * try to add a hardware event, move the whole group to
+			 * the hardware context.
+			 */
+			*move_group = 1;
+		}
+
+		/* Don't allow group of multiple hw events from different pmus */
+		if (!in_software_context(group_leader) &&
+		    group_leader->pmu_ctx->pmu != *pmu)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
 /**
  * sys_perf_event_open - open a performance event, associate it to a task/cpu
  *
@@ -12518,71 +12593,9 @@ SYSCALL_DEFINE5(perf_event_open,
 		}
 	}

-	if (group_leader) {
-		err = -EINVAL;
-
-		/*
-		 * Do not allow a recursive hierarchy (this new sibling
-		 * becoming part of another group-sibling):
-		 */
-		if (group_leader->group_leader != group_leader)
-			goto err_locked;
-
-		/* All events in a group should have the same clock */
-		if (group_leader->clock != event->clock)
-			goto err_locked;
-
-		/*
-		 * Make sure we're both events for the same CPU;
-		 * grouping events for different CPUs is broken; since
-		 * you can never concurrently schedule them anyhow.
-		 */
-		if (group_leader->cpu != event->cpu)
-			goto err_locked;
-
-		/*
-		 * Make sure we're both on the same context; either task or cpu.
-		 */
-		if (group_leader->ctx != ctx)
-			goto err_locked;
-
-		/*
-		 * Only a group leader can be exclusive or pinned
-		 */
-		if (attr.exclusive || attr.pinned)
-			goto err_locked;
-
-		if (is_software_event(event) &&
-		    !in_software_context(group_leader)) {
-			/*
-			 * If the event is a sw event, but the group_leader
-			 * is on hw context.
-			 *
-			 * Allow the addition of software events to hw
-			 * groups, this is safe because software events
-			 * never fail to schedule.
-			 *
-			 * Note the comment that goes with struct
-			 * perf_event_pmu_context.
-			 */
-			pmu = group_leader->pmu_ctx->pmu;
-		} else if (!is_software_event(event)) {
-			if (is_software_event(group_leader) &&
-			    (group_leader->group_caps & PERF_EV_CAP_SOFTWARE)) {
-				/*
-				 * In case the group is a pure software group, and we
-				 * try to add a hardware event, move the whole group to
-				 * the hardware context.
-				 */
-				move_group = 1;
-			}
-
-			/* Don't allow group of multiple hw events from different pmus */
-			if (!in_software_context(group_leader) &&
-			    group_leader->pmu_ctx->pmu != pmu)
-				goto err_locked;
-		}
-	}
+	err = perf_event_group_leader_check(group_leader, event, &attr, ctx,
+					    &pmu, &move_group);
+	if (err)
+		goto err_locked;

 	/*
 	 * Now that we're certain of the pmu; find the pmu_ctx.
From patchwork Tue Aug 22 05:11:31 2023
X-Patchwork-Id: 13360157
From: Dapeng Mi
Subject: [PATCH RFC v3 04/13] perf/core: Add function perf_event_move_group()
Date: Tue, 22 Aug 2023 13:11:31 +0800
Message-Id: <20230822051140.512879-5-dapeng1.mi@linux.intel.com>

Extract the group-moving code from sys_perf_event_open() into a new function,
perf_event_move_group(). A subsequent change adds a new function,
perf_event_create_group_kernel_counters(), which is used to create group
events in kernel space; that function needs to perform the same group move
for the group-leader event that sys_perf_event_open() does. Extracting the
moving code into a separate function avoids duplicating it.
Signed-off-by: Dapeng Mi
---
 kernel/events/core.c | 82 ++++++++++++++++++++++++--------------------
 1 file changed, 45 insertions(+), 37 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 616391158d7c..15eb82d1a010 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -12399,6 +12399,48 @@ static int perf_event_group_leader_check(struct perf_event *group_leader,
 	return 0;
 }

+static void perf_event_move_group(struct perf_event *group_leader,
+				  struct perf_event_pmu_context *pmu_ctx,
+				  struct perf_event_context *ctx)
+{
+	struct perf_event *sibling;
+
+	perf_remove_from_context(group_leader, 0);
+	put_pmu_ctx(group_leader->pmu_ctx);
+
+	for_each_sibling_event(sibling, group_leader) {
+		perf_remove_from_context(sibling, 0);
+		put_pmu_ctx(sibling->pmu_ctx);
+	}
+
+	/*
+	 * Install the group siblings before the group leader.
+	 *
+	 * Because a group leader will try and install the entire group
+	 * (through the sibling list, which is still in-tact), we can
+	 * end up with siblings installed in the wrong context.
+	 *
+	 * By installing siblings first we NO-OP because they're not
+	 * reachable through the group lists.
+	 */
+	for_each_sibling_event(sibling, group_leader) {
+		sibling->pmu_ctx = pmu_ctx;
+		get_pmu_ctx(pmu_ctx);
+		perf_event__state_init(sibling);
+		perf_install_in_context(ctx, sibling, sibling->cpu);
+	}
+
+	/*
+	 * Removing from the context ends up with disabled
+	 * event. What we want here is event in the initial
+	 * startup state, ready to be add into new context.
+	 */
+	group_leader->pmu_ctx = pmu_ctx;
+	get_pmu_ctx(pmu_ctx);
+	perf_event__state_init(group_leader);
+	perf_install_in_context(ctx, group_leader, group_leader->cpu);
+}
+
 /**
  * sys_perf_event_open - open a performance event, associate it to a task/cpu
  *
@@ -12414,7 +12456,7 @@ SYSCALL_DEFINE5(perf_event_open,
 {
 	struct perf_event *group_leader = NULL, *output_event = NULL;
 	struct perf_event_pmu_context *pmu_ctx;
-	struct perf_event *event, *sibling;
+	struct perf_event *event;
 	struct perf_event_attr attr;
 	struct perf_event_context *ctx;
 	struct file *event_file = NULL;
@@ -12646,42 +12688,8 @@ SYSCALL_DEFINE5(perf_event_open,
 	 * where we start modifying current state.
 	 */

-	if (move_group) {
-		perf_remove_from_context(group_leader, 0);
-		put_pmu_ctx(group_leader->pmu_ctx);
-
-		for_each_sibling_event(sibling, group_leader) {
-			perf_remove_from_context(sibling, 0);
-			put_pmu_ctx(sibling->pmu_ctx);
-		}
-
-		/*
-		 * Install the group siblings before the group leader.
-		 *
-		 * Because a group leader will try and install the entire group
-		 * (through the sibling list, which is still in-tact), we can
-		 * end up with siblings installed in the wrong context.
-		 *
-		 * By installing siblings first we NO-OP because they're not
-		 * reachable through the group lists.
-		 */
-		for_each_sibling_event(sibling, group_leader) {
-			sibling->pmu_ctx = pmu_ctx;
-			get_pmu_ctx(pmu_ctx);
-			perf_event__state_init(sibling);
-			perf_install_in_context(ctx, sibling, sibling->cpu);
-		}
-
-		/*
-		 * Removing from the context ends up with disabled
-		 * event. What we want here is event in the initial
-		 * startup state, ready to be add into new context.
-		 */
-		group_leader->pmu_ctx = pmu_ctx;
-		get_pmu_ctx(pmu_ctx);
-		perf_event__state_init(group_leader);
-		perf_install_in_context(ctx, group_leader, group_leader->cpu);
-	}
+	if (move_group)
+		perf_event_move_group(group_leader, pmu_ctx, ctx);

 	/*
 	 * Precalculate sample_data sizes; do while holding ctx::mutex such

From patchwork Tue Aug 22 05:11:32 2023
X-Patchwork-Id: 13360158
From: Dapeng Mi
Subject: [PATCH RFC v3 05/13] perf/core: Add *group_leader for perf_event_create_group_kernel_counters()
Date: Tue, 22 Aug 2023 13:11:32 +0800
Message-Id: <20230822051140.512879-6-dapeng1.mi@linux.intel.com>

Add a new argument, *group_leader, to
perf_event_create_group_kernel_counters(), so that group events can be
created from kernel space just as they can from user space.

Current perf logic requires that a perf event group be created to handle
TopDown metrics profiling. To support the TopDown metrics feature in KVM,
kernel space also needs the capability to create group events.

Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Dapeng Mi
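For illustration, a rough sketch (not code from this series) of how a
kernel-space user such as KVM could build a slots-led group with the new
argument. The raw configs are the kernel-defined slots encoding (0x0400, from
patch 02) and the virtual metrics pseudo-encoding (0x1100, introduced later
in this series); the function name is hypothetical.

/*
 * Illustrative sketch only: create a slots group leader and attach a
 * sibling event to it from kernel space via the new group_leader
 * argument. Error handling is deliberately minimal.
 */
static struct perf_event *create_topdown_group(void)
{
	struct perf_event_attr attr = {
		.type	= PERF_TYPE_RAW,
		.size	= sizeof(attr),
		.config	= 0x0400,		/* TopDown slots */
	};
	struct perf_event *leader, *sibling;

	leader = perf_event_create_kernel_counter(&attr, -1, current,
						  NULL, NULL, NULL);
	if (IS_ERR(leader))
		return leader;

	attr.config = 0x1100;			/* virtual metrics */
	sibling = perf_event_create_kernel_counter(&attr, -1, current,
						   leader, NULL, NULL);
	if (IS_ERR(sibling)) {
		perf_event_release_kernel(leader);
		return ERR_CAST(sibling);
	}

	return leader;
}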
---
 arch/x86/kernel/cpu/resctrl/pseudo_lock.c |  4 ++--
 arch/x86/kvm/pmu.c                        |  2 +-
 arch/x86/kvm/vmx/pmu_intel.c              |  4 ++--
 include/linux/perf_event.h                |  1 +
 kernel/events/core.c                      | 17 ++++++++++++++++-
 kernel/events/hw_breakpoint.c             |  4 ++--
 kernel/events/hw_breakpoint_test.c        |  2 +-
 kernel/watchdog_perf.c                    |  2 +-
 8 files changed, 26 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
index 458cb7419502..6494b2701204 100644
--- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
+++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
@@ -952,12 +952,12 @@ static int measure_residency_fn(struct perf_event_attr *miss_attr,
 	u64 tmp;

 	miss_event = perf_event_create_kernel_counter(miss_attr, plr->cpu,
-						      NULL, NULL, NULL);
+						      NULL, NULL, NULL, NULL);
 	if (IS_ERR(miss_event))
 		goto out;

 	hit_event = perf_event_create_kernel_counter(hit_attr, plr->cpu,
-						     NULL, NULL, NULL);
+						     NULL, NULL, NULL, NULL);
 	if (IS_ERR(hit_event))
 		goto out_miss;

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index edb89b51b383..760d293f4a4a 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -221,7 +221,7 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 		attr.precise_ip = pmc_get_pebs_precise_level(pmc);
 	}

-	event = perf_event_create_kernel_counter(&attr, -1, current,
+	event = perf_event_create_kernel_counter(&attr, -1, current, NULL,
 						 kvm_perf_overflow, pmc);
 	if (IS_ERR(event)) {
 		pr_debug_ratelimited("kvm_pmu: event creation failed %ld for pmc->idx = %d\n",
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 044d61aa63dc..9bf80fee34fb 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -302,8 +302,8 @@ int intel_pmu_create_guest_lbr_event(struct kvm_vcpu *vcpu)
 		return 0;
 	}

-	event = perf_event_create_kernel_counter(&attr, -1,
-						 current, NULL, NULL);
+	event = perf_event_create_kernel_counter(&attr, -1, current,
+						 NULL, NULL, NULL);
 	if (IS_ERR(event)) {
 		pr_debug_ratelimited("%s: failed %ld\n",
 				     __func__, PTR_ERR(event));
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 2166a69e3bf2..c182f811f5f8 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1102,6 +1102,7 @@ extern struct perf_event *
 perf_event_create_kernel_counter(struct perf_event_attr *attr,
 				 int cpu,
 				 struct task_struct *task,
+				 struct perf_event *group_leader,
 				 perf_overflow_handler_t callback,
 				 void *context);
 extern void perf_pmu_migrate_context(struct pmu *pmu,
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 15eb82d1a010..a3af2e740dea 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -12754,12 +12754,14 @@ SYSCALL_DEFINE5(perf_event_open,
  * @attr: attributes of the counter to create
  * @cpu: cpu in which the counter is bound
  * @task: task to profile (NULL for percpu)
+ * @group_leader: the group leader event of the created event
  * @overflow_handler: callback to trigger when we hit the event
  * @context: context data could be used in overflow_handler callback
  */
 struct perf_event *
 perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
 				 struct task_struct *task,
+				 struct perf_event *group_leader,
 				 perf_overflow_handler_t overflow_handler,
 				 void *context)
 {
@@ -12767,6 +12769,7 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
 	struct perf_event_context *ctx;
 	struct perf_event *event;
 	struct pmu *pmu;
+	int move_group = 0;
 	int err;

 	/*
@@ -12776,7 +12779,11 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
 	if (attr->aux_output)
 		return ERR_PTR(-EINVAL);

-	event = perf_event_alloc(attr, cpu, task, NULL, NULL,
+	if (task && group_leader &&
+	    group_leader->attr.inherit != attr->inherit)
+		return ERR_PTR(-EINVAL);
+
+	event = perf_event_alloc(attr, cpu, task, group_leader, NULL,
 				 overflow_handler, context, -1);
 	if (IS_ERR(event)) {
 		err = PTR_ERR(event);
@@ -12806,6 +12813,11 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
 		goto err_unlock;
 	}

+	err = perf_event_group_leader_check(group_leader, event, attr, ctx,
+					    &pmu, &move_group);
+	if (err)
+		goto err_unlock;
+
 	pmu_ctx = find_get_pmu_context(pmu, ctx, event);
 	if (IS_ERR(pmu_ctx)) {
 		err = PTR_ERR(pmu_ctx);
@@ -12833,6 +12845,9 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
 		goto err_pmu_ctx;
 	}

+	if (move_group)
+		perf_event_move_group(group_leader, pmu_ctx, ctx);
+
 	perf_install_in_context(ctx, event, event->cpu);
 	perf_unpin_context(ctx);
 	mutex_unlock(&ctx->mutex);
diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c
index c3797701339c..65b5b1421e62 100644
--- a/kernel/events/hw_breakpoint.c
+++ b/kernel/events/hw_breakpoint.c
@@ -771,7 +771,7 @@ register_user_hw_breakpoint(struct perf_event_attr *attr,
 			    void *context,
 			    struct task_struct *tsk)
 {
-	return perf_event_create_kernel_counter(attr, -1, tsk, triggered,
+	return perf_event_create_kernel_counter(attr, -1, tsk, NULL, triggered,
 						context);
 }
 EXPORT_SYMBOL_GPL(register_user_hw_breakpoint);
@@ -881,7 +881,7 @@ register_wide_hw_breakpoint(struct perf_event_attr *attr,

 	cpus_read_lock();
 	for_each_online_cpu(cpu) {
-		bp = perf_event_create_kernel_counter(attr, cpu, NULL,
+		bp = perf_event_create_kernel_counter(attr, cpu, NULL, NULL,
 						      triggered, context);
 		if (IS_ERR(bp)) {
 			err = PTR_ERR(bp);
diff --git a/kernel/events/hw_breakpoint_test.c b/kernel/events/hw_breakpoint_test.c
index 2cfeeecf8de9..694db7645676 100644
--- a/kernel/events/hw_breakpoint_test.c
+++ b/kernel/events/hw_breakpoint_test.c
@@ -39,7 +39,7 @@ static struct perf_event *register_test_bp(int cpu, struct task_struct *tsk, int
 	attr.bp_addr = (unsigned long)&break_vars[idx];
 	attr.bp_len = HW_BREAKPOINT_LEN_1;
 	attr.bp_type = HW_BREAKPOINT_RW;
-	return perf_event_create_kernel_counter(&attr, cpu, tsk, NULL, NULL);
+	return perf_event_create_kernel_counter(&attr, cpu, tsk, NULL, NULL, NULL);
 }

 static void unregister_test_bp(struct perf_event **bp)
diff --git a/kernel/watchdog_perf.c b/kernel/watchdog_perf.c
index 8ea00c4a24b2..f8a52c4df079 100644
--- a/kernel/watchdog_perf.c
+++ b/kernel/watchdog_perf.c
@@ -120,7 +120,7 @@ static int hardlockup_detector_event_create(void)
 	wd_attr->sample_period = hw_nmi_get_sample_period(watchdog_thresh);

 	/* Try to register using hardware perf events */
-	evt = perf_event_create_kernel_counter(wd_attr, cpu, NULL,
+	evt = perf_event_create_kernel_counter(wd_attr, cpu, NULL, NULL,
 					       watchdog_overflow_callback, NULL);
 	if (IS_ERR(evt)) {
 		pr_debug("Perf event create on CPU %d failed with %ld\n", cpu,
From patchwork Tue Aug 22 05:11:33 2023
X-Patchwork-Id: 13360159
From: Dapeng Mi
Subject: [PATCH RFC v3 06/13] perf/x86: Fix typos and inconsistent indents in perf_event header
Date: Tue, 22 Aug 2023 13:11:33 +0800
Message-Id: <20230822051140.512879-7-dapeng1.mi@linux.intel.com>

There is one typo, and there are some inconsistent indents, in the
perf_event.h header file. Fix them.

Signed-off-by: Dapeng Mi
---
 arch/x86/include/asm/perf_event.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 85a9fd5a3ec3..63e1ce1f4b27 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -386,15 +386,15 @@ static inline bool is_topdown_idx(int idx)
  *
  * With this fake counter assigned, the guest LBR event user (such as KVM),
  * can program the LBR registers on its own, and we don't actually do anything
- * with then in the host context.
+ * with them in the host context.
  */
-#define INTEL_PMC_IDX_FIXED_VLBR (GLOBAL_STATUS_LBRS_FROZEN_BIT)
+#define INTEL_PMC_IDX_FIXED_VLBR	(GLOBAL_STATUS_LBRS_FROZEN_BIT)

 /*
  * Pseudo-encoding the guest LBR event as event=0x00,umask=0x1b,
  * since it would claim bit 58 which is effectively Fixed26.
  */
-#define INTEL_FIXED_VLBR_EVENT 0x1b00
+#define INTEL_FIXED_VLBR_EVENT	0x1b00

 /*
  * Adaptive PEBS v4

From patchwork Tue Aug 22 05:11:34 2023
X-Patchwork-Id: 13360160
From: Dapeng Mi
Subject: [PATCH RFC v3 07/13] perf/x86: Add constraint for guest perf metrics event
Date: Tue, 22 Aug 2023 13:11:34 +0800
Message-Id: <20230822051140.512879-8-dapeng1.mi@linux.intel.com>

When a guest wants to use the PERF_METRICS MSR, a virtual metrics event needs
to be created in the perf subsystem so that the guest can have exclusive
ownership of PERF_METRICS.
Introduce a new vmetrics constraint, so that the virtual metrics event can be
coupled with the slots event as an event group that takes part in host perf
scheduling. Since guest metric events are always recognized as the vCPU
process's events on the host, they are time-sharing multiplexed with other
host metric events; we therefore choose bit 48 (INTEL_PMC_IDX_METRIC_BASE) as
the index of this virtual metrics event.

Co-developed-by: Yang Weijiang
Signed-off-by: Yang Weijiang
Signed-off-by: Dapeng Mi
---
 arch/x86/events/intel/core.c      | 28 +++++++++++++++++++++-------
 arch/x86/events/perf_event.h      |  1 +
 arch/x86/include/asm/perf_event.h | 15 +++++++++++++++
 3 files changed, 37 insertions(+), 7 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 2a284ba951b7..60a2384cd936 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3147,17 +3147,26 @@ intel_bts_constraints(struct perf_event *event)
 	return NULL;
 }

+static struct event_constraint *intel_virt_event_constraints[] __read_mostly = {
+	&vlbr_constraint,
+	&vmetrics_constraint,
+};
+
 /*
- * Note: matches a fake event, like Fixed2.
+ * Note: matches a virtual event, like vmetrics.
  */
 static struct event_constraint *
-intel_vlbr_constraints(struct perf_event *event)
+intel_virt_constraints(struct perf_event *event)
 {
-	struct event_constraint *c = &vlbr_constraint;
+	int i;
+	struct event_constraint *c;

-	if (unlikely(constraint_match(c, event->hw.config))) {
-		event->hw.flags |= c->flags;
-		return c;
+	for (i = 0; i < ARRAY_SIZE(intel_virt_event_constraints); i++) {
+		c = intel_virt_event_constraints[i];
+		if (unlikely(constraint_match(c, event->hw.config))) {
+			event->hw.flags |= c->flags;
+			return c;
+		}
 	}

 	return NULL;
@@ -3357,7 +3366,7 @@ __intel_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
 {
 	struct event_constraint *c;

-	c = intel_vlbr_constraints(event);
+	c = intel_virt_constraints(event);
 	if (c)
 		return c;

@@ -5349,6 +5358,11 @@ static struct attribute *spr_tsx_events_attrs[] = {
 	NULL,
 };

+struct event_constraint vmetrics_constraint =
+	__EVENT_CONSTRAINT(INTEL_FIXED_VMETRICS_EVENT,
+			   (1ULL << INTEL_PMC_IDX_FIXED_VMETRICS),
+			   FIXED_EVENT_FLAGS, 1, 0, 0);
+
 static ssize_t freeze_on_smi_show(struct device *cdev,
 				  struct device_attribute *attr,
 				  char *buf)
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index d6de4487348c..895c572f379c 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1482,6 +1482,7 @@ void reserve_lbr_buffers(void);

 extern struct event_constraint bts_constraint;
 extern struct event_constraint vlbr_constraint;
+extern struct event_constraint vmetrics_constraint;

 void intel_pmu_enable_bts(u64 config);

diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 63e1ce1f4b27..d767807aae91 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -390,6 +390,21 @@ static inline bool is_topdown_idx(int idx)
  */
 #define INTEL_PMC_IDX_FIXED_VLBR	(GLOBAL_STATUS_LBRS_FROZEN_BIT)

+/*
+ * We model guest TopDown metrics event tracing similarly.
+ *
+ * Guest metric events are recognized as vCPU process's events on host, they
+ * would be time-sharing multiplexed with other host metric events, so that
+ * we choose bit 48 (INTEL_PMC_IDX_METRIC_BASE) as the index of virtual
+ * metrics event.
+ */
+#define INTEL_PMC_IDX_FIXED_VMETRICS	(INTEL_PMC_IDX_METRIC_BASE)
+
+/*
+ * Pseudo-encoding the guest metrics event as event=0x00,umask=0x11,
+ * since it would claim bit 48 which is effectively Fixed16.
+ */
+#define INTEL_FIXED_VMETRICS_EVENT	0x1100
 /*
  * Pseudo-encoding the guest LBR event as event=0x00,umask=0x1b,
  * since it would claim bit 58 which is effectively Fixed26.
  */
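To make the Fixed16 arithmetic above concrete, a small illustrative check
(not part of the patch): both pseudo-encodings put the claimed fixed index
plus one into the umask field.

/* Illustrative only: umask == (claimed bit - INTEL_PMC_IDX_FIXED) + 1. */
_Static_assert((0x1100 >> 8) == (48 - 32) + 1,
	       "vmetrics umask 0x11 encodes bit 48, i.e. Fixed16");
_Static_assert((0x1b00 >> 8) == (58 - 32) + 1,
	       "vlbr umask 0x1b encodes bit 58, i.e. Fixed26");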
From patchwork Tue Aug 22 05:11:35 2023
X-Patchwork-Id: 13360161
From: Dapeng Mi
Subject: [PATCH RFC v3 08/13] perf/core: Add new function perf_event_topdown_metrics()
Date: Tue, 22 Aug 2023 13:11:35 +0800
Message-Id: <20230822051140.512879-9-dapeng1.mi@linux.intel.com>

Add a new function, perf_event_topdown_metrics().

This new function is quite similar to perf_event_period(), but it writes the
slots count and the PERF_METRICS raw data, rather than the sample period,
into the perf system. When the guest restores the FIXED_CTR3 and PERF_METRICS
MSRs during sched-in, KVM needs to trap the MSR writes and propagate the
guest values into the corresponding perf events, just as perf_event_period()
does for the period.

Initially we tried to reuse perf_event_period() to set the slots/metrics
values, but that turned out to be quite hard: perf_event_period() works only
on sampling events, while the slots event and the metric events in TopDown
mode are all non-sampling events, and there is a sampling-event check plus a
lot of sample-period-related checking and setup throughout the
perf_event_period() call chain. Reusing it would require if-else special
cases across the entire chain and even a rename, which would thoroughly mess
up perf_event_period(). Thus we chose to create a new function,
perf_event_topdown_metrics(), to set the slots/metrics values; this keeps
both the logic and the code clearer.

Signed-off-by: Dapeng Mi
---
 include/linux/perf_event.h | 13 ++++++++
 kernel/events/core.c       | 62 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 75 insertions(+)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index c182f811f5f8..3de9c4a9c2d8 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1057,6 +1057,11 @@ perf_cgroup_from_task(struct task_struct *task, struct perf_event_context *ctx)
 }
 #endif /* CONFIG_CGROUP_PERF */

+struct td_metrics {
+	u64	slots;
+	u64	metric;
+};
+
 #ifdef CONFIG_PERF_EVENTS

 extern struct perf_event_context *perf_cpu_task_ctx(void);
@@ -1690,6 +1695,8 @@ extern void perf_event_task_tick(void);
 extern int perf_event_account_interrupt(struct perf_event *event);
 extern int perf_event_period(struct perf_event *event, u64 value);
 extern u64 perf_event_pause(struct perf_event *event, bool reset);
+extern int perf_event_topdown_metrics(struct perf_event *event,
+				      struct td_metrics *value);
 #else /* !CONFIG_PERF_EVENTS: */
 static inline void *
 perf_aux_output_begin(struct perf_output_handle *handle,
@@ -1776,6 +1783,12 @@ static inline u64 perf_event_pause(struct perf_event *event, bool reset)
 {
 	return 0;
 }
+
+static inline int perf_event_topdown_metrics(struct perf_event *event,
+					     struct td_metrics *value)
+{
+	return 0;
+}
 #endif

 #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index a3af2e740dea..781f652f6907 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5776,6 +5776,68 @@ int perf_event_period(struct perf_event *event, u64 value)
 }
 EXPORT_SYMBOL_GPL(perf_event_period);

+static void __perf_event_topdown_metrics(struct perf_event *event,
+					 struct perf_cpu_context *cpuctx,
+					 struct perf_event_context *ctx,
+					 void *info)
+{
+	struct td_metrics *td_metrics = (struct td_metrics *)info;
+	bool active;
+
+	active = (event->state == PERF_EVENT_STATE_ACTIVE);
+	if (active) {
+		perf_pmu_disable(event->pmu);
+		/*
+		 * We could be throttled; unthrottle now to avoid the tick
+		 * trying to unthrottle while we already re-started the event.
+		 */
+		if (event->hw.interrupts == MAX_INTERRUPTS) {
+			event->hw.interrupts = 0;
+			perf_log_throttle(event, 1);
+		}
+		event->pmu->stop(event, PERF_EF_UPDATE);
+	}
+
+	event->hw.saved_slots = td_metrics->slots;
+	event->hw.saved_metric = td_metrics->metric;
+
+	if (active) {
+		event->pmu->start(event, PERF_EF_RELOAD);
+		perf_pmu_enable(event->pmu);
+	}
+}
+
+static int _perf_event_topdown_metrics(struct perf_event *event,
+				       struct td_metrics *value)
+{
+	/*
+	 * Slots event in topdown metrics scenario
+	 * must be non-sampling event.
+	 */
+	if (is_sampling_event(event))
+		return -EINVAL;
+
+	if (!value)
+		return -EINVAL;
+
+	event_function_call(event, __perf_event_topdown_metrics, value);
+
+	return 0;
+}
+
+int perf_event_topdown_metrics(struct perf_event *event, struct td_metrics *value)
+{
+	struct perf_event_context *ctx;
+	int ret;
+
+	ctx = perf_event_ctx_lock(event);
+	ret = _perf_event_topdown_metrics(event, value);
+	perf_event_ctx_unlock(event, ctx);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(perf_event_topdown_metrics);
+
 static const struct file_operations perf_fops;

 static inline int perf_fget_light(int fd, struct fd *p)
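A sketch of the intended caller (hypothetical KVM-side code, not part of this
series): when the guest writes FIXED_CTR3 or PERF_METRICS, the trap handler
forwards both values through the new helper.

/*
 * Illustrative sketch only; the KVM-side function name is an
 * assumption. Forward trapped guest MSR values into the slots event.
 */
static int vcpu_restore_topdown_msrs(struct perf_event *slots_event,
				     u64 guest_slots, u64 guest_metrics)
{
	struct td_metrics value = {
		.slots	= guest_slots,
		.metric	= guest_metrics,
	};

	return perf_event_topdown_metrics(slots_event, &value);
}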
    kvm@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-kernel@vger.kernel.org, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan,
    Yang Weijiang, Dapeng Mi, Dapeng Mi
Subject: [PATCH RFC v3 09/13] perf/x86/intel: Handle KVM virtual metrics event in perf system
Date: Tue, 22 Aug 2023 13:11:36 +0800
Message-Id: <20230822051140.512879-10-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230822051140.512879-1-dapeng1.mi@linux.intel.com>
References: <20230822051140.512879-1-dapeng1.mi@linux.intel.com>

KVM creates a virtual metrics event to claim the PERF_METRICS MSR, but
the perf system can't recognize this virtual metrics event because it
uses a different event code from the known metrics events. Modify the
perf code so that the KVM virtual metrics event can be recognized and
processed by the perf system.

The counter of the virtual metrics event doesn't save a real count value
like other normal events; instead it is used to store the raw data of
the PERF_METRICS MSR, so that KVM can obtain the raw PERF_METRICS data
after the virtual metrics event is disabled.

Signed-off-by: Dapeng Mi
---
 arch/x86/events/intel/core.c | 39 +++++++++++++++++++++++++++---------
 arch/x86/events/perf_event.h |  9 ++++++++-
 2 files changed, 38 insertions(+), 10 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 60a2384cd936..9d53b1c6ac86 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2535,7 +2535,7 @@ static int icl_set_topdown_event_period(struct perf_event *event)
 		hwc->saved_metric = 0;
 	}

-	if ((hwc->saved_slots) && is_slots_event(event)) {
+	if (is_slots_event(event)) {
 		wrmsrl(MSR_CORE_PERF_FIXED_CTR3, hwc->saved_slots);
 		wrmsrl(MSR_PERF_METRICS, hwc->saved_metric);
 	}
@@ -2608,6 +2608,15 @@ static void __icl_update_topdown_event(struct perf_event *event,
 	}
 }

+static inline void __icl_update_vmetrics_event(struct perf_event *event, u64 metrics)
+{
+	/*
+	 * For the guest metrics event, the count would be used to save
+	 * the raw data of PERF_METRICS MSR.
+	 */
+	local64_set(&event->count, metrics);
+}
+
 static void update_saved_topdown_regs(struct perf_event *event, u64 slots,
 				      u64 metrics, int metric_end)
 {
@@ -2627,6 +2636,17 @@ static void update_saved_topdown_regs(struct perf_event *event, u64 slots,
 	}
 }

+static inline void _intel_update_topdown_event(struct perf_event *event,
+					       u64 slots, u64 metrics,
+					       u64 last_slots, u64 last_metrics)
+{
+	if (is_vmetrics_event(event))
+		__icl_update_vmetrics_event(event, metrics);
+	else
+		__icl_update_topdown_event(event, slots, metrics,
+					   last_slots, last_metrics);
+}
+
 /*
  * Update all active Topdown events.
  *
@@ -2654,9 +2674,9 @@ static u64 intel_update_topdown_event(struct perf_event *event, int metric_end)
 		if (!is_topdown_idx(idx))
 			continue;
 		other = cpuc->events[idx];
-		__icl_update_topdown_event(other, slots, metrics,
-					   event ? event->hw.saved_slots : 0,
-					   event ? event->hw.saved_metric : 0);
+		_intel_update_topdown_event(other, slots, metrics,
+					    event ? event->hw.saved_slots : 0,
+					    event ? event->hw.saved_metric : 0);
 	}

 	/*
@@ -2664,9 +2684,9 @@ static u64 intel_update_topdown_event(struct perf_event *event, int metric_end)
 	 * in active_mask e.g. x86_pmu_stop()
 	 */
 	if (event && !test_bit(event->hw.idx, cpuc->active_mask)) {
-		__icl_update_topdown_event(event, slots, metrics,
-					   event->hw.saved_slots,
-					   event->hw.saved_metric);
+		_intel_update_topdown_event(event, slots, metrics,
+					    event->hw.saved_slots,
+					    event->hw.saved_metric);

 		/*
 		 * In x86_pmu_stop(), the event is cleared in active_mask first,
@@ -3847,8 +3867,9 @@ static int core_pmu_hw_config(struct perf_event *event)

 static bool is_available_metric_event(struct perf_event *event)
 {
-	return is_metric_event(event) &&
-	       event->attr.config <= INTEL_TD_METRIC_AVAILABLE_MAX;
+	return (is_metric_event(event) &&
+		event->attr.config <= INTEL_TD_METRIC_AVAILABLE_MAX) ||
+	       is_vmetrics_event(event);
 }

 static inline bool is_mem_loads_event(struct perf_event *event)
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 895c572f379c..e0703f743713 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -105,9 +105,16 @@ static inline bool is_slots_event(struct perf_event *event)
 	return (event->attr.config & INTEL_ARCH_EVENT_MASK) == INTEL_TD_SLOTS;
 }

+static inline bool is_vmetrics_event(struct perf_event *event)
+{
+	return (event->attr.config & INTEL_ARCH_EVENT_MASK) ==
+	       INTEL_FIXED_VMETRICS_EVENT;
+}
+
 static inline bool is_topdown_event(struct perf_event *event)
 {
-	return is_metric_event(event) || is_slots_event(event);
+	return is_metric_event(event) || is_slots_event(event) ||
+	       is_vmetrics_event(event);
 }

 struct amd_nb {

From patchwork Tue Aug 22 05:11:37 2023
X-Patchwork-Submitter: "Mi, Dapeng"
X-Patchwork-Id: 13360163
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini, Peter Zijlstra,
    Arnaldo Carvalho de Melo, Kan Liang, Like Xu, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers,
    Adrian Hunter
Cc: kvm@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-kernel@vger.kernel.org, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan,
    Yang Weijiang, Dapeng Mi, Dapeng Mi
Subject: [PATCH RFC v3 10/13] KVM: x86/pmu: Extend pmc_reprogram_counter() to create group events
Date: Tue, 22 Aug 2023 13:11:37 +0800
Message-Id: <20230822051140.512879-11-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230822051140.512879-1-dapeng1.mi@linux.intel.com>
References: <20230822051140.512879-1-dapeng1.mi@linux.intel.com>

To support the topdown perf metrics feature, the current perf code
creates an event group which couples a slots event, acting as the group
leader, with multiple metric events. To support the topdown metrics
feature in KVM while keeping the perf-system changes small, we follow
this mature mechanism and create an event group in KVM as well. The
event group contains a slots event, which claims fixed counter 3 and
acts as the group leader as the perf system requires, and a virtual
metrics event, which claims the PERF_METRICS MSR. This event group is
scheduled as a whole by the perf system.

Unfortunately pmc_reprogram_counter() can only create a single event per
counter, so this change extends the function with the capability to
create an event group.

Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Dapeng Mi
---
 arch/x86/include/asm/kvm_host.h | 11 +++++-
 arch/x86/kvm/pmu.c              | 64 ++++++++++++++++++++++++++-------
 arch/x86/kvm/pmu.h              | 22 ++++++++----
 arch/x86/kvm/svm/pmu.c          |  2 ++
 arch/x86/kvm/vmx/pmu_intel.c    |  4 +++
 5 files changed, 83 insertions(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 057382249d39..235e24fe66a4 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -490,12 +490,12 @@ enum pmc_type {
 struct kvm_pmc {
 	enum pmc_type type;
 	u8 idx;
+	u8 max_nr_events;
 	bool is_paused;
 	bool intr;
 	u64 counter;
 	u64 prev_counter;
 	u64 eventsel;
-	struct perf_event *perf_event;
 	struct kvm_vcpu *vcpu;
 	/*
 	 * only for creating or reusing perf_event,
 	 * ctrl value for fixed counters.
 	 */
 	u64 current_config;
+	/*
+	 * Non-leader events may need some extra information,
+	 * this field can be used to store this information.
+	 */
+	u64 extra_config;
+	union {
+		struct perf_event *perf_event;
+		DECLARE_FLEX_ARRAY(struct perf_event *, perf_events);
+	};
 };

 /* More counters may conflict with other existing Architectural MSRs */
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 760d293f4a4a..b02a56c77647 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -187,7 +187,7 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 				 bool intr)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
-	struct perf_event *event;
+	struct perf_event *event, *group_leader;
 	struct perf_event_attr attr = {
 		.type = type,
 		.size = sizeof(attr),
@@ -199,6 +199,7 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 		.config = config,
 	};
 	bool pebs = test_bit(pmc->idx, (unsigned long *)&pmu->pebs_enable);
+	unsigned int i, j;

 	attr.sample_period = get_sample_period(pmc, pmc->counter);

@@ -221,36 +222,73 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 		attr.precise_ip = pmc_get_pebs_precise_level(pmc);
 	}

-	event = perf_event_create_kernel_counter(&attr, -1, current, NULL,
-						 kvm_perf_overflow, pmc);
-	if (IS_ERR(event)) {
-		pr_debug_ratelimited("kvm_pmu: event creation failed %ld for pmc->idx = %d\n",
-				     PTR_ERR(event), pmc->idx);
-		return PTR_ERR(event);
+	/*
+	 * To create grouped events, the first created perf_event doesn't
+	 * know it will be the group_leader and may move to an unexpected
+	 * enabling path, thus delay all enablement until after creation,
+	 * not affecting non-grouped events to save one perf interface call.
+	 */
+	if (pmc->max_nr_events > 1)
+		attr.disabled = 1;
+
+	for (i = 0; i < pmc->max_nr_events; i++) {
+		group_leader = i ? pmc->perf_event : NULL;
+		event = perf_event_create_kernel_counter(&attr, -1,
+							 current, group_leader,
+							 kvm_perf_overflow, pmc);
+		if (IS_ERR(event)) {
+			pr_err_ratelimited("kvm_pmu: event %u of pmc %u creation failed %ld\n",
+					   i, pmc->idx, PTR_ERR(event));
+
+			for (j = 0; j < i; j++) {
+				perf_event_release_kernel(pmc->perf_events[j]);
+				pmc->perf_events[j] = NULL;
+				pmc_to_pmu(pmc)->event_count--;
+			}
+
+			return PTR_ERR(event);
+		}
+
+		pmc->perf_events[i] = event;
+		pmc_to_pmu(pmc)->event_count++;
 	}

-	pmc->perf_event = event;
-	pmc_to_pmu(pmc)->event_count++;
 	pmc->is_paused = false;
 	pmc->intr = intr || pebs;
+
+	if (!attr.disabled)
+		return 0;
+
+	for (i = 0; pmc->perf_events[i] && i < pmc->max_nr_events; i++)
+		perf_event_enable(pmc->perf_events[i]);
+
 	return 0;
 }

 static void pmc_pause_counter(struct kvm_pmc *pmc)
 {
 	u64 counter = pmc->counter;
+	unsigned int i;

 	if (!pmc->perf_event || pmc->is_paused)
 		return;

-	/* update counter, reset event value to avoid redundant accumulation */
+	/*
+	 * Update counter, reset event value to avoid redundant
+	 * accumulation. Disable the group leader event first and
+	 * then disable the non-leader events.
+	 */
 	counter += perf_event_pause(pmc->perf_event, true);
+	for (i = 1; pmc->perf_events[i] && i < pmc->max_nr_events; i++)
+		perf_event_pause(pmc->perf_events[i], true);
 	pmc->counter = counter & pmc_bitmask(pmc);
 	pmc->is_paused = true;
 }

 static bool pmc_resume_counter(struct kvm_pmc *pmc)
 {
+	unsigned int i;
+
 	if (!pmc->perf_event)
 		return false;

@@ -264,8 +302,8 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
 	    (!!pmc->perf_event->attr.precise_ip))
 		return false;

-	/* reuse perf_event to serve as pmc_reprogram_counter() does*/
-	perf_event_enable(pmc->perf_event);
+	for (i = 0; pmc->perf_events[i] && i < pmc->max_nr_events; i++)
+		perf_event_enable(pmc->perf_events[i]);
 	pmc->is_paused = false;

 	return true;
@@ -432,7 +470,7 @@ static void reprogram_counter(struct kvm_pmc *pmc)
 	if (pmc->current_config == new_config && pmc_resume_counter(pmc))
 		goto reprogram_complete;

-	pmc_release_perf_event(pmc);
+	pmc_release_perf_event(pmc, false);

 	pmc->current_config = new_config;

diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 7d9ba301c090..3dc0deb83096 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -74,21 +74,31 @@ static inline u64 pmc_read_counter(struct kvm_pmc *pmc)
 	return counter & pmc_bitmask(pmc);
 }

-static inline void pmc_release_perf_event(struct kvm_pmc *pmc)
+static inline void pmc_release_perf_event(struct kvm_pmc *pmc, bool reset)
 {
-	if (pmc->perf_event) {
-		perf_event_release_kernel(pmc->perf_event);
-		pmc->perf_event = NULL;
-		pmc->current_config = 0;
+	unsigned int i;
+
+	if (!pmc->perf_event)
+		return;
+
+	for (i = 0; pmc->perf_events[i] && i < pmc->max_nr_events; i++) {
+		perf_event_release_kernel(pmc->perf_events[i]);
+		pmc->perf_events[i] = NULL;
 		pmc_to_pmu(pmc)->event_count--;
 	}
+
+	if (reset) {
+		pmc->current_config = 0;
+		pmc->extra_config = 0;
+		pmc->max_nr_events = 1;
+	}
 }

 static inline void pmc_stop_counter(struct kvm_pmc *pmc)
 {
 	if (pmc->perf_event) {
 		pmc->counter = pmc_read_counter(pmc);
-		pmc_release_perf_event(pmc);
+		pmc_release_perf_event(pmc, true);
 	}
 }

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index cef5a3d0abd0..861ff79ac614 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -230,6 +230,8 @@ static void amd_pmu_init(struct kvm_vcpu *vcpu)
 		pmu->gp_counters[i].vcpu = vcpu;
 		pmu->gp_counters[i].idx = i;
 		pmu->gp_counters[i].current_config = 0;
+		pmu->gp_counters[i].extra_config = 0;
+		pmu->gp_counters[i].max_nr_events = 1;
 	}
 }

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 9bf80fee34fb..b45396e0a46c 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -628,6 +628,8 @@ static void intel_pmu_init(struct kvm_vcpu *vcpu)
 		pmu->gp_counters[i].vcpu = vcpu;
 		pmu->gp_counters[i].idx = i;
 		pmu->gp_counters[i].current_config = 0;
+		pmu->gp_counters[i].extra_config = 0;
+		pmu->gp_counters[i].max_nr_events = 1;
 	}

 	for (i = 0; i < KVM_PMC_MAX_FIXED; i++) {
@@ -635,6 +637,8 @@ static void intel_pmu_init(struct kvm_vcpu *vcpu)
 		pmu->fixed_counters[i].vcpu = vcpu;
 		pmu->fixed_counters[i].idx = i + INTEL_PMC_IDX_FIXED;
 		pmu->fixed_counters[i].current_config = 0;
+		pmu->fixed_counters[i].extra_config = 0;
+		pmu->fixed_counters[i].max_nr_events = 1;
 	}

 	lbr_desc->records.nr = 0;

From patchwork Tue Aug 22 05:11:38 2023
X-Patchwork-Submitter: "Mi, Dapeng"
X-Patchwork-Id: 13360164
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini, Peter Zijlstra,
    Arnaldo Carvalho de Melo, Kan Liang, Like Xu, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers,
    Adrian Hunter
Cc: kvm@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-kernel@vger.kernel.org, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan,
    Yang Weijiang, Dapeng Mi, Dapeng Mi
Subject: [PATCH RFC v3 11/13] KVM: x86/pmu: Support topdown perf metrics feature
Date: Tue, 22 Aug 2023 13:11:38 +0800
Message-Id: <20230822051140.512879-12-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230822051140.512879-1-dapeng1.mi@linux.intel.com>
References: <20230822051140.512879-1-dapeng1.mi@linux.intel.com>

This patch adds topdown perf metrics support for KVM. Topdown perf
metrics is a feature on Intel CPUs which supports the TopDown
Microarchitecture Analysis (TMA) method. TMA is a structured analysis
methodology to identify critical performance bottlenecks in out-of-order
processors. Details about topdown metrics support on Intel processors
can be found in the "Performance Metrics" section of Intel's SDM.

Intel CPUs use fixed counter 3 and the PERF_METRICS MSR together to
support the topdown metrics feature. Fixed counter 3 counts the elapsed
CPU slots, and PERF_METRICS reports the topdown metrics percentages.
Generally speaking, if KVM only observes fixed counter 3, it has no way
to know whether the guest is counting/sampling a standalone slots event
or profiling the topdown perf metrics.

Fortunately, topdown metrics profiling always manipulates fixed counter
3 and the PERF_METRICS MSR together, in a fixed sequence: the FIXED_CTR3
MSR is written first and then PERF_METRICS follows. So KVM can assume
that topdown metrics profiling is running in the guest once it observes
a PERF_METRICS write.

In the current perf logic, an event group is required to handle topdown
metrics profiling: the group couples a slots event, which acts as the
group leader, with multiple metric events. To coordinate with the perf
topdown-metrics handling logic and reduce the code changes in KVM, we
choose to follow the current mature vPMU PMC emulation framework. The
only difference is that we need to create an event group for fixed
counter 3 and manipulate the FIXED_CTR3 and PERF_METRICS MSRs together,
instead of creating a single event and manipulating only the FIXED_CTR3
MSR.

When the guest writes the PERF_METRICS MSR for the first time, KVM
creates an event group which couples a slots event with a virtual
metrics event. In this event group, the slots event claims the fixed
counter 3 hardware resource and acts as the group leader, as the perf
system requires, while the virtual metrics event claims the PERF_METRICS
MSR. This event group is just like the perf-metrics event group on the
host and is scheduled by the host perf system.

In this proposal, the count of the slots event is calculated and
emulated on the host and returned to the guest just like for other
normal counters, but the metrics event is processed differently: KVM
doesn't calculate the real count of the topdown metrics, it just stores
the raw data of the PERF_METRICS MSR and returns the stored raw data to
the guest directly. Thus the guest gets the real hardware PERF_METRICS
data, which guarantees the calculation accuracy of the topdown metrics.

The whole procedure can be summarized as below:

1. KVM intercepts the PERF_METRICS MSR write and marks fixed counter 3
   as entering topdown profiling mode (sets max_nr_events of fixed
   counter 3 to 2) if it hasn't done so already.
2. If the topdown metrics event group doesn't exist yet, create it
   first, then update the saved slots count and metrics data of the
   group events with the guest values. Finally, enable the events so
   that the guest values are loaded into the hardware FIXED_CTR3 and
   PERF_METRICS MSRs.
3. Modify kvm_pmu_rdpmc() to return the PERF_METRICS MSR raw data to
   the guest directly, as sketched below.
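To illustrate step 3, a minimal guest-side sketch (not part of this
series) of reading and decoding the raw PERF_METRICS value that
kvm_pmu_rdpmc() hands back. It assumes the SDM-defined level-1 layout of
PERF_METRICS (four 8-bit fields, each a fraction of 0xff) and the
metrics RDPMC encoding (ECX bit 29, mirroring the kernel's
INTEL_PMC_FIXED_RDPMC_METRICS); running it requires RDPMC to be
permitted for user space:

#include <stdint.h>
#include <stdio.h>

/* Read a PMC; ECX selects the counter. Bit 29 selects the
 * perf-metrics pseudo-counter. */
static inline uint64_t rdpmc(uint32_t ecx)
{
	uint32_t lo, hi;

	asm volatile("rdpmc" : "=a" (lo), "=d" (hi) : "c" (ecx));
	return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
	static const char *names[4] = {
		"retiring", "bad speculation", "frontend bound", "backend bound"
	};
	uint64_t metrics = rdpmc(1u << 29);	/* raw PERF_METRICS data */
	int i;

	/* Each level-1 metric is an 8-bit fraction of 0xff. */
	for (i = 0; i < 4; i++) {
		uint8_t frac = (metrics >> (i * 8)) & 0xff;

		printf("%-16s %5.1f%%\n", names[i], frac * 100.0 / 0xff);
	}
	return 0;
}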
Signed-off-by: Dapeng Mi
---
 arch/x86/include/asm/kvm_host.h |  6 ++++
 arch/x86/kvm/pmu.c              | 62 +++++++++++++++++++++++++++++++--
 arch/x86/kvm/pmu.h              | 28 +++++++++++++++
 arch/x86/kvm/vmx/capabilities.h |  1 +
 arch/x86/kvm/vmx/pmu_intel.c    | 48 +++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.h          |  5 +++
 arch/x86/kvm/x86.c              |  1 +
 7 files changed, 149 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 235e24fe66a4..d037259c6887 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -487,6 +487,12 @@ enum pmc_type {
 	KVM_PMC_FIXED,
 };

+enum topdown_events {
+	KVM_TD_SLOTS = 0,
+	KVM_TD_METRICS = 1,
+	KVM_TD_EVENTS_MAX = 2,
+};
+
 struct kvm_pmc {
 	enum pmc_type type;
 	u8 idx;
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index b02a56c77647..fad7b2c10bb8 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -182,6 +182,30 @@ static u64 pmc_get_pebs_precise_level(struct kvm_pmc *pmc)
 	return 1;
 }

+static void pmc_setup_td_metrics_events_attr(struct kvm_pmc *pmc,
+					     struct perf_event_attr *attr,
+					     unsigned int event_idx)
+{
+	if (!pmc_is_topdown_metrics_used(pmc))
+		return;
+
+	/*
+	 * Set up the slots event attribute; when the slots event is
+	 * created for guest topdown metrics profiling, the sample
+	 * period must be 0.
+	 */
+	if (event_idx == KVM_TD_SLOTS)
+		attr->sample_period = 0;
+
+	/* Set up the vmetrics event attribute. */
+	if (event_idx == KVM_TD_METRICS) {
+		attr->config = INTEL_FIXED_VMETRICS_EVENT;
+		attr->sample_period = 0;
+		/* Only the group leader event can be pinned. */
+		attr->pinned = false;
+	}
+}
+
 static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 				 bool exclude_user, bool exclude_kernel,
 				 bool intr)
@@ -233,6 +257,8 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,

 	for (i = 0; i < pmc->max_nr_events; i++) {
 		group_leader = i ? pmc->perf_event : NULL;
+		pmc_setup_td_metrics_events_attr(pmc, &attr, i);
+
 		event = perf_event_create_kernel_counter(&attr, -1,
 							 current, group_leader,
 							 kvm_perf_overflow, pmc);
@@ -256,6 +282,12 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 	pmc->is_paused = false;
 	pmc->intr = intr || pebs;

+	if (pmc_is_topdown_metrics_active(pmc)) {
+		pmc_update_topdown_metrics(pmc);
+		/* KVM needs to inject a PMI for PERF_METRICS overflow. */
+		pmc->intr = true;
+	}
+
 	if (!attr.disabled)
 		return 0;

@@ -269,6 +301,7 @@ static void pmc_pause_counter(struct kvm_pmc *pmc)
 {
 	u64 counter = pmc->counter;
 	unsigned int i;
+	u64 data;

 	if (!pmc->perf_event || pmc->is_paused)
 		return;
@@ -279,8 +312,15 @@ static void pmc_pause_counter(struct kvm_pmc *pmc)
 	 * then disable the non-leader events.
 	 */
 	counter += perf_event_pause(pmc->perf_event, true);
-	for (i = 1; pmc->perf_events[i] && i < pmc->max_nr_events; i++)
-		perf_event_pause(pmc->perf_events[i], true);
+	for (i = 1; pmc->perf_events[i] && i < pmc->max_nr_events; i++) {
+		data = perf_event_pause(pmc->perf_events[i], true);
+		/*
+		 * The count of the vmetrics event actually stores the raw
+		 * data of PERF_METRICS; save it into extra_config.
+		 */
+		if (pmc->idx == INTEL_PMC_IDX_FIXED_SLOTS && i == KVM_TD_METRICS)
+			pmc->extra_config = data;
+	}
 	pmc->counter = counter & pmc_bitmask(pmc);
 	pmc->is_paused = true;
 }
@@ -557,6 +597,21 @@ static int kvm_pmu_rdpmc_vmware(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 	return 0;
 }

+static inline int kvm_pmu_read_perf_metrics(struct kvm_vcpu *vcpu,
+					    unsigned int idx, u64 *data)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct kvm_pmc *pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR3);
+
+	if (!pmc) {
+		*data = 0;
+		return 1;
+	}
+
+	*data = pmc->extra_config;
+	return 0;
+}
+
 int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 {
 	bool fast_mode = idx & (1u << 31);
@@ -570,6 +625,9 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 	if (is_vmware_backdoor_pmc(idx))
 		return kvm_pmu_rdpmc_vmware(vcpu, idx, data);

+	if (idx & INTEL_PMC_FIXED_RDPMC_METRICS)
+		return kvm_pmu_read_perf_metrics(vcpu, idx, data);
+
 	pmc = static_call(kvm_x86_pmu_rdpmc_ecx_to_pmc)(vcpu, idx, &mask);
 	if (!pmc)
 		return 1;
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 3dc0deb83096..43abe793c11c 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -257,6 +257,34 @@ static inline bool pmc_is_globally_enabled(struct kvm_pmc *pmc)
 	return test_bit(pmc->idx, (unsigned long *)&pmu->global_ctrl);
 }

+static inline int pmc_is_topdown_metrics_used(struct kvm_pmc *pmc)
+{
+	return (pmc->idx == INTEL_PMC_IDX_FIXED_SLOTS) &&
+	       (pmc->max_nr_events == KVM_TD_EVENTS_MAX);
+}
+
+static inline int pmc_is_topdown_metrics_active(struct kvm_pmc *pmc)
+{
+	return pmc_is_topdown_metrics_used(pmc) &&
+	       pmc->perf_events[KVM_TD_METRICS];
+}
+
+static inline void pmc_update_topdown_metrics(struct kvm_pmc *pmc)
+{
+	struct perf_event *event;
+	int i;
+
+	struct td_metrics td_metrics = {
+		.slots = pmc->counter,
+		.metric = pmc->extra_config,
+	};
+
+	for (i = 0; i < pmc->max_nr_events; i++) {
+		event = pmc->perf_events[i];
+		perf_event_topdown_metrics(event, &td_metrics);
+	}
+}
+
 void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu);
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu);
 int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned pmc, u64 *data);
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 41a4533f9989..d8317552b634 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -22,6 +22,7 @@ extern int __read_mostly pt_mode;
 #define PT_MODE_HOST_GUEST	1

 #define PMU_CAP_FW_WRITES	(1ULL << 13)
+#define PMU_CAP_PERF_METRICS	BIT_ULL(15)
 #define PMU_CAP_LBR_FMT		0x3f

 struct nested_vmx_msrs {
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index b45396e0a46c..04ccb8c6f7e4 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -229,6 +229,9 @@ static bool intel_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
 		ret = (perf_capabilities & PERF_CAP_PEBS_BASELINE) &&
 			((perf_capabilities & PERF_CAP_PEBS_FORMAT) > 3);
 		break;
+	case MSR_PERF_METRICS:
+		ret = intel_pmu_metrics_is_enabled(vcpu) && (pmu->version > 1);
+		break;
 	default:
 		ret = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0) ||
 			get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0) ||
@@ -357,6 +360,43 @@ static bool intel_pmu_handle_lbr_msrs_access(struct kvm_vcpu *vcpu,
 	return true;
 }

+static int intel_pmu_handle_perf_metrics_access(struct kvm_vcpu *vcpu,
+						struct msr_data *msr_info, bool read)
+{
+	u32 index = msr_info->index;
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct kvm_pmc *pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR3);
+
+	if (!pmc || index != MSR_PERF_METRICS)
+		return 1;
+
+	if (read) {
+		msr_info->data = pmc->extra_config;
+	} else {
+		/*
+		 * Save the guest PERF_METRICS data into extra_config; the
+		 * extra_config will be read and written into the
+		 * PERF_METRICS MSR when the event group is created later.
+		 */
+		pmc->extra_config = msr_info->data;
+		if (pmc_is_topdown_metrics_active(pmc)) {
+			pmc_update_topdown_metrics(pmc);
+		} else {
+			/*
+			 * If the slots/vmetrics event group has not been
+			 * created yet, set max_nr_events to 2
+			 * (slots event + vmetrics event), so KVM knows
+			 * topdown metrics profiling is running in the guest
+			 * and the slots/vmetrics event group will be
+			 * created later.
+			 */
+			pmc->max_nr_events = KVM_TD_EVENTS_MAX;
+		}
+	}
+
+	return 0;
+}
+
 static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -376,6 +416,10 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_PEBS_DATA_CFG:
 		msr_info->data = pmu->pebs_data_cfg;
 		break;
+	case MSR_PERF_METRICS:
+		if (intel_pmu_handle_perf_metrics_access(vcpu, msr_info, true))
+			return 1;
+		break;
 	default:
 		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
 		    (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
@@ -438,6 +482,10 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		pmu->pebs_data_cfg = data;
 		break;
+	case MSR_PERF_METRICS:
+		if (intel_pmu_handle_perf_metrics_access(vcpu, msr_info, false))
+			return 1;
+		break;
 	default:
 		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
 		    (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index c2130d2c8e24..63b6dcc360c2 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -670,6 +670,11 @@ static inline bool intel_pmu_lbr_is_enabled(struct kvm_vcpu *vcpu)
 	return !!vcpu_to_lbr_records(vcpu)->nr;
 }

+static inline bool intel_pmu_metrics_is_enabled(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.perf_capabilities & PMU_CAP_PERF_METRICS;
+}
+
 void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu);
 int intel_pmu_create_guest_lbr_event(struct kvm_vcpu *vcpu);
 void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 95b1ac3bc0b6..5d9fde90370a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1463,6 +1463,7 @@ static const u32 msrs_to_save_pmu[] = {
 	MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_CORE_PERF_GLOBAL_STATUS,
 	MSR_CORE_PERF_GLOBAL_CTRL, MSR_CORE_PERF_GLOBAL_OVF_CTRL,
 	MSR_IA32_PEBS_ENABLE, MSR_IA32_DS_AREA, MSR_PEBS_DATA_CFG,
+	MSR_PERF_METRICS,

 	/* This part of MSRs should match KVM_INTEL_PMC_MAX_GENERIC. */
 	MSR_ARCH_PERFMON_PERFCTR0, MSR_ARCH_PERFMON_PERFCTR1,

From patchwork Tue Aug 22 05:11:39 2023
X-Patchwork-Submitter: "Mi, Dapeng"
X-Patchwork-Id: 13360165
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini, Peter Zijlstra,
    Arnaldo Carvalho de Melo, Kan Liang, Like Xu, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers,
    Adrian Hunter
Cc: kvm@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-kernel@vger.kernel.org, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan,
    Yang Weijiang, Dapeng Mi, Dapeng Mi
Subject: [PATCH RFC v3 12/13] KVM: x86/pmu: Handle PERF_METRICS overflow
Date: Tue, 22 Aug 2023 13:11:39 +0800
Message-Id: <20230822051140.512879-13-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230822051140.512879-1-dapeng1.mi@linux.intel.com>
References: <20230822051140.512879-1-dapeng1.mi@linux.intel.com>

When fixed counter 3 overflows, the PMU subsequently triggers a
PERF_METRICS overflow as well. This patch handles the PERF_METRICS
overflow case: after detecting a PERF_METRICS overflow on the host, KVM
injects a PMI into the guest and sets the PERF_METRICS overflow bit in
the guest's PERF_GLOBAL_STATUS MSR.
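For illustration, a hypothetical guest-kernel PMI-handler fragment (not
part of this series) showing what a guest might do with the newly
virtualized overflow bit; the MSR and bit names come from the kernel's
<asm/msr-index.h> and <asm/perf_event.h>, and rdmsrl()/wrmsrl() are the
kernel's MSR accessors:

#include <asm/msr.h>
#include <asm/perf_event.h>

static void guest_handle_topdown_pmi(void)
{
	u64 status;

	rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, status);

	if (status & (1ULL << GLOBAL_STATUS_PERF_METRICS_OVF_BIT)) {
		/*
		 * Per the SDM, SLOTS and PERF_METRICS should be
		 * re-initialized together after a metrics overflow;
		 * with this patch the same recipe works inside a guest.
		 */
		wrmsrl(MSR_CORE_PERF_FIXED_CTR3, 0);
		wrmsrl(MSR_PERF_METRICS, 0);
	}

	/* Ack every overflow bit that was observed. */
	wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, status);
}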
Signed-off-by: Dapeng Mi
---
 arch/x86/events/intel/core.c |  7 ++++++-
 arch/x86/kvm/pmu.c           | 19 +++++++++++++++----
 2 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 9d53b1c6ac86..7a917e61d994 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3042,8 +3042,13 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
 	 * Intel Perf metrics
 	 */
 	if (__test_and_clear_bit(GLOBAL_STATUS_PERF_METRICS_OVF_BIT, (unsigned long *)&status)) {
+		struct perf_event *event = cpuc->events[GLOBAL_STATUS_PERF_METRICS_OVF_BIT];
+
 		handled++;
-		static_call(intel_pmu_update_topdown_event)(NULL);
+		if (event && is_vmetrics_event(event))
+			READ_ONCE(event->overflow_handler)(event, &data, regs);
+		else
+			static_call(intel_pmu_update_topdown_event)(NULL);
 	}

 	/*
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index fad7b2c10bb8..06c815859f77 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -101,7 +101,7 @@ static void kvm_pmi_trigger_fn(struct irq_work *irq_work)
 	kvm_pmu_deliver_pmi(vcpu);
 }

-static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi)
+static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi, bool metrics_of)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
 	bool skip_pmi = false;
@@ -121,7 +121,11 @@ static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi)
 			(unsigned long *)&pmu->global_status);
 		}
 	} else {
-		__set_bit(pmc->idx, (unsigned long *)&pmu->global_status);
+		if (metrics_of)
+			__set_bit(GLOBAL_STATUS_PERF_METRICS_OVF_BIT,
+				  (unsigned long *)&pmu->global_status);
+		else
+			__set_bit(pmc->idx, (unsigned long *)&pmu->global_status);
 	}

 	if (!pmc->intr || skip_pmi)
@@ -141,11 +145,18 @@ static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi)
 	kvm_make_request(KVM_REQ_PMI, pmc->vcpu);
 }

+static inline bool is_vmetrics_event(struct perf_event *event)
+{
+	return (event->attr.config & INTEL_ARCH_EVENT_MASK) ==
+	       INTEL_FIXED_VMETRICS_EVENT;
+}
+
 static void kvm_perf_overflow(struct perf_event *perf_event,
 			      struct perf_sample_data *data,
 			      struct pt_regs *regs)
 {
 	struct kvm_pmc *pmc = perf_event->overflow_handler_context;
+	bool metrics_of = is_vmetrics_event(perf_event);

 	/*
 	 * Ignore overflow events for counters that are scheduled to be
@@ -155,7 +166,7 @@ static void kvm_perf_overflow(struct perf_event *perf_event,
 	if (test_and_set_bit(pmc->idx, pmc_to_pmu(pmc)->reprogram_pmi))
 		return;

-	__kvm_perf_overflow(pmc, true);
+	__kvm_perf_overflow(pmc, true, metrics_of);

 	kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
 }
@@ -490,7 +501,7 @@ static void reprogram_counter(struct kvm_pmc *pmc)
 		goto reprogram_complete;

 	if (pmc->counter < pmc->prev_counter)
-		__kvm_perf_overflow(pmc, false);
+		__kvm_perf_overflow(pmc, false, false);

 	if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL)
 		printk_once("kvm pmu: pin control bit is ignored\n");

From patchwork Tue Aug 22 05:11:40 2023
X-Patchwork-Submitter: "Mi, Dapeng"
X-Patchwork-Id: 13360166
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini, Peter Zijlstra,
    Arnaldo Carvalho de Melo, Kan Liang, Like Xu, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers,
    Adrian Hunter
Cc: kvm@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-kernel@vger.kernel.org, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan,
    Yang Weijiang, Dapeng Mi, Dapeng Mi
Subject: [PATCH RFC v3 13/13] KVM: x86/pmu: Expose Topdown in MSR_IA32_PERF_CAPABILITIES
Date: Tue, 22 Aug 2023 13:11:40 +0800
Message-Id: <20230822051140.512879-14-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230822051140.512879-1-dapeng1.mi@linux.intel.com>
References: <20230822051140.512879-1-dapeng1.mi@linux.intel.com>

Topdown support is enumerated via IA32_PERF_CAPABILITIES[bit 15]. Enable
this bit for the guest when the feature is available on the host.

Co-developed-by: Yang Weijiang
Signed-off-by: Yang Weijiang
Signed-off-by: Dapeng Mi
---
 arch/x86/kvm/vmx/pmu_intel.c | 3 +++
 arch/x86/kvm/vmx/vmx.c       | 2 ++
 2 files changed, 5 insertions(+)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 04ccb8c6f7e4..5783cde00054 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -614,6 +614,9 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 		(((1ull << pmu->nr_arch_fixed_counters) - 1) << INTEL_PMC_IDX_FIXED));
 	pmu->global_ctrl_mask = counter_mask;

+	if (intel_pmu_metrics_is_enabled(vcpu))
+		pmu->global_ctrl_mask &= ~(1ULL << GLOBAL_CTRL_EN_PERF_METRICS);
+
 	/*
 	 * GLOBAL_STATUS and GLOBAL_OVF_CONTROL (a.k.a. GLOBAL_STATUS_RESET)
 	 * share reserved bit definitions. The kernel just happens to use
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e6849f780dba..69a425be55bf 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7827,6 +7827,8 @@ static u64 vmx_get_perf_capabilities(void)
 		perf_cap &= ~PERF_CAP_PEBS_BASELINE;
 	}

+	perf_cap |= host_perf_cap & PMU_CAP_PERF_METRICS;
+
 	return perf_cap;
 }
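With this final patch applied, a guest can probe for the feature before
touching PERF_METRICS. A minimal guest-kernel sketch (not part of this
series): the MSR index comes from the kernel's <asm/msr-index.h>, and
GUEST_PERF_CAP_PERF_METRICS is a hypothetical name mirroring the bit-15
definition that capabilities.h adds above:

#include <linux/bits.h>
#include <asm/msr.h>

#define GUEST_PERF_CAP_PERF_METRICS	BIT_ULL(15)

static bool guest_has_perf_metrics(void)
{
	u64 perf_cap;

	rdmsrl(MSR_IA32_PERF_CAPABILITIES, perf_cap);

	/*
	 * Bit 15 is set by vmx_get_perf_capabilities() when the host
	 * supports topdown metrics; fixed counter 3 (SLOTS) must also
	 * be enumerated via CPUID.0AH for the event group to count.
	 */
	return !!(perf_cap & GUEST_PERF_CAP_PERF_METRICS);
}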