From patchwork Fri Sep 1 07:40:47 2023
X-Patchwork-Submitter: "Zhang, Xiong Y"
X-Patchwork-Id: 13372161
From: Xiong Zhang
To: kvm@vger.kernel.org
Cc: seanjc@google.com, like.xu.linux@gmail.com, zhiyuan.lv@intel.com,
 zhenyu.z.wang@intel.com, kan.liang@intel.com, dapeng1.mi@linux.intel.com,
 Xiong Zhang
Subject: [kvm-unit-tests 1/6] x86: pmu: remove duplicate code
Date: Fri, 1 Sep 2023 15:40:47 +0800
Message-Id: <20230901074052.640296-2-xiong.y.zhang@intel.com>
In-Reply-To: <20230901074052.640296-1-xiong.y.zhang@intel.com>
References: <20230901074052.640296-1-xiong.y.zhang@intel.com>

Signed-off-by: Xiong Zhang
---
 lib/x86/pmu.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
index 0f2afd6..d06e945 100644
--- a/lib/x86/pmu.c
+++ b/lib/x86/pmu.c
@@ -16,11 +16,6 @@ void pmu_init(void)
 		pmu.fixed_counter_width = (cpuid_10.d >> 5) & 0xff;
 	}
 
-	if (pmu.version > 1) {
-		pmu.nr_fixed_counters = cpuid_10.d & 0x1f;
-		pmu.fixed_counter_width = (cpuid_10.d >> 5) & 0xff;
-	}
-
 	pmu.nr_gp_counters = (cpuid_10.a >> 8) & 0xff;
 	pmu.gp_counter_width = (cpuid_10.a >> 16) & 0xff;
 	pmu.gp_counter_mask_length = (cpuid_10.a >> 24) & 0xff;

From patchwork Fri Sep 1 07:40:48 2023
X-Patchwork-Submitter: "Zhang, Xiong Y"
X-Patchwork-Id: 13372163
From: Xiong Zhang
To: kvm@vger.kernel.org
Cc: seanjc@google.com, like.xu.linux@gmail.com, zhiyuan.lv@intel.com,
 zhenyu.z.wang@intel.com, kan.liang@intel.com, dapeng1.mi@linux.intel.com,
 Xiong Zhang
Subject: [kvm-unit-tests 2/6] x86: pmu: Add Freeze_LBRS_On_PMI test case
Date: Fri, 1 Sep 2023 15:40:48 +0800
Message-Id: <20230901074052.640296-3-xiong.y.zhang@intel.com>
In-Reply-To: <20230901074052.640296-1-xiong.y.zhang@intel.com>
References: <20230901074052.640296-1-xiong.y.zhang@intel.com>

Once IA32_DEBUGCTL.FREEZE_LBR_ON_PMI is set, the LBR stack is frozen when a
PMI is delivered. This commit adds a test case that checks whether the LBR
stack changes during the PMI handler.

PMU v2 introduced the legacy Freeze_LBRs_On_PMI mechanism: the processor
clears IA32_DEBUGCTL.LBR when the PMI happens, and SW must set it again in
the PMI handler to re-enable the LBR stack.

PMU v4 introduced the streamlined Freeze_LBRs_On_PMI mechanism: a new
LBR_FRZ bit [bit 58] is added to the IA32_PERF_GLOBAL_STATUS MSR. The
processor sets this bit when the PMI happens, and the bit also serves as a
control that keeps the LBR stack frozen; SW must clear it in the PMI
handler to re-enable the LBRs.

This commit checks both the legacy and the streamlined FREEZE_LBR_ON_PMI
features and their SW/HW sequences.
Signed-off-by: Xiong Zhang
---
 lib/x86/msr.h |   3 ++
 x86/pmu_lbr.c | 109 +++++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 111 insertions(+), 1 deletion(-)

diff --git a/lib/x86/msr.h b/lib/x86/msr.h
index 0e3fd03..9748436 100644
--- a/lib/x86/msr.h
+++ b/lib/x86/msr.h
@@ -430,6 +430,9 @@
 #define MSR_CORE_PERF_GLOBAL_CTRL	0x0000038f
 #define MSR_CORE_PERF_GLOBAL_OVF_CTRL	0x00000390
 
+/* PERF_GLOBAL_OVF_CTRL bits */
+#define MSR_CORE_PERF_GLOBAL_OVF_CTRL_LBR_FREEZE	(1ULL << 58)
+
 /* AMD Performance Counter Global Status and Control MSRs */
 #define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS	0xc0000300
 #define MSR_AMD64_PERF_CNTR_GLOBAL_CTL	0xc0000301
diff --git a/x86/pmu_lbr.c b/x86/pmu_lbr.c
index 40b63fa..24220f0 100644
--- a/x86/pmu_lbr.c
+++ b/x86/pmu_lbr.c
@@ -2,11 +2,17 @@
 #include "x86/processor.h"
 #include "x86/pmu.h"
 #include "x86/desc.h"
+#include "x86/apic-defs.h"
+#include "x86/apic.h"
+#include "x86/isr.h"
 
 #define N 1000000
+#define MAX_LBR 64
 
 volatile int count;
 u32 lbr_from, lbr_to;
+int max;
+bool pmi_received = false;
 
 static noinline int compute_flag(int i)
 {
@@ -41,9 +47,102 @@ static bool test_init_lbr_from_exception(u64 index)
 	return test_for_exception(GP_VECTOR, init_lbr, &index);
 }
 
+static void pmi_handler(isr_regs_t *regs)
+{
+	uint64_t lbr_tos, from[MAX_LBR], to[MAX_LBR];
+	uint64_t gbl_status, debugctl = 0, lbr_cur;
+	int i;
+
+	pmi_received = true;
+
+	if (pmu.version < 4) {
+		debugctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+		report((debugctl & DEBUGCTLMSR_LBR) == 0,
+		       "The guest LBR_EN is cleared in guest PMI");
+		gbl_status = rdmsr(pmu.msr_global_status);
+		report((gbl_status & BIT_ULL(0)) == BIT_ULL(0),
+		       "GP counter 0 overflow.");
+	} else {
+		gbl_status = rdmsr(pmu.msr_global_status);
+		report((gbl_status & (MSR_CORE_PERF_GLOBAL_OVF_CTRL_LBR_FREEZE | BIT_ULL(0)))
+		       == (MSR_CORE_PERF_GLOBAL_OVF_CTRL_LBR_FREEZE | BIT_ULL(0)),
+		       "GP counter 0 overflow and LBR freeze.");
+	}
+
+	lbr_tos = rdmsr(MSR_LBR_TOS);
+	for (i = 0; i < max; ++i) {
+		from[i] = rdmsr(lbr_from + i);
+		to[i] = rdmsr(lbr_to + i);
+	}
+
+	lbr_test();
+
+	lbr_cur = rdmsr(MSR_LBR_TOS);
+	report(lbr_cur == lbr_tos,
+	       "LBR TOS frozen in PMI, %lx -> %lx", lbr_tos, lbr_cur);
+	for (i = 0; i < max; ++i) {
+		lbr_cur = rdmsr(lbr_from + i);
+		report(lbr_cur == from[i],
+		       "LBR from %d frozen in PMI, %lx -> %lx", i, from[i], lbr_cur);
+		lbr_cur = rdmsr(lbr_to + i);
+		report(lbr_cur == to[i],
+		       "LBR to %d frozen in PMI, %lx -> %lx", i, to[i], lbr_cur);
+	}
+
+	if (pmu.version < 4) {
+		debugctl |= DEBUGCTLMSR_LBR;
+		wrmsr(MSR_IA32_DEBUGCTLMSR, debugctl);
+	}
+
+	wrmsr(pmu.msr_global_status_clr, gbl_status);
+
+	apic_write(APIC_EOI, 0);
+}
+
+/* GP counter 0 overflows after 2 instructions. */
+static void setup_gp_counter_0_overflow(void)
+{
+	uint64_t count, ctrl;
+	int i;
+
+	count = (1ull << pmu.gp_counter_width) - 1 - 2;
+	wrmsr(pmu.msr_gp_counter_base, count);
+
+	ctrl = EVNTSEL_EN | EVNTSEL_USR | EVNTSEL_OS | EVNTSEL_INT | 0xc0;
+	wrmsr(pmu.msr_gp_event_select_base, ctrl);
+
+	wrmsr(pmu.msr_global_ctl, 0x1);
+
+	apic_write(APIC_LVTPC, PMI_VECTOR);
+
+	irq_enable();
+	asm volatile("nop; nop; nop");
+	for (i = 0; i < 100000 && !pmi_received; i++)
+		asm volatile("pause");
+	irq_disable();
+}
+
+static void stop_gp_counter_0(void)
+{
+	wrmsr(pmu.msr_global_ctl, 0);
+	wrmsr(pmu.msr_gp_event_select_base, 0);
+}
+
+static void test_freeze_lbr_on_pmi(void)
+{
+	wrmsr(MSR_IA32_DEBUGCTLMSR,
+	      DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
+
+	setup_gp_counter_0_overflow();
+
+	stop_gp_counter_0();
+
+	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+}
+
 int main(int ac, char **av)
 {
-	int max, i;
+	int i;
 
 	setup_vm();
 
@@ -70,6 +169,12 @@ int main(int ac, char **av)
 	printf("PMU version: %d\n", pmu.version);
 	printf("LBR version: %ld\n", pmu_lbr_version());
 
+	handle_irq(PMI_VECTOR, pmi_handler);
+	apic_write(APIC_LVTPC, PMI_VECTOR);
+
+	if (pmu_has_full_writes())
+		pmu.msr_gp_counter_base = MSR_IA32_PMC0;
+
 	/* Look for LBR from and to MSRs */
 	lbr_from = MSR_LBR_CORE_FROM;
 	lbr_to = MSR_LBR_CORE_TO;
@@ -104,5 +209,7 @@ int main(int ac, char **av)
 	}
 	report(i == max, "The guest LBR FROM_IP/TO_IP values are good.");
 
+	test_freeze_lbr_on_pmi();
+
 	return report_summary();
 }

From patchwork Fri Sep 1 07:40:49 2023
X-Patchwork-Submitter: "Zhang, Xiong Y"
X-Patchwork-Id: 13372162
From: Xiong Zhang
To: kvm@vger.kernel.org
Cc: seanjc@google.com, like.xu.linux@gmail.com, zhiyuan.lv@intel.com,
 zhenyu.z.wang@intel.com, kan.liang@intel.com, dapeng1.mi@linux.intel.com,
 Xiong Zhang
Subject: [kvm-unit-tests 3/6] x86: pmu: PERF_GLOBAL_STATUS_SET MSR verification for vPMU v4
Date: Fri, 1 Sep 2023 15:40:49 +0800
Message-Id: <20230901074052.640296-4-xiong.y.zhang@intel.com>
In-Reply-To: <20230901074052.640296-1-xiong.y.zhang@intel.com>
References: <20230901074052.640296-1-xiong.y.zhang@intel.com>

The IA32_PERF_GLOBAL_STATUS_SET MSR is introduced with arch PMU v4. It
allows software to set individual bits in the IA32_PERF_GLOBAL_STATUS MSR.

This commit adds a test case for this MSR: after the global status has
been cleared on a counter overflow, the guest writes the counter's
overflow bit into the PERF_GLOBAL_STATUS_SET MSR, then checks that the bit
reads back as set in IA32_PERF_GLOBAL_STATUS.
Signed-off-by: Xiong Zhang
---
 lib/x86/msr.h | 1 +
 x86/pmu.c     | 8 ++++++++
 2 files changed, 9 insertions(+)

diff --git a/lib/x86/msr.h b/lib/x86/msr.h
index 9748436..63b8539 100644
--- a/lib/x86/msr.h
+++ b/lib/x86/msr.h
@@ -429,6 +429,7 @@
 #define MSR_CORE_PERF_GLOBAL_STATUS	0x0000038e
 #define MSR_CORE_PERF_GLOBAL_CTRL	0x0000038f
 #define MSR_CORE_PERF_GLOBAL_OVF_CTRL	0x00000390
+#define MSR_CORE_PERF_GLOBAL_STATUS_SET	0x00000391
 
 /* PERF_GLOBAL_OVF_CTRL bits */
 #define MSR_CORE_PERF_GLOBAL_OVF_CTRL_LBR_FREEZE	(1ULL << 58)
diff --git a/x86/pmu.c b/x86/pmu.c
index 72c2c9c..a171e9e 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -350,6 +350,14 @@ static void check_counter_overflow(void)
 		status = rdmsr(pmu.msr_global_status);
 		report(!(status & (1ull << idx)), "status clear-%d", i);
 		report(check_irq() == (i % 2), "irq-%d", i);
+		if (pmu.version >= 4) {
+			wrmsr(MSR_CORE_PERF_GLOBAL_STATUS_SET, 1ull << idx);
+			status = rdmsr(pmu.msr_global_status);
+			report(status & (1ull << idx), "status set-%d", i);
+			wrmsr(pmu.msr_global_status_clr, 1ull << idx);
+			status = rdmsr(pmu.msr_global_status);
+			report(!(status & (1ull << idx)), "status set clear-%d", i);
+		}
 	}
 
 	report_prefix_pop();

From patchwork Fri Sep 1 07:40:50 2023
X-Patchwork-Submitter: "Zhang, Xiong Y"
X-Patchwork-Id: 13372165
From: Xiong Zhang
To: kvm@vger.kernel.org
Cc: seanjc@google.com, like.xu.linux@gmail.com, zhiyuan.lv@intel.com,
 zhenyu.z.wang@intel.com, kan.liang@intel.com, dapeng1.mi@linux.intel.com,
 Xiong Zhang
Subject: [kvm-unit-tests 4/6] x86: pmu: PERF_GLOBAL_INUSE MSR verification for vPMU v4
Date: Fri, 1 Sep 2023 15:40:50 +0800
Message-Id: <20230901074052.640296-5-xiong.y.zhang@intel.com>
In-Reply-To: <20230901074052.640296-1-xiong.y.zhang@intel.com>
References: <20230901074052.640296-1-xiong.y.zhang@intel.com>

Arch PMU v4 introduces a new MSR, IA32_PERF_GLOBAL_INUSE. It provides an
"InUse" bit for each GP counter and each fixed counter in the processor.
Additionally, PMI InUse [bit 63] indicates whether the PMI mechanism has
been configured.

This commit adds a test case for this MSR: when a counter is started, its
index bit must be set in this MSR; when a counter is stopped by writing 0
into the counter's control MSR, its index bit must be cleared.

Signed-off-by: Xiong Zhang
---
 lib/x86/msr.h |  4 ++++
 x86/pmu.c     | 14 ++++++++++++--
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/lib/x86/msr.h b/lib/x86/msr.h
index 63b8539..3bffe80 100644
--- a/lib/x86/msr.h
+++ b/lib/x86/msr.h
@@ -430,6 +430,10 @@
 #define MSR_CORE_PERF_GLOBAL_CTRL	0x0000038f
 #define MSR_CORE_PERF_GLOBAL_OVF_CTRL	0x00000390
 #define MSR_CORE_PERF_GLOBAL_STATUS_SET	0x00000391
+#define MSR_CORE_PERF_GLOBAL_INUSE	0x00000392
+
+/* PERF_GLOBAL_INUSE bits */
+#define MSR_CORE_PERF_GLOBAL_INUSE_PMI_BIT	63
 
 /* PERF_GLOBAL_OVF_CTRL bits */
 #define MSR_CORE_PERF_GLOBAL_OVF_CTRL_LBR_FREEZE	(1ULL << 58)
diff --git a/x86/pmu.c b/x86/pmu.c
index a171e9e..0ec0062 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -155,6 +155,15 @@ static void __start_event(pmu_counter_t *evt, uint64_t count)
 		ctrl = (ctrl & ~(0xf << shift)) | (usrospmi << shift);
 		wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, ctrl);
 	}
+	if (pmu.version >= 4) {
+		u64 inuse = rdmsr(MSR_CORE_PERF_GLOBAL_INUSE);
+		int idx = event_to_global_idx(evt);
+
+		report(inuse & BIT_ULL(idx), "start counter_idx: %d", idx);
+		if (evt->config & EVNTSEL_INT)
+			report(inuse & BIT_ULL(MSR_CORE_PERF_GLOBAL_INUSE_PMI_BIT),
+			       "INT, start counter_idx: %d", idx);
+	}
 	global_enable(evt);
 	apic_write(APIC_LVTPC, PMI_VECTOR);
 }
@@ -168,14 +177,15 @@ static void stop_event(pmu_counter_t *evt)
 {
 	global_disable(evt);
 	if (is_gp(evt)) {
-		wrmsr(MSR_GP_EVENT_SELECTx(event_to_global_idx(evt)),
-		      evt->config & ~EVNTSEL_EN);
+		wrmsr(MSR_GP_EVENT_SELECTx(event_to_global_idx(evt)), 0);
 	} else {
 		uint32_t ctrl = rdmsr(MSR_CORE_PERF_FIXED_CTR_CTRL);
 		int shift = (evt->ctr - MSR_CORE_PERF_FIXED_CTR0) * 4;
 
 		wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, ctrl & ~(0xf << shift));
 	}
 	evt->count = rdmsr(evt->ctr);
+	if (pmu.version >= 4)
+		report((rdmsr(MSR_CORE_PERF_GLOBAL_INUSE) & BIT_ULL(evt->idx)) == 0,
+		       "stop counter idx: %d", evt->idx);
 }
 
 static noinline void measure_many(pmu_counter_t *evt, int count)

From patchwork Fri Sep 1 07:40:51 2023
X-Patchwork-Submitter: "Zhang, Xiong Y"
X-Patchwork-Id: 13372164
From: Xiong Zhang
To: kvm@vger.kernel.org
Cc: seanjc@google.com, like.xu.linux@gmail.com, zhiyuan.lv@intel.com,
 zhenyu.z.wang@intel.com, kan.liang@intel.com, dapeng1.mi@linux.intel.com,
 Xiong Zhang
Subject: [kvm-unit-tests 5/6] x86: pmu: Limit vcpu's fixed counter into fixed_events[]
Date: Fri, 1 Sep 2023 15:40:51 +0800
Message-Id: <20230901074052.640296-6-xiong.y.zhang@intel.com>
In-Reply-To: <20230901074052.640296-1-xiong.y.zhang@intel.com>
References: <20230901074052.640296-1-xiong.y.zhang@intel.com>

Arch PMU v5 has the Fixed Counter Enumeration feature: through
CPUID.0AH.ECX the user can expose a fixed counter whose index is greater
than or equal to the size of the fixed_events[] array, so limit the fixed
counter indices that the tests touch to the bounds of fixed_events[].
Signed-off-by: Xiong Zhang
---
 x86/pmu.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/x86/pmu.c b/x86/pmu.c
index 0ec0062..416e9d7 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -53,6 +53,7 @@ char *buf;
 
 static struct pmu_event *gp_events;
 static unsigned int gp_events_size;
+static unsigned int fixed_events_size;
 
 static inline void loop(void)
 {
@@ -256,6 +257,8 @@ static void check_fixed_counters(void)
 	int i;
 
 	for (i = 0; i < pmu.nr_fixed_counters; i++) {
+		if (i >= fixed_events_size)
+			continue;
 		cnt.ctr = fixed_events[i].unit_sel;
 		measure_one(&cnt);
 		report(verify_event(cnt.count, &fixed_events[i]), "fixed-%d", i);
@@ -277,6 +280,8 @@ static void check_counters_many(void)
 		n++;
 	}
 	for (i = 0; i < pmu.nr_fixed_counters; i++) {
+		if (i >= fixed_events_size)
+			continue;
 		cnt[n].ctr = fixed_events[i].unit_sel;
 		cnt[n].config = EVNTSEL_OS | EVNTSEL_USR;
 		n++;
@@ -700,6 +705,7 @@ int main(int ac, char **av)
 		}
 		gp_events = (struct pmu_event *)intel_gp_events;
 		gp_events_size = sizeof(intel_gp_events)/sizeof(intel_gp_events[0]);
+		fixed_events_size = sizeof(fixed_events)/sizeof(fixed_events[0]);
 		report_prefix_push("Intel");
 		set_ref_cycle_expectations();
 	} else {

From patchwork Fri Sep 1 07:40:52 2023
X-Patchwork-Submitter: "Zhang, Xiong Y"
X-Patchwork-Id: 13372166
From: Xiong Zhang
To: kvm@vger.kernel.org
Cc: seanjc@google.com, like.xu.linux@gmail.com, zhiyuan.lv@intel.com,
 zhenyu.z.wang@intel.com, kan.liang@intel.com, dapeng1.mi@linux.intel.com,
 Xiong Zhang
Subject: [kvm-unit-tests 6/6] x86: pmu: Support fixed counter enumeration in vPMU v5
Date: Fri, 1 Sep 2023 15:40:52 +0800
Message-Id: <20230901074052.640296-7-xiong.y.zhang@intel.com>
In-Reply-To: <20230901074052.640296-1-xiong.y.zhang@intel.com>
References: <20230901074052.640296-1-xiong.y.zhang@intel.com>

With Architectural Performance Monitoring version 5, CPUID.0AH.ECX
enumerates the supported Fixed Counters: it is a bit mask in which bit 'i'
set implies that Fixed Counter 'i' is supported (a fixed counter is
supported if its ECX bit is set or its index is below CPUID.0AH.EDX[4:0]).
The user can therefore expose non-contiguous fixed counters, e.g.
CPUID.0AH.ECX = 0x77 with CPUID.0AH.EDX[4:0] = 3 means fixed counters 0,
1, 2, 4, 5 and 6 are supported while fixed counter 3 is not.

This commit adds support for non-contiguous fixed counters. If the vPMU
version is < 5, nr_fixed_counters is the number of contiguous fixed
counters starting from 0. If the vPMU version is >= 5, nr_fixed_counters
is the index of the highest bit set in the fixed_counter_mask bitmap plus
one, and fixed counters with index in [0, nr_fixed_counters) may not all
be supported.

Signed-off-by: Xiong Zhang
---
 lib/x86/pmu.c  |  5 +++++
 lib/x86/pmu.h  |  6 ++++++
 x86/pmu.c      |  8 +++++---
 x86/pmu_pebs.c | 12 +++++++++---
 4 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
index d06e945..e6492ed 100644
--- a/lib/x86/pmu.c
+++ b/lib/x86/pmu.c
@@ -14,6 +14,11 @@ void pmu_init(void)
 	if (pmu.version > 1) {
 		pmu.nr_fixed_counters = cpuid_10.d & 0x1f;
 		pmu.fixed_counter_width = (cpuid_10.d >> 5) & 0xff;
+		pmu.fixed_counter_mask = (1u << pmu.nr_fixed_counters) - 1;
+		if (pmu.version >= 5) {
+			pmu.fixed_counter_mask = cpuid_10.c;
+			pmu.nr_fixed_counters = fls(cpuid_10.c) + 1;
+		}
 	}
 
 	pmu.nr_gp_counters = (cpuid_10.a >> 8) & 0xff;
diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h
index 8465e3c..038d74e 100644
--- a/lib/x86/pmu.h
+++ b/lib/x86/pmu.h
@@ -57,6 +57,7 @@ struct pmu_caps {
 	u8 version;
 	u8 nr_fixed_counters;
 	u8 fixed_counter_width;
+	u32 fixed_counter_mask;
 	u8 nr_gp_counters;
 	u8 gp_counter_width;
 	u8 gp_counter_mask_length;
@@ -106,6 +107,11 @@ static inline bool this_cpu_has_perf_global_status(void)
 	return pmu.version > 1;
 }
 
+static inline bool pmu_fixed_counter_supported(int i)
+{
+	return pmu.fixed_counter_mask & BIT(i);
+}
+
 static inline bool pmu_gp_counter_is_available(int i)
 {
 	return pmu.gp_counter_available & BIT(i);
diff --git a/x86/pmu.c b/x86/pmu.c
index 416e9d7..9806f29 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -257,7 +257,7 @@ static void check_fixed_counters(void)
 	int i;
 
 	for (i = 0; i < pmu.nr_fixed_counters; i++) {
-		if (i >= fixed_events_size)
+		if (i >= fixed_events_size || !pmu_fixed_counter_supported(i))
 			continue;
 		cnt.ctr = fixed_events[i].unit_sel;
 		measure_one(&cnt);
@@ -280,7 +280,7 @@ static void check_counters_many(void)
 		n++;
 	}
 	for (i = 0; i < pmu.nr_fixed_counters; i++) {
-		if (i >= fixed_events_size)
+		if (i >= fixed_events_size || !pmu_fixed_counter_supported(i))
 			continue;
 		cnt[n].ctr = fixed_events[i].unit_sel;
 		cnt[n].config = EVNTSEL_OS | EVNTSEL_USR;
@@ -437,7 +437,8 @@ static void check_rdpmc(void)
 		else
 			report(cnt.count == (u32)val, "fast-%d", i);
 	}
-	for (i = 0; i < pmu.nr_fixed_counters; i++) {
+	for (i = 0; i < pmu.nr_fixed_counters && pmu_fixed_counter_supported(i);
+	     i++) {
 		uint64_t x = val & ((1ull << pmu.fixed_counter_width) - 1);
 		pmu_counter_t cnt = {
 			.ctr = MSR_CORE_PERF_FIXED_CTR0 + i,
@@ -720,6 +721,7 @@ int main(int ac, char **av)
 	printf("Mask length: %d\n", pmu.gp_counter_mask_length);
 	printf("Fixed counters: %d\n", pmu.nr_fixed_counters);
 	printf("Fixed counter width: %d\n", pmu.fixed_counter_width);
+	printf("Supported Fixed counter mask: 0x%x\n", pmu.fixed_counter_mask);
 
 	apic_write(APIC_LVTPC, PMI_VECTOR);
 
diff --git a/x86/pmu_pebs.c b/x86/pmu_pebs.c
index 894ae6c..bc8e64d 100644
--- a/x86/pmu_pebs.c
+++ b/x86/pmu_pebs.c
@@ -222,7 +222,9 @@ static void pebs_enable(u64 bitmask, u64 pebs_data_cfg)
 	ds->pebs_interrupt_threshold = ds->pebs_buffer_base +
 		get_adaptive_pebs_record_size(pebs_data_cfg);
 
-	for (idx = 0; idx < pmu.nr_fixed_counters; idx++) {
+	for (idx = 0;
+	     idx < pmu.nr_fixed_counters && pmu_fixed_counter_supported(idx);
+	     idx++) {
 		if (!(BIT_ULL(FIXED_CNT_INDEX + idx) & bitmask))
 			continue;
 		if (has_baseline)
@@ -357,13 +359,17 @@ static void check_pebs_counters(u64 pebs_data_cfg)
 	unsigned int idx;
 	u64 bitmask = 0;
 
-	for (idx = 0; idx < pmu.nr_fixed_counters; idx++)
+	for (idx = 0;
+	     idx < pmu.nr_fixed_counters && pmu_fixed_counter_supported(idx);
+	     idx++)
		check_one_counter(FIXED, idx, pebs_data_cfg);
 	for (idx = 0; idx < max_nr_gp_events; idx++)
 		check_one_counter(GP, idx, pebs_data_cfg);
 
-	for (idx = 0; idx < pmu.nr_fixed_counters; idx++)
+	for (idx = 0;
+	     idx < pmu.nr_fixed_counters && pmu_fixed_counter_supported(idx);
+	     idx++)
 		bitmask |= BIT_ULL(FIXED_CNT_INDEX + idx);
 	for (idx = 0; idx < max_nr_gp_events; idx += 2)
 		bitmask |= BIT_ULL(idx);