From patchwork Mon Aug 14 11:50:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jinrong Liang X-Patchwork-Id: 13352741 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 98656C41513 for ; Mon, 14 Aug 2023 11:52:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229646AbjHNLv3 (ORCPT ); Mon, 14 Aug 2023 07:51:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36016 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229505AbjHNLv1 (ORCPT ); Mon, 14 Aug 2023 07:51:27 -0400 Received: from mail-pf1-x444.google.com (mail-pf1-x444.google.com [IPv6:2607:f8b0:4864:20::444]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A9665EA; Mon, 14 Aug 2023 04:51:26 -0700 (PDT) Received: by mail-pf1-x444.google.com with SMTP id d2e1a72fcca58-686f19b6dd2so2780199b3a.2; Mon, 14 Aug 2023 04:51:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1692013886; x=1692618686; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=bOG0YVT8JSBLBHyuyCJeatj3Kijd1In1QFgkizkkf5E=; b=qrFxZriGk1WLMSup7efZpsBcf8HRwcBOkpx9Jk7rxrGp7mVaDM4VyoZmMh8Wf5/Qur uVumNbksGvajhgCTjBrHcROqjb8dgRmP9uRuwL/w+pfCtdnGI7NW/PBs2Jt8AzVBiOCe Uh2G6qWJWlz7kuQTv+NIMAdHN3oQmD63Q8zc43d1Kpy1mQpboHwCWcjVoee/AqZPDgop 2DKwG7weJhRZ/Zku/oDiV+HeeOfMZ4lhGJ3CWZTE3wm6g+wOu/RUyX2jXsMLXdwvaLTd vYZaMD/NBaFkeAZ3D8MJesuJULauujpDgHLMFh+nZ6MvlWxZnjwFCeZk93PtiqXWFY92 RZkw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692013886; x=1692618686; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=bOG0YVT8JSBLBHyuyCJeatj3Kijd1In1QFgkizkkf5E=; b=YPgLFhLHkIh38unU5F9xkD/PzYfGT6XQ9aXf1yTZz/uQoZWKWL1yBTtKII9ZFzV8q9 sDvdSiiiirjBv321z7CuiII24x2beL5kXNAjSTD+oq5QjckSzk92oVK02Yni1RGVVJUP Yp1zwPJO/EvxIcwa8UydMCOTNVy5TYpxmCocqivRNlhBW6IFkahMooEBZbrpsZ8Y/ZBp iwRgPRSDTPkAA7LtGFZ22FMXlAQGinOSLb/oUwEFCLSI9EG4dU4YZgkL9OQGn6UL+hEQ tBpIkWrO1qv4RVMCQYlSmc2rXADUt2DMXL6MpUOiDFlo2UdvBEBUV5teV4rdm+YUg7x+ E17w== X-Gm-Message-State: AOJu0YzHBiLdmQFjPuJbacL8lKg6wHLiOvE0bl49/2HuwNj7HU1sJBO5 4ifEKpW2rd6TY280fMOfD6M= X-Google-Smtp-Source: AGHT+IHzoO8azA1GjT65Ok+0vzYcVDaLyni7kd1AOoKhr5junlW9Bay1mVn+CBaJnH+vFqO1IfwVmA== X-Received: by 2002:a05:6a00:1402:b0:687:5fdb:59ee with SMTP id l2-20020a056a00140200b006875fdb59eemr8547600pfu.12.1692013886089; Mon, 14 Aug 2023 04:51:26 -0700 (PDT) Received: from CLOUDLIANG-MB2.tencent.com ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id x7-20020a63b207000000b0055386b1415dsm8407848pge.51.2023.08.14.04.51.23 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Aug 2023 04:51:25 -0700 (PDT) From: Jinrong Liang X-Google-Original-From: Jinrong Liang To: Sean Christopherson Cc: Paolo Bonzini , Like Xu , David Matlack , Aaron Lewis , Vitaly Kuznetsov , Wanpeng Li , Jinrong Liang , kvm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 01/11] KVM: selftests: Add vcpu_set_cpuid_property() to set properties Date: Mon, 14 Aug 2023 19:50:58 +0800 Message-Id: <20230814115108.45741-2-cloudliang@tencent.com> 
X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230814115108.45741-1-cloudliang@tencent.com> References: <20230814115108.45741-1-cloudliang@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Jinrong Liang Add vcpu_set_cpuid_property() helper function for setting properties, which simplifies the process of setting CPUID properties for vCPUs. Suggested-by: Sean Christopherson Signed-off-by: Jinrong Liang --- .../selftests/kvm/include/x86_64/processor.h | 4 ++++ tools/testing/selftests/kvm/lib/x86_64/processor.c | 14 ++++++++++++++ 2 files changed, 18 insertions(+) diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h index 4fd042112526..6b146e1c6736 100644 --- a/tools/testing/selftests/kvm/include/x86_64/processor.h +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h @@ -973,6 +973,10 @@ static inline void vcpu_set_cpuid(struct kvm_vcpu *vcpu) void vcpu_set_cpuid_maxphyaddr(struct kvm_vcpu *vcpu, uint8_t maxphyaddr); +void vcpu_set_cpuid_property(struct kvm_vcpu *vcpu, + struct kvm_x86_cpu_property property, + uint32_t value); + void vcpu_clear_cpuid_entry(struct kvm_vcpu *vcpu, uint32_t function); void vcpu_set_or_clear_cpuid_feature(struct kvm_vcpu *vcpu, struct kvm_x86_cpu_feature feature, diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c index d8288374078e..0e029be66783 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c @@ -760,6 +760,20 @@ void vcpu_set_cpuid_maxphyaddr(struct kvm_vcpu *vcpu, uint8_t maxphyaddr) vcpu_set_cpuid(vcpu); } +void vcpu_set_cpuid_property(struct kvm_vcpu *vcpu, + struct kvm_x86_cpu_property property, + uint32_t value) +{ + struct kvm_cpuid_entry2 *entry; + + entry = __vcpu_get_cpuid_entry(vcpu, property.function, property.index); + + (&entry->eax)[property.reg] &= ~GENMASK(property.hi_bit, property.lo_bit); + (&entry->eax)[property.reg] |= value << (property.lo_bit); + + vcpu_set_cpuid(vcpu); +} + void vcpu_clear_cpuid_entry(struct kvm_vcpu *vcpu, uint32_t function) { struct kvm_cpuid_entry2 *entry = vcpu_get_cpuid_entry(vcpu, function); From patchwork Mon Aug 14 11:50:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jinrong Liang X-Patchwork-Id: 13352742 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3AE29C001DB for ; Mon, 14 Aug 2023 11:52:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229883AbjHNLwD (ORCPT ); Mon, 14 Aug 2023 07:52:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55818 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229826AbjHNLva (ORCPT ); Mon, 14 Aug 2023 07:51:30 -0400 Received: from mail-pf1-x442.google.com (mail-pf1-x442.google.com [IPv6:2607:f8b0:4864:20::442]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CF5D31AA; Mon, 14 Aug 2023 04:51:29 -0700 (PDT) Received: by mail-pf1-x442.google.com with SMTP id d2e1a72fcca58-686f0d66652so4062542b3a.2; Mon, 14 Aug 2023 04:51:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1692013889; x=1692618689; 
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=AP8ae3qQgO3DzVD/Nec443NRAN8vAky7C/bENgh3Bkg=; b=Q37UaYZA+7waWRGB3LCdyEMCricYxPaScACuYS5DMXNVBLv6cMYSEfuK5ErgLpd4vk lVjFndTlY32Qtw3r1cLEwOrYslCFiqzEh2QqBysDUu5Nv2VQjuRlLEvSVPn7W8OtPR3/ 0Wy8YF6oVeIUkjmrIkfKyjlc+cEDV8L2uMY9xmAIaJ8G4zdtkusptcoVzWyGStuwBt0q suYdjct53S/qe0B4M6EJ1ZeMa9imlcMnO9y2VJbKiN7tIAtf02meNc+A60CZdInOzlpI hOLwOXbNWxdp7wo2atKHK4PI+VPpTilNz4z6i4EZdXSLCLtbQGHDY1hus3o9KaszWTWR pS9w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692013889; x=1692618689; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=AP8ae3qQgO3DzVD/Nec443NRAN8vAky7C/bENgh3Bkg=; b=l0zwlL+azSQtwaIbuERed5OJBfhHa2yO/qiQQqVuCMnhYaU1CcZZzMpsQpW113RfW/ C6bB4XK5LFUjPPNvFjqNZe3brud8ItkMLrpIISsD6UZme5HOt729oSOnaqh6PB7dKNP8 /mb2KeeLUcS0qh02IWEXvVYBcgiNJlTR9pAxdetD0UmooUDlgkhqW37qvV/NaMchfgG/ TlgRw3ttmkx+kfDjpe5byv5rCe0pk6gO3ee4Q8UumgJyxdYKrkrS+jo6QHuUr3wanpm8 UJ8KDKrQBKIeOJDxsBzF0aykfIowYTTJPK0HwzmZ6LQ59Dkx773SwNO9zbBGR2nfzLkW saXA== X-Gm-Message-State: AOJu0YzXcJa+L7bi1s9nTTe8HDIAZXH197Tq9gGEgAnTcvqZzVRZdEV9 4CNY39vaa1Q13Hu74AzNxX3hYyWHmuxWtKc1lZg= X-Google-Smtp-Source: AGHT+IEQpjnn4VlPM2TBrIJh0yzw3qHi9E878iI4plyMnLxbGqFqN1X4lZztYvRiXzH+s9rTLBGXFA== X-Received: by 2002:a05:6a20:9187:b0:138:1980:1837 with SMTP id v7-20020a056a20918700b0013819801837mr13062892pzd.13.1692013889157; Mon, 14 Aug 2023 04:51:29 -0700 (PDT) Received: from CLOUDLIANG-MB2.tencent.com ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id x7-20020a63b207000000b0055386b1415dsm8407848pge.51.2023.08.14.04.51.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Aug 2023 04:51:28 -0700 (PDT) From: Jinrong Liang X-Google-Original-From: Jinrong Liang To: Sean Christopherson Cc: Paolo Bonzini , Like Xu , David Matlack , Aaron Lewis , Vitaly Kuznetsov , Wanpeng Li , Jinrong Liang , kvm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 02/11] KVM: selftests: Add pmu.h for PMU events and common masks Date: Mon, 14 Aug 2023 19:50:59 +0800 Message-Id: <20230814115108.45741-3-cloudliang@tencent.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230814115108.45741-1-cloudliang@tencent.com> References: <20230814115108.45741-1-cloudliang@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Jinrong Liang By defining the PMU performance events and masks relevant for x86 in the new pmu.h header, it becomes easier to reference them, minimizing potential errors in code that handles these values. Suggested-by: Sean Christopherson Signed-off-by: Jinrong Liang --- .../selftests/kvm/include/x86_64/pmu.h | 124 ++++++++++++++++++ 1 file changed, 124 insertions(+) create mode 100644 tools/testing/selftests/kvm/include/x86_64/pmu.h diff --git a/tools/testing/selftests/kvm/include/x86_64/pmu.h b/tools/testing/selftests/kvm/include/x86_64/pmu.h new file mode 100644 index 000000000000..eb60b2065fac --- /dev/null +++ b/tools/testing/selftests/kvm/include/x86_64/pmu.h @@ -0,0 +1,124 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * tools/testing/selftests/kvm/include/x86_64/pmu.h + * + * Copyright (C) 2023, Tencent, Inc. 
+ */ +#ifndef SELFTEST_KVM_PMU_H +#define SELFTEST_KVM_PMU_H + +#include "processor.h" + +#define GP_COUNTER_NR_OFS_BIT 8 +#define EVENT_LENGTH_OFS_BIT 24 +#define INTEL_PMC_IDX_FIXED 32 + +#define AMD64_NR_COUNTERS 4 +#define AMD64_NR_COUNTERS_CORE 6 + +#define PMU_CAP_FW_WRITES BIT_ULL(13) +#define RDPMC_FIXED_BASE BIT_ULL(30) + +#define ARCH_PERFMON_EVENTSEL_EVENT GENMASK_ULL(7, 0) +#define ARCH_PERFMON_EVENTSEL_UMASK GENMASK_ULL(15, 8) +#define ARCH_PERFMON_EVENTSEL_USR BIT_ULL(16) +#define ARCH_PERFMON_EVENTSEL_OS BIT_ULL(17) + +#define ARCH_PERFMON_EVENTSEL_EDGE BIT_ULL(18) +#define ARCH_PERFMON_EVENTSEL_PIN_CONTROL BIT_ULL(19) +#define ARCH_PERFMON_EVENTSEL_INT BIT_ULL(20) +#define ARCH_PERFMON_EVENTSEL_ANY BIT_ULL(21) +#define ARCH_PERFMON_EVENTSEL_ENABLE BIT_ULL(22) +#define ARCH_PERFMON_EVENTSEL_INV BIT_ULL(23) +#define ARCH_PERFMON_EVENTSEL_CMASK GENMASK_ULL(31, 24) + +#define PMU_VERSION_MASK GENMASK_ULL(7, 0) +#define EVENT_LENGTH_MASK GENMASK_ULL(31, EVENT_LENGTH_OFS_BIT) +#define GP_COUNTER_NR_MASK GENMASK_ULL(15, GP_COUNTER_NR_OFS_BIT) +#define FIXED_COUNTER_NR_MASK GENMASK_ULL(4, 0) + +/* Definitions for Architectural Performance Events */ +#define ARCH_EVENT(select, umask) (((select) & 0xff) | ((umask) & 0xff) << 8) + +enum intel_pmu_architectural_events { + /* + * The order of the architectural events matters as support for each + * event is enumerated via CPUID using the index of the event. + */ + INTEL_ARCH_CPU_CYCLES, + INTEL_ARCH_INSTRUCTIONS_RETIRED, + INTEL_ARCH_REFERENCE_CYCLES, + INTEL_ARCH_LLC_REFERENCES, + INTEL_ARCH_LLC_MISSES, + INTEL_ARCH_BRANCHES_RETIRED, + INTEL_ARCH_BRANCHES_MISPREDICTED, + + NR_REAL_INTEL_ARCH_EVENTS, + + /* + * Pseudo-architectural event used to implement IA32_FIXED_CTR2, a.k.a. + * TSC reference cycles. The architectural reference cycles event may + * or may not actually use the TSC as the reference, e.g. might use the + * core crystal clock or the bus clock (yeah, "architectural"). 
+ */ + PSEUDO_ARCH_REFERENCE_CYCLES = NR_REAL_INTEL_ARCH_EVENTS, + NR_INTEL_ARCH_EVENTS, +}; + +static const uint64_t intel_arch_events[] = { + [INTEL_ARCH_CPU_CYCLES] = ARCH_EVENT(0x3c, 0x0), + [INTEL_ARCH_INSTRUCTIONS_RETIRED] = ARCH_EVENT(0xc0, 0x0), + [INTEL_ARCH_REFERENCE_CYCLES] = ARCH_EVENT(0x3c, 0x1), + [INTEL_ARCH_LLC_REFERENCES] = ARCH_EVENT(0x2e, 0x4f), + [INTEL_ARCH_LLC_MISSES] = ARCH_EVENT(0x2e, 0x41), + [INTEL_ARCH_BRANCHES_RETIRED] = ARCH_EVENT(0xc4, 0x0), + [INTEL_ARCH_BRANCHES_MISPREDICTED] = ARCH_EVENT(0xc5, 0x0), + [PSEUDO_ARCH_REFERENCE_CYCLES] = ARCH_EVENT(0xa4, 0x1), +}; + +/* mapping between fixed pmc index and intel_arch_events array */ +static const int fixed_pmc_events[] = { + [0] = INTEL_ARCH_INSTRUCTIONS_RETIRED, + [1] = INTEL_ARCH_CPU_CYCLES, + [2] = PSEUDO_ARCH_REFERENCE_CYCLES, +}; + +enum amd_pmu_k7_events { + AMD_ZEN_CORE_CYCLES, + AMD_ZEN_INSTRUCTIONS, + AMD_ZEN_BRANCHES, + AMD_ZEN_BRANCH_MISSES, +}; + +static const uint64_t amd_arch_events[] = { + [AMD_ZEN_CORE_CYCLES] = ARCH_EVENT(0x76, 0x00), + [AMD_ZEN_INSTRUCTIONS] = ARCH_EVENT(0xc0, 0x00), + [AMD_ZEN_BRANCHES] = ARCH_EVENT(0xc2, 0x00), + [AMD_ZEN_BRANCH_MISSES] = ARCH_EVENT(0xc3, 0x00), +}; + +static inline bool arch_event_is_supported(struct kvm_vcpu *vcpu, + uint8_t arch_event) +{ + struct kvm_cpuid_entry2 *entry; + + entry = vcpu_get_cpuid_entry(vcpu, 0xa); + + return !(entry->ebx & BIT_ULL(arch_event)) && + (kvm_cpuid_property(vcpu->cpuid, + X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH) > arch_event); +} + +static inline bool fixed_counter_is_supported(struct kvm_vcpu *vcpu, + uint8_t fixed_counter_idx) +{ + struct kvm_cpuid_entry2 *entry; + + entry = vcpu_get_cpuid_entry(vcpu, 0xa); + + return (entry->ecx & BIT_ULL(fixed_counter_idx) || + (kvm_cpuid_property(vcpu->cpuid, X86_PROPERTY_PMU_NR_FIXED_COUNTERS) > + fixed_counter_idx)); +} + +#endif /* SELFTEST_KVM_PMU_H */ From patchwork Mon Aug 14 11:51:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jinrong Liang X-Patchwork-Id: 13352744 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BFF9EC04A6A for ; Mon, 14 Aug 2023 11:52:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230361AbjHNLwF (ORCPT ); Mon, 14 Aug 2023 07:52:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55852 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229868AbjHNLvg (ORCPT ); Mon, 14 Aug 2023 07:51:36 -0400 Received: from mail-oi1-x241.google.com (mail-oi1-x241.google.com [IPv6:2607:f8b0:4864:20::241]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C045110CE; Mon, 14 Aug 2023 04:51:32 -0700 (PDT) Received: by mail-oi1-x241.google.com with SMTP id 5614622812f47-3a7aedc57ffso3358899b6e.2; Mon, 14 Aug 2023 04:51:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1692013892; x=1692618692; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=9TqYglirc6iqEIKO+wO9Xn7uOILA+ARNWEmI7KJ7F4M=; b=RFEzIm7P8aQRy+dmBC9a1djzShGMtfQiHVqQ240yzY6nDAGwlHB1FVRTAu6/z5AmL3 5mfalY5AowD/VRzE7mUH7ZnfZT/QWx+ZB7mNE6/cbV+frzhhZLknZreBtg4TcWW63224 lnYTenXDUMQEmy/p0XToNpcjdv3+gUuJGs5/hrQ8vH5LpwAT8NrElzaH7tn0mtnH4pK7 
HfFY7I1d0ONPKGcC3r0IlCuxuKm4IpesSiP+TfyNU++BDhY2915LDTxmWESmmoRxPaVC BiQTbRK/+imXX2OPkIWUaEdb6GTNYFPLrkWTj+6aVF9DPu4ATZaJz7ELkGvrOBqOOc8D 7oAw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692013892; x=1692618692; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=9TqYglirc6iqEIKO+wO9Xn7uOILA+ARNWEmI7KJ7F4M=; b=FW7voQUOx/QHeF3WEtaKDBedbfZxN2t9BujTCAM46bz1OnXdQSbxyuVuNPF9Kg3nLz 6Nw60XmUF0VHp6Mc8mcbPoc90jsiNiENiJ7QBjr7XMakDi7NlX0oZeLpbNr+BdxZS48J t+buKpEEhzN/QEixDYXUdAAVpds8KV4FHMN0ekN/Z8Jta27clVhQwfFEfmJYKNeMpvgR lppZGTiKQQQxmHrHMAsvsqawekVCoeqdYH8PP+Xgby55MZn+Q8orLs2+H4PwMkCnu/9j mzqcOzS99+JsxzaymXfdw6JTu7CwC4MWck0DBH/O0zg2svLoWsrTMDAcEgGgR5NNSMg/ g97g== X-Gm-Message-State: AOJu0Yx+e1DSPQvRps/acofSepRXsZI1fm1bB+3h8vKoAjX+/UtD1D/X R/ENsE9bi8IxnypS27fHgPw= X-Google-Smtp-Source: AGHT+IE2TEstYAHJUFL4w+ilEpyMO371OQ50Hud9px2LSENpz9AWEEjG0yDL9EgVcMYQuMmAcVo1lA== X-Received: by 2002:a05:6808:148b:b0:3a7:a2f4:9873 with SMTP id e11-20020a056808148b00b003a7a2f49873mr10455204oiw.35.1692013892084; Mon, 14 Aug 2023 04:51:32 -0700 (PDT) Received: from CLOUDLIANG-MB2.tencent.com ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id x7-20020a63b207000000b0055386b1415dsm8407848pge.51.2023.08.14.04.51.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Aug 2023 04:51:31 -0700 (PDT) From: Jinrong Liang X-Google-Original-From: Jinrong Liang To: Sean Christopherson Cc: Paolo Bonzini , Like Xu , David Matlack , Aaron Lewis , Vitaly Kuznetsov , Wanpeng Li , Jinrong Liang , kvm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 03/11] KVM: selftests: Test Intel PMU architectural events on gp counters Date: Mon, 14 Aug 2023 19:51:00 +0800 Message-Id: <20230814115108.45741-4-cloudliang@tencent.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230814115108.45741-1-cloudliang@tencent.com> References: <20230814115108.45741-1-cloudliang@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Jinrong Liang Add test cases to check if different Architectural events are available after it's marked as unavailable via CPUID. It covers vPMU event filtering logic based on Intel CPUID, which is a complement to pmu_event_filter. According to Intel SDM, the number of architectural events is reported through CPUID.0AH:EAX[31:24] and the architectural event x is supported if EBX[x]=0 && EAX[31:24]>x. 
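For illustration only (not part of this patch), that rule can be written directly against the raw CPUID.0AH register values; the helper below is hypothetical and assumes the usual <stdint.h> types:

	static inline bool cpuid_0a_arch_event_available(uint32_t eax, uint32_t ebx,
							 uint8_t event_idx)
	{
		/* EAX[31:24] is the length of the EBX bit vector of architectural events. */
		uint8_t vec_len = (eax >> 24) & 0xff;

		/* Supported iff EBX[event_idx] == 0 and EAX[31:24] > event_idx. */
		return !(ebx & (1u << event_idx)) && event_idx < vec_len;
	}
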
Co-developed-by: Like Xu Signed-off-by: Like Xu Signed-off-by: Jinrong Liang --- tools/testing/selftests/kvm/Makefile | 1 + .../kvm/x86_64/pmu_basic_functionality_test.c | 158 ++++++++++++++++++ 2 files changed, 159 insertions(+) create mode 100644 tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 77026907968f..965a36562ef8 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -80,6 +80,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/mmio_warning_test TEST_GEN_PROGS_x86_64 += x86_64/monitor_mwait_test TEST_GEN_PROGS_x86_64 += x86_64/nested_exceptions_test TEST_GEN_PROGS_x86_64 += x86_64/platform_info_test +TEST_GEN_PROGS_x86_64 += x86_64/pmu_basic_functionality_test TEST_GEN_PROGS_x86_64 += x86_64/pmu_event_filter_test TEST_GEN_PROGS_x86_64 += x86_64/set_boot_cpu_id TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test diff --git a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c new file mode 100644 index 000000000000..c04eb0bdf69f --- /dev/null +++ b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c @@ -0,0 +1,158 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Test the consistency of the PMU's CPUID and its features + * + * Copyright (C) 2023, Tencent, Inc. + * + * Check that the VM's PMU behaviour is consistent with the + * VM CPUID definition. + */ + +#define _GNU_SOURCE /* for program_invocation_short_name */ +#include + +#include "pmu.h" + +/* Guest payload for any performance counter counting */ +#define NUM_BRANCHES 10 + +static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct kvm_vcpu **vcpu, + void *guest_code) +{ + struct kvm_vm *vm; + + vm = vm_create_with_one_vcpu(vcpu, guest_code); + vm_init_descriptor_tables(vm); + vcpu_init_descriptor_tables(*vcpu); + + return vm; +} + +static uint64_t run_vcpu(struct kvm_vcpu *vcpu, uint64_t *ucall_arg) +{ + struct ucall uc; + + vcpu_run(vcpu); + switch (get_ucall(vcpu, &uc)) { + case UCALL_SYNC: + *ucall_arg = uc.args[1]; + break; + case UCALL_DONE: + break; + default: + TEST_FAIL("Unexpected ucall: %lu", uc.cmd); + } + return uc.cmd; +} + +static void guest_measure_loop(uint64_t event_code) +{ + uint32_t nr_gp_counters = this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS); + uint32_t pmu_version = this_cpu_property(X86_PROPERTY_PMU_VERSION); + uint32_t counter_msr; + unsigned int i; + + if (rdmsr(MSR_IA32_PERF_CAPABILITIES) & PMU_CAP_FW_WRITES) + counter_msr = MSR_IA32_PMC0; + else + counter_msr = MSR_IA32_PERFCTR0; + + for (i = 0; i < nr_gp_counters; i++) { + wrmsr(counter_msr + i, 0); + wrmsr(MSR_P6_EVNTSEL0 + i, ARCH_PERFMON_EVENTSEL_OS | + ARCH_PERFMON_EVENTSEL_ENABLE | event_code); + + if (pmu_version > 1) { + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, BIT_ULL(i)); + __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES})); + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); + GUEST_SYNC(_rdpmc(i)); + } else { + __asm__ __volatile__("loop ." 
: "+c"((int){NUM_BRANCHES})); + GUEST_SYNC(_rdpmc(i)); + } + } + + GUEST_DONE(); +} + +static void test_arch_events_cpuid(struct kvm_vcpu *vcpu, + uint8_t arch_events_bitmap_size, + uint8_t arch_events_unavailable_mask, + uint8_t idx) +{ + uint64_t counter_val = 0; + bool is_supported; + + vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH, + arch_events_bitmap_size); + vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_EVENTS_MASK, + arch_events_unavailable_mask); + + is_supported = arch_event_is_supported(vcpu, idx); + vcpu_args_set(vcpu, 1, intel_arch_events[idx]); + + while (run_vcpu(vcpu, &counter_val) != UCALL_DONE) + TEST_ASSERT_EQ(is_supported, !!counter_val); +} + +static void intel_check_arch_event_is_unavl(uint8_t idx) +{ + uint8_t eax_evt_vec, ebx_unavl_mask, i, j; + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + /* + * A brute force iteration of all combinations of values is likely to + * exhaust the limit of the single-threaded thread fd nums, so it's + * tested here by iterating through all valid values on a single bit. + */ + for (i = 0; i < ARRAY_SIZE(intel_arch_events); i++) { + eax_evt_vec = BIT_ULL(i); + for (j = 0; j < ARRAY_SIZE(intel_arch_events); j++) { + ebx_unavl_mask = BIT_ULL(j); + vm = pmu_vm_create_with_one_vcpu(&vcpu, + guest_measure_loop); + test_arch_events_cpuid(vcpu, eax_evt_vec, + ebx_unavl_mask, idx); + + kvm_vm_free(vm); + } + } +} + +static void intel_test_arch_events(void) +{ + uint8_t idx; + + for (idx = 0; idx < ARRAY_SIZE(intel_arch_events); idx++) { + /* + * Given the stability of performance event recurrence, + * only these arch events are currently being tested: + * + * - Core cycle event (idx = 0) + * - Instruction retired event (idx = 1) + * - Reference cycles event (idx = 2) + * - Branch instruction retired event (idx = 5) + */ + if (idx > INTEL_ARCH_INSTRUCTIONS_RETIRED && + idx != INTEL_ARCH_BRANCHES_RETIRED) + continue; + + intel_check_arch_event_is_unavl(idx); + } +} + +int main(int argc, char *argv[]) +{ + TEST_REQUIRE(get_kvm_param_bool("enable_pmu")); + + TEST_REQUIRE(host_cpu_is_intel); + TEST_REQUIRE(kvm_cpu_has_p(X86_PROPERTY_PMU_VERSION)); + TEST_REQUIRE(kvm_cpu_property(X86_PROPERTY_PMU_VERSION) > 0); + TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_PDCM)); + + intel_test_arch_events(); + + return 0; +} From patchwork Mon Aug 14 11:51:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jinrong Liang X-Patchwork-Id: 13352743 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 88BEFC41513 for ; Mon, 14 Aug 2023 11:52:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230084AbjHNLwE (ORCPT ); Mon, 14 Aug 2023 07:52:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55858 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229882AbjHNLvg (ORCPT ); Mon, 14 Aug 2023 07:51:36 -0400 Received: from mail-pf1-x442.google.com (mail-pf1-x442.google.com [IPv6:2607:f8b0:4864:20::442]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8DE22EA; Mon, 14 Aug 2023 04:51:35 -0700 (PDT) Received: by mail-pf1-x442.google.com with SMTP id d2e1a72fcca58-68783b2e40bso2897393b3a.3; Mon, 14 Aug 2023 04:51:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1692013895; 
x=1692618695; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=3Isc+tGprLo3aak9bWJdMKknXw+IJrOLcgnKWJJM4lY=; b=TnNoGgFotOm7S+iWt84C2+bCoMsxRYGXYWfdmRTgOwmTnsb3CobNvjSOCKYtC16Cla DPNSjOtqnkSg5MrFihzNupOUmMMKqtMJ1c61CIwnMUJXyyzELKjxY7oGoR9M+OkUdIqv L5GpBfKSSCt2tbPrDHLkuNZLt+gQDb9g6xTt3IOoFBlnWIZXdOaMfRVHsTeeEOz24gzJ FKyTJ8G4vs4t3JGIJE/oDeCMlaWAWYToEn60GwVxm7uyl138w2qs5/izcfEBBNaGpiIl UkxtlNaZbo6B2X+dixpcK0yz7E9AEks1K4uMOYq9bzXrDO2qNEdho6jyMHPL3RVpDZLU 44FQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692013895; x=1692618695; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=3Isc+tGprLo3aak9bWJdMKknXw+IJrOLcgnKWJJM4lY=; b=DdX8TJpycaU9970A+dSCzw4XFMf5/17jCINELDyV7j3JQGipCb++zXQB+xkTZWYEYr ouZx8PzkokH8CHFvNJ0qpks0bCSjzl3atYp6+MBkdOK4HNfsIkwZ/mwq6t4w38QDL3Vk maukn2KaoRE7eqO0U8X3914xrMgKs8Ctuq+gZLo8EbKjbyPl0Q1TAfCXnUddBrYL2BhX Z6iGsUnW/WI6/1jHx8tpirpxHYTnOqUD8lKBALhtUXXTzZfZ5HJbYEWQYEWDXHukXYl3 vANsQVzpQn1364Y+szMxwac0InJAT4OWhfm6OuvaB9vez9Cf3mzJTIARGH2KZ5WuFI5w 3cxQ== X-Gm-Message-State: AOJu0YxWDygqoqECbQ0dbgR+ESKSCWdq/5gfVMLLvQiKDmDULeu6iLEI BjDtsrGSQ7gAkcp12I8RIvw= X-Google-Smtp-Source: AGHT+IGPVokJ/z0kOtUc2p2hGpNzMsgHljlX9ntMujLgO+jlTFpot/ArWe0qBYrRjMZLs08tpNVnUA== X-Received: by 2002:a05:6a20:12d1:b0:13d:ac08:6b72 with SMTP id v17-20020a056a2012d100b0013dac086b72mr10160283pzg.18.1692013895007; Mon, 14 Aug 2023 04:51:35 -0700 (PDT) Received: from CLOUDLIANG-MB2.tencent.com ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id x7-20020a63b207000000b0055386b1415dsm8407848pge.51.2023.08.14.04.51.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Aug 2023 04:51:34 -0700 (PDT) From: Jinrong Liang X-Google-Original-From: Jinrong Liang To: Sean Christopherson Cc: Paolo Bonzini , Like Xu , David Matlack , Aaron Lewis , Vitaly Kuznetsov , Wanpeng Li , Jinrong Liang , kvm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 04/11] KVM: selftests: Test Intel PMU architectural events on fixed counters Date: Mon, 14 Aug 2023 19:51:01 +0800 Message-Id: <20230814115108.45741-5-cloudliang@tencent.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230814115108.45741-1-cloudliang@tencent.com> References: <20230814115108.45741-1-cloudliang@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Jinrong Liang Update test to cover Intel PMU architectural events on fixed counters. Per Intel SDM, PMU users can also count architecture performance events on fixed counters (specifically, FIXED_CTR0 for the retired instructions and FIXED_CTR1 for cpu core cycles event). Therefore, if guest's CPUID indicates that an architecture event is not available, the corresponding fixed counter will also not count that event. 
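For reference, a minimal guest-side sketch of programming one fixed counter (illustrative only, not part of this patch; it assumes the selftests' wrmsr()/rdmsr()/GUEST_SYNC() helpers and the INTEL_PMC_IDX_FIXED/BIT_ULL definitions used elsewhere in this series):

	/* Clear the counter so the final value reflects only this run. */
	wrmsr(MSR_CORE_PERF_FIXED_CTR0 + i, 0);
	/* Each fixed counter owns a 4-bit field in FIXED_CTR_CTRL; bit 0 of that field enables ring-0 counting. */
	wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * i));
	/* Fixed counter i is global-enable bit INTEL_PMC_IDX_FIXED + i (i.e. 32 + i). */
	wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, BIT_ULL(INTEL_PMC_IDX_FIXED + i));
	/* ... run the workload being measured ... */
	wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
	GUEST_SYNC(rdmsr(MSR_CORE_PERF_FIXED_CTR0 + i));
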
Co-developed-by: Like Xu Signed-off-by: Like Xu Signed-off-by: Jinrong Liang --- .../kvm/x86_64/pmu_basic_functionality_test.c | 21 +++++++++++++++++++ 1 file changed, 21 insertions(+) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c index c04eb0bdf69f..daa45aa285bb 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c @@ -47,6 +47,7 @@ static uint64_t run_vcpu(struct kvm_vcpu *vcpu, uint64_t *ucall_arg) static void guest_measure_loop(uint64_t event_code) { + uint32_t nr_fixed_counter = this_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS); uint32_t nr_gp_counters = this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS); uint32_t pmu_version = this_cpu_property(X86_PROPERTY_PMU_VERSION); uint32_t counter_msr; @@ -73,6 +74,26 @@ static void guest_measure_loop(uint64_t event_code) } } + if (pmu_version < 2 || nr_fixed_counter < 1) + goto done; + + if (event_code == intel_arch_events[INTEL_ARCH_INSTRUCTIONS_RETIRED]) + i = 0; + else if (event_code == intel_arch_events[INTEL_ARCH_CPU_CYCLES]) + i = 1; + else + goto done; + + wrmsr(MSR_CORE_PERF_FIXED_CTR0 + i, 0); + wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * i)); + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, BIT_ULL(INTEL_PMC_IDX_FIXED + i)); + + __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES})); + + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); + GUEST_SYNC(_rdpmc(RDPMC_FIXED_BASE | i)); + +done: GUEST_DONE(); } From patchwork Mon Aug 14 11:51:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jinrong Liang X-Patchwork-Id: 13352746 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B197EC04E69 for ; Mon, 14 Aug 2023 11:52:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230464AbjHNLwH (ORCPT ); Mon, 14 Aug 2023 07:52:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55872 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229913AbjHNLvj (ORCPT ); Mon, 14 Aug 2023 07:51:39 -0400 Received: from mail-pf1-x441.google.com (mail-pf1-x441.google.com [IPv6:2607:f8b0:4864:20::441]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7AA76EA; Mon, 14 Aug 2023 04:51:38 -0700 (PDT) Received: by mail-pf1-x441.google.com with SMTP id d2e1a72fcca58-688142a392eso3041255b3a.3; Mon, 14 Aug 2023 04:51:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1692013898; x=1692618698; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=0Xx2H5qBp82cfooizXIXNIdk1Jryabv6Gra/eZNCPNc=; b=A6z0GNwCAKQ8ICz3mVzFFHUNd5hf45mUeTrH5+RSnNd7P87XEeVtTuWUzZUYZyemyh nTI3tTlrnY7v8DC5FvPSEqtGJ/XjrXozyAa/Ow4hjIikY4QYhhYk0w9d4DIDZ7mtpGBx Rptb5+y7Xv4TdeV96HkpKFbe0dBgInZCuhMzBU5olzB7WwZiST4LVqQHZ6QBZBH10mpw 4rElbPs57tLAcNfIhq0dh1ELRc7nK8Tp4oPvIVFJGwZq6Q0q+umZo0uq3CACGuzkijPt PiXrhZ/lDTmDhj6a6UHbUARSxdjjXlvjfg6+F3py00ceGFyrClcs1ysxBZzcbp4vtFvZ CNnw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692013898; x=1692618698; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=0Xx2H5qBp82cfooizXIXNIdk1Jryabv6Gra/eZNCPNc=; b=UeUvdglvpXnl/67zJr2f3WVtUmecAogUXnAAG+emWVNYCzWhWqbG74DxTfcNcdnuQn 4ADSOPFLyfhMv+jIwnc7c8qg4WzKL+AZZzJ31l0PArp1zG8m34QHrLMNC+Z9v3XjG5hy o575NgOC3/pdLP8r328pcTM4ciN0RZjH2gAzzcgDHsyDA/sI+R/Y7Mr0HHgbUGPmC8PC yctXH+LXpyrfkuChJsd484T7lAJjDyHDGHL8jtuYP7dHaeYwG36DLazLgJJXvAe83a2L +5gPcxiEgYbcLjGKnk/0Gq5TXLl84Ql+TyUmyy5KqRGKPP+iN3jRW/Ol1GeuWTXI6BwF rpgg== X-Gm-Message-State: AOJu0YxnGX1SUrCGY740z46lPFqEe4fJ7xGXmwmQqP5xFD3ykENNHFD+ BfVS4zL75igHoyR+B6+65HY= X-Google-Smtp-Source: AGHT+IEfFoMyU6qaasErQKedZB88qGgbQQBRBK/uofyA1ut8RAouMhrkGIOYVsDOExlL9zHjHB2Viw== X-Received: by 2002:a05:6a20:dd85:b0:13d:c86b:d76d with SMTP id kw5-20020a056a20dd8500b0013dc86bd76dmr9054156pzb.60.1692013897944; Mon, 14 Aug 2023 04:51:37 -0700 (PDT) Received: from CLOUDLIANG-MB2.tencent.com ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id x7-20020a63b207000000b0055386b1415dsm8407848pge.51.2023.08.14.04.51.35 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Aug 2023 04:51:37 -0700 (PDT) From: Jinrong Liang X-Google-Original-From: Jinrong Liang To: Sean Christopherson Cc: Paolo Bonzini , Like Xu , David Matlack , Aaron Lewis , Vitaly Kuznetsov , Wanpeng Li , Jinrong Liang , kvm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 05/11] KVM: selftests: Test consistency of CPUID with num of gp counters Date: Mon, 14 Aug 2023 19:51:02 +0800 Message-Id: <20230814115108.45741-6-cloudliang@tencent.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230814115108.45741-1-cloudliang@tencent.com> References: <20230814115108.45741-1-cloudliang@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Jinrong Liang Add test to check if non-existent counters can be accessed in guest after determining the number of Intel generic performance counters by CPUID. When the num of counters is less than 3, KVM does not emulate #GP if a counter isn't present due to compatibility MSR_P6_PERFCTRx handling. Nor will the KVM emulate more counters than it can support. 
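For illustration only (not part of this patch; the helper name is hypothetical), the probing pattern boils down to the sketch below, assuming the selftests' wrmsr_safe()/rdmsr_safe() accessors that return the exception vector (0 on success):

	static uint64_t probe_counter_msr(uint32_t msr)
	{
		uint64_t val;

		/* Report #GP if either access faults, otherwise the value read back. */
		if (wrmsr_safe(msr, 0xffff) == GP_VECTOR ||
		    rdmsr_safe(msr, &val) == GP_VECTOR)
			return GP_VECTOR;

		/* Reading back 0 after the write means KVM silently dropped it. */
		return val;
	}
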
Co-developed-by: Like Xu Signed-off-by: Like Xu Signed-off-by: Jinrong Liang --- .../kvm/x86_64/pmu_basic_functionality_test.c | 78 +++++++++++++++++++ 1 file changed, 78 insertions(+) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c index daa45aa285bb..b86033e51d5c 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c @@ -16,6 +16,11 @@ /* Guest payload for any performance counter counting */ #define NUM_BRANCHES 10 +static const uint64_t perf_caps[] = { + 0, + PMU_CAP_FW_WRITES, +}; + static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct kvm_vcpu **vcpu, void *guest_code) { @@ -164,6 +169,78 @@ static void intel_test_arch_events(void) } } +static void guest_wr_and_rd_msrs(uint32_t base, uint8_t begin, uint8_t offset) +{ + uint8_t wr_vector, rd_vector; + uint64_t msr_val; + unsigned int i; + + for (i = begin; i < begin + offset; i++) { + wr_vector = wrmsr_safe(base + i, 0xffff); + rd_vector = rdmsr_safe(base + i, &msr_val); + if (wr_vector == GP_VECTOR || rd_vector == GP_VECTOR) + GUEST_SYNC(GP_VECTOR); + else + GUEST_SYNC(msr_val); + } + + GUEST_DONE(); +} + +/* Access the first out-of-range counter register to trigger #GP */ +static void test_oob_gp_counter(uint8_t eax_gp_num, uint8_t offset, + uint64_t perf_cap, uint64_t expected) +{ + uint32_t ctr_msr = MSR_IA32_PERFCTR0; + struct kvm_vcpu *vcpu; + uint64_t msr_val = 0; + struct kvm_vm *vm; + + vm = pmu_vm_create_with_one_vcpu(&vcpu, guest_wr_and_rd_msrs); + + vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_NR_GP_COUNTERS, + eax_gp_num); + + if (perf_cap & PMU_CAP_FW_WRITES) + ctr_msr = MSR_IA32_PMC0; + + vcpu_set_msr(vcpu, MSR_IA32_PERF_CAPABILITIES, perf_cap); + vcpu_args_set(vcpu, 3, ctr_msr, eax_gp_num, offset); + while (run_vcpu(vcpu, &msr_val) != UCALL_DONE) + TEST_ASSERT_EQ(expected, msr_val); + + kvm_vm_free(vm); +} + +static void intel_test_counters_num(void) +{ + uint8_t nr_gp_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS); + unsigned int i; + + TEST_REQUIRE(nr_gp_counters > 2); + + for (i = 0; i < ARRAY_SIZE(perf_caps); i++) { + /* + * For compatibility reasons, KVM does not emulate #GP + * when MSR_P6_PERFCTR[0|1] is not present, but it doesn't + * affect checking the presence of MSR_IA32_PMCx with #GP. + */ + if (perf_caps[i] & PMU_CAP_FW_WRITES) + test_oob_gp_counter(0, 1, perf_caps[i], GP_VECTOR); + + test_oob_gp_counter(2, 1, perf_caps[i], GP_VECTOR); + test_oob_gp_counter(nr_gp_counters, 1, perf_caps[i], GP_VECTOR); + + /* KVM doesn't emulate more counters than it can support. */ + test_oob_gp_counter(nr_gp_counters + 1, 1, perf_caps[i], + GP_VECTOR); + + /* Test that KVM drops writes to MSR_P6_PERFCTR[0|1]. 
*/ + if (!perf_caps[i]) + test_oob_gp_counter(0, 2, perf_caps[i], 0); + } +} + int main(int argc, char *argv[]) { TEST_REQUIRE(get_kvm_param_bool("enable_pmu")); @@ -174,6 +251,7 @@ int main(int argc, char *argv[]) TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_PDCM)); intel_test_arch_events(); + intel_test_counters_num(); return 0; } From patchwork Mon Aug 14 11:51:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jinrong Liang X-Patchwork-Id: 13352745 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DF7E8C04FE1 for ; Mon, 14 Aug 2023 11:52:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231171AbjHNLwI (ORCPT ); Mon, 14 Aug 2023 07:52:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53680 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229928AbjHNLvm (ORCPT ); Mon, 14 Aug 2023 07:51:42 -0400 Received: from mail-pf1-x444.google.com (mail-pf1-x444.google.com [IPv6:2607:f8b0:4864:20::444]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 59B13EA; Mon, 14 Aug 2023 04:51:41 -0700 (PDT) Received: by mail-pf1-x444.google.com with SMTP id d2e1a72fcca58-68842ebdcf7so174169b3a.0; Mon, 14 Aug 2023 04:51:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1692013901; x=1692618701; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=61GfPmVPqdJQe4FEL90WRoo6PIXr9V7sVFEuh/ye+sc=; b=B7hUH7NJhTrBM477jFwBthlaB/D7bYwffvUYtQI3xqNIlGwtkH/pyT95qNZq0t4bYz DbSKjk8lt+QNdG99hQXLG9xHptdi1Amxx14vHOB0GZnVEncCgXoG99a17ZNLkzYJtovv S0wRgW7bCjvkdmKHHB+TGAKU2TgVi7rJCDuyFysPkCSQfA9/q4K0Gr8RYCUujXEJa5uC dJKIEv7eem2kRt/vvk+u4M90xyyxi+2FMD+7f0qO8d8aEac7+SeC8oDyC6ggRYOsyJ9r YgznWQvRUTh7FKcVafARVVlRlivGjyAkOqheGe/47Aq45yg3Dk+R8emC5XNYzdit5qkk lftQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692013901; x=1692618701; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=61GfPmVPqdJQe4FEL90WRoo6PIXr9V7sVFEuh/ye+sc=; b=coNgWgmtwqH+WGYX+uwYPaPGj9nQnygnOoQPNrhS2TpuOvhIHNJZeLrIB9uUaaigrC d3oc+s0hyaBoslgvzXjaWUAAtVcJudbP3tCD1NpdRBejzy2xA632f2nDOTe58Ls8p7V0 s7qXIyU8DFzZwoBkirolDxev3aP9yYQ38zyTWHW6BilxQDSPgBH9Q0IMmGnvP1tenTUU XU/yac959u+FrkHCyNE9oLhPS3fL70PxPzf7DYFJ40BYA6jZTaUrWCFc7mzF1v3yA7Lr cCKiIW61QWEPIV7EPeMQ4Io2HKiWPWnVM8WrsQ/WTUcO7EkxNDMFhVFrEuCwaS7oGFw1 PjdA== X-Gm-Message-State: AOJu0Yz2Fr6QQG4xkTiAr5t3uMY08JCSAa/o+Kyha8Do4ykwWctZ994g tHl0q36qxd/WkWXD739QYAo= X-Google-Smtp-Source: AGHT+IHDMfCfk7rZyRXz+fCSGjAI06UJi8gP/yvncrO8BtZ+NL92ix98Bs7Nw43gYCH00+qyB3SgYQ== X-Received: by 2002:a05:6a20:4325:b0:13f:cd07:2b60 with SMTP id h37-20020a056a20432500b0013fcd072b60mr14322356pzk.1.1692013900876; Mon, 14 Aug 2023 04:51:40 -0700 (PDT) Received: from CLOUDLIANG-MB2.tencent.com ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id x7-20020a63b207000000b0055386b1415dsm8407848pge.51.2023.08.14.04.51.38 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Aug 2023 04:51:40 -0700 (PDT) From: Jinrong Liang X-Google-Original-From: Jinrong Liang To: Sean Christopherson Cc: Paolo Bonzini , 
Like Xu , David Matlack , Aaron Lewis , Vitaly Kuznetsov , Wanpeng Li , Jinrong Liang , kvm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 06/11] KVM: selftests: Test consistency of CPUID with num of fixed counters Date: Mon, 14 Aug 2023 19:51:03 +0800 Message-Id: <20230814115108.45741-7-cloudliang@tencent.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230814115108.45741-1-cloudliang@tencent.com> References: <20230814115108.45741-1-cloudliang@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Jinrong Liang Add test to check if non-existent counters can be accessed in guest after determining the number of Intel generic performance counters by CPUID. Per SDM, fixed-function performance counter 'i' is supported if ECX[i] || (EDX[4:0] > i). KVM doesn't emulate more counters than it can support. Co-developed-by: Like Xu Signed-off-by: Like Xu Signed-off-by: Jinrong Liang --- .../kvm/x86_64/pmu_basic_functionality_test.c | 41 +++++++++++++++++++ 1 file changed, 41 insertions(+) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c index b86033e51d5c..db1c1230700a 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c @@ -212,10 +212,43 @@ static void test_oob_gp_counter(uint8_t eax_gp_num, uint8_t offset, kvm_vm_free(vm); } +static void intel_test_oob_fixed_ctr(uint8_t edx_fixed_num, + uint32_t fixed_bitmask, uint64_t expected) +{ + uint8_t idx = edx_fixed_num; + struct kvm_vcpu *vcpu; + uint64_t msr_val = 0; + struct kvm_vm *vm; + bool visible; + + vm = pmu_vm_create_with_one_vcpu(&vcpu, guest_wr_and_rd_msrs); + + vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_FIXED_COUNTERS_BITMASK, + fixed_bitmask); + vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_NR_FIXED_COUNTERS, + edx_fixed_num); + + visible = fixed_counter_is_supported(vcpu, idx); + + /* KVM doesn't emulate more fixed counters than it can support. 
*/ + if (idx >= kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS)) + visible = false; + + vcpu_args_set(vcpu, 3, MSR_CORE_PERF_FIXED_CTR0, idx, 1); + if (visible) { + while (run_vcpu(vcpu, &msr_val) != UCALL_DONE) + TEST_ASSERT_EQ(expected, msr_val); + } + + kvm_vm_free(vm); +} + static void intel_test_counters_num(void) { + uint8_t nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS); uint8_t nr_gp_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS); unsigned int i; + uint32_t ecx; TEST_REQUIRE(nr_gp_counters > 2); @@ -239,6 +272,14 @@ static void intel_test_counters_num(void) if (!perf_caps[i]) test_oob_gp_counter(0, 2, perf_caps[i], 0); } + + for (ecx = 0; + ecx <= kvm_cpu_property(X86_PROPERTY_PMU_FIXED_COUNTERS_BITMASK) + 1; + ecx++) { + intel_test_oob_fixed_ctr(0, ecx, GP_VECTOR); + intel_test_oob_fixed_ctr(nr_fixed_counters, ecx, GP_VECTOR); + intel_test_oob_fixed_ctr(nr_fixed_counters + 1, ecx, GP_VECTOR); + } } int main(int argc, char *argv[]) From patchwork Mon Aug 14 11:51:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jinrong Liang X-Patchwork-Id: 13352747 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 18899C05052 for ; Mon, 14 Aug 2023 11:52:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231358AbjHNLwI (ORCPT ); Mon, 14 Aug 2023 07:52:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53700 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230018AbjHNLvp (ORCPT ); Mon, 14 Aug 2023 07:51:45 -0400 Received: from mail-pf1-x441.google.com (mail-pf1-x441.google.com [IPv6:2607:f8b0:4864:20::441]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6B482EA; Mon, 14 Aug 2023 04:51:44 -0700 (PDT) Received: by mail-pf1-x441.google.com with SMTP id d2e1a72fcca58-6874d1c8610so2626888b3a.0; Mon, 14 Aug 2023 04:51:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1692013904; x=1692618704; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ytuuOxw54VYORwOpx8ggpahMngdV8+dWvhoVEndQ8GY=; b=A34XgtRpNzMJjlY8SRN9i3HaXizQ6Ylqk5W/Fm+EZSIM4Dyn1SH36ou8ADRzQ84Z6c uFI4RvyQ10qkPjMkAok679CPu4t8gScYKOoy5EktHxMrD3Gm6/Lgq832bkcW0PbPMyFd HBuhRwVF1etCbxbG4wn0Xu1b9paEIOlAI6z6qsNROGw75pIu7K/iuUOny2sPXuLzqCVh 2l6yd6KOJC7LGd9yhpI5LN1uHPvYaciPhCIytcli6po+zUvsCc2HAg/Oc+o+FqHa9x+p 8QZ86hgZpIh2x56R31U4FIcpK05qtOez8EV0M7FKuGGnvGWLKeCqNkWjRkV2ZHnnVyoy 7/KQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692013904; x=1692618704; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ytuuOxw54VYORwOpx8ggpahMngdV8+dWvhoVEndQ8GY=; b=URjbWAQJ2JvxvVmvK23w336OSrl+W3JtfnH3JQ+JeGGW29jmmyybIr29epP9iBcecZ 7vo02mo6uZhfL2jjwPaabeeBLxN+fDbXqEbsZ6CS2P23ahjSCkzbwslx/91YTT2pofeZ UxesxWP7bp6w2kghGzJd0KuYzPi4actZY9mFr63+xQz49ukM4BGgZh0RGir/3mXtckTe Hyuac7auRllVI+FStsstAlxs8Fq8EveBPitPpDTqLdyybbY8C+vzjp3UzoTmMQ1z7br/ tABiR5eC0zsC4UmmjD8BoGDsq87AOJBUMx8fpjZoCLYuKPARFocR4vWTrXai+JMWNn9N vmCA== X-Gm-Message-State: 
AOJu0YyY4mnhFSRG5nubEHoE7at9xs/eEGSeHe/WEJUq3O8yUWVtq3o/ erlKkE6vSslryBnseRn8niw= X-Google-Smtp-Source: AGHT+IHNkoOPgqMtbwLHAeN+iXn+o8Rd9gTlbv5k3D/2B5wF9pey6LBmaEcPHS7jVHOkf+4TojhoqQ== X-Received: by 2002:a05:6a20:3d95:b0:137:bc72:9c08 with SMTP id s21-20020a056a203d9500b00137bc729c08mr8718771pzi.16.1692013903883; Mon, 14 Aug 2023 04:51:43 -0700 (PDT) Received: from CLOUDLIANG-MB2.tencent.com ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id x7-20020a63b207000000b0055386b1415dsm8407848pge.51.2023.08.14.04.51.41 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Aug 2023 04:51:43 -0700 (PDT) From: Jinrong Liang X-Google-Original-From: Jinrong Liang To: Sean Christopherson Cc: Paolo Bonzini , Like Xu , David Matlack , Aaron Lewis , Vitaly Kuznetsov , Wanpeng Li , Jinrong Liang , kvm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 07/11] KVM: selftests: Test Intel supported fixed counters bit mask Date: Mon, 14 Aug 2023 19:51:04 +0800 Message-Id: <20230814115108.45741-8-cloudliang@tencent.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230814115108.45741-1-cloudliang@tencent.com> References: <20230814115108.45741-1-cloudliang@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Jinrong Liang Add a test to check that fixed counters enabled via guest CPUID.0xA.ECX (instead of EDX[04:00]) work as normal as usual. Co-developed-by: Like Xu Signed-off-by: Like Xu Signed-off-by: Jinrong Liang --- .../kvm/x86_64/pmu_basic_functionality_test.c | 60 +++++++++++++++++++ 1 file changed, 60 insertions(+) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c index db1c1230700a..3bbf3bd2846b 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c @@ -282,6 +282,65 @@ static void intel_test_counters_num(void) } } +static void intel_guest_run_fixed_counters(void) +{ + uint64_t supported_bitmask = this_cpu_property(X86_PROPERTY_PMU_FIXED_COUNTERS_BITMASK); + uint32_t nr_fixed_counter = this_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS); + uint64_t msr_val; + unsigned int i; + bool expected; + + for (i = 0; i < nr_fixed_counter; i++) { + expected = supported_bitmask & BIT_ULL(i) || i < nr_fixed_counter; + + wrmsr_safe(MSR_CORE_PERF_FIXED_CTR0 + i, 0); + wrmsr_safe(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * i)); + wrmsr_safe(MSR_CORE_PERF_GLOBAL_CTRL, BIT_ULL(INTEL_PMC_IDX_FIXED + i)); + __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES})); + wrmsr_safe(MSR_CORE_PERF_GLOBAL_CTRL, 0); + rdmsr_safe(MSR_CORE_PERF_FIXED_CTR0 + i, &msr_val); + + GUEST_ASSERT(expected == !!msr_val); + } + + GUEST_DONE(); +} + +static void test_fixed_counters_setup(struct kvm_vcpu *vcpu, + uint32_t fixed_bitmask, + uint8_t edx_fixed_num) +{ + int ret; + + vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_FIXED_COUNTERS_BITMASK, + fixed_bitmask); + vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_NR_FIXED_COUNTERS, + edx_fixed_num); + + do { + ret = run_vcpu(vcpu, NULL); + } while (ret != UCALL_DONE); +} + +static void intel_test_fixed_counters(void) +{ + uint8_t nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS); + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + uint32_t ecx; + uint8_t edx; + + for (edx = 0; edx <= nr_fixed_counters; edx++) { + /* KVM doesn't emulate more fixed counters than it can support. 
*/ + for (ecx = 0; ecx <= (BIT_ULL(nr_fixed_counters) - 1); ecx++) { + vm = pmu_vm_create_with_one_vcpu(&vcpu, + intel_guest_run_fixed_counters); + test_fixed_counters_setup(vcpu, ecx, edx); + kvm_vm_free(vm); + } + } +} + int main(int argc, char *argv[]) { TEST_REQUIRE(get_kvm_param_bool("enable_pmu")); @@ -293,6 +352,7 @@ int main(int argc, char *argv[]) intel_test_arch_events(); intel_test_counters_num(); + intel_test_fixed_counters(); return 0; } From patchwork Mon Aug 14 11:51:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jinrong Liang X-Patchwork-Id: 13352748 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3CF1BC07E8A for ; Mon, 14 Aug 2023 11:52:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229913AbjHNLwK (ORCPT ); Mon, 14 Aug 2023 07:52:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53736 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230021AbjHNLvs (ORCPT ); Mon, 14 Aug 2023 07:51:48 -0400 Received: from mail-pf1-x444.google.com (mail-pf1-x444.google.com [IPv6:2607:f8b0:4864:20::444]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 61119EA; Mon, 14 Aug 2023 04:51:47 -0700 (PDT) Received: by mail-pf1-x444.google.com with SMTP id d2e1a72fcca58-68783b2e40bso2897484b3a.3; Mon, 14 Aug 2023 04:51:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1692013907; x=1692618707; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=6QSg0DwA3NF38TCmKZOiHhuUw3mZXP23ciDh8DZpCwk=; b=dgcTCnjiK9l2MrqLnJReCG6zW9hUewSOsmYzj85gLMoYCtmsAcZB3TF+IoqKOWkgIP KKSOXj1TK3a+6t98a7loh0YwmQXNi3SDtufQ1rUo+b0tKKgK981JQ6E2ErGj9nUoSmtj 7UOKETFgWcUUeoknLbA37k5JLuLrf7x9BBOyLO0tFfvNkduEUxcJRDym2OIztUkzAKDb HOCL9noAXeaQrcRadCdxfj7RiXaFk7339g7a4Kj9bNZedwTghItbJYN4CYVThRqSrBxu mJohcmGFdof4LZ/HVTECoZFfIXTPofGSabl0ComnKzmMslxc332rkCxpxDy5CJiythn7 VO7w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692013907; x=1692618707; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=6QSg0DwA3NF38TCmKZOiHhuUw3mZXP23ciDh8DZpCwk=; b=kU5Zig8iGuGOXwWbaJqlgGlXryyCCxA/ghMPdoZoCMry94zazGvHsci+afID7PU3W1 cdfoWDeK2arYn0tQTPY9W1Bkr8+RVem+0CfUdOujMUeGK9IUxvEZ4oGQr8bKXWj6n70L KbOjU9mbpEdiTUMwMcfJUYhP/dTE5uqB6SWl16F/DQKdvKcWgc70lFqf7BB9PT7Ux3Cx H2JhpJv+7dVgL11L8BEpjMUeONIovtZgpws8ifPtN1PkJMl45N/X5J5SVNoWh8E+GWKk sj7C+1vMZdL7VIhIG67MUh8IIPwGQh+UJnrF7l65QiMI9PLCaaUUcAWsSuGxa7Pwwr3C RWWg== X-Gm-Message-State: AOJu0Yx/k5h0b+j99pw0MwC0L4j56Df5igQ8rg77ulXSWGpOufiOaGA7 nppLIKdY4VVYTeU44mTrM0c= X-Google-Smtp-Source: AGHT+IGXjiyPa/JAUvgPxAzmjGukAxQUa0qpeqQw6b2n4Y+XXkQWSvPXmRBa1zSO9uprefFKkFl5gg== X-Received: by 2002:a05:6a21:6d88:b0:12f:90d8:9755 with SMTP id wl8-20020a056a216d8800b0012f90d89755mr10550114pzb.15.1692013906801; Mon, 14 Aug 2023 04:51:46 -0700 (PDT) Received: from CLOUDLIANG-MB2.tencent.com ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id x7-20020a63b207000000b0055386b1415dsm8407848pge.51.2023.08.14.04.51.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Aug 
2023 04:51:46 -0700 (PDT) From: Jinrong Liang X-Google-Original-From: Jinrong Liang To: Sean Christopherson Cc: Paolo Bonzini , Like Xu , David Matlack , Aaron Lewis , Vitaly Kuznetsov , Wanpeng Li , Jinrong Liang , kvm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 08/11] KVM: selftests: Test consistency of PMU MSRs with Intel PMU version Date: Mon, 14 Aug 2023 19:51:05 +0800 Message-Id: <20230814115108.45741-9-cloudliang@tencent.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230814115108.45741-1-cloudliang@tencent.com> References: <20230814115108.45741-1-cloudliang@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Jinrong Liang KVM user space may control the Intel guest PMU version number via CPUID.0AH:EAX[07:00]. A test is added to check whether a typical PMU register that is not available at the current version number is leaked to the guest. Co-developed-by: Like Xu Signed-off-by: Like Xu Signed-off-by: Jinrong Liang --- .../kvm/x86_64/pmu_basic_functionality_test.c | 67 +++++++++++++++++++ 1 file changed, 67 insertions(+) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c index 3bbf3bd2846b..70adfad45010 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c @@ -16,6 +16,12 @@ /* Guest payload for any performance counter counting */ #define NUM_BRANCHES 10 +/* + * KVM implements the first two non-existent counters (MSR_P6_PERFCTRx) + * via kvm_pr_unimpl_wrmsr() instead of #GP. + */ +#define MSR_INTEL_ARCH_PMU_GPCTR (MSR_IA32_PERFCTR0 + 2) + static const uint64_t perf_caps[] = { 0, PMU_CAP_FW_WRITES, @@ -341,6 +347,66 @@ static void intel_test_fixed_counters(void) } } +static void intel_guest_check_pmu_version(uint8_t version) +{ + switch (version) { + case 0: + GUEST_SYNC(wrmsr_safe(MSR_INTEL_ARCH_PMU_GPCTR, 0xffffull)); + case 1: + GUEST_SYNC(wrmsr_safe(MSR_CORE_PERF_GLOBAL_CTRL, 0x1ull)); + case 2: + /* + * AnyThread Bit is only supported in version 3 + * + * The strange thing is that when version=0, writing ANY-Any + * Thread bit (bit 21) in MSR_P6_EVNTSEL0 and MSR_P6_EVNTSEL1 + * will not generate #GP. While writing ANY-Any Thread bit + * (bit 21) in MSR_P6_EVNTSEL0+x (MAX_GP_CTR_NUM > x > 2) to + * ANY-Any Thread bit (bit 21) will generate #GP.
+ */ + if (version == 0) + break; + + GUEST_SYNC(wrmsr_safe(MSR_P6_EVNTSEL0, + ARCH_PERFMON_EVENTSEL_ANY)); + break; + default: + /* KVM currently supports up to pmu version 2 */ + GUEST_SYNC(GP_VECTOR); + } + + GUEST_DONE(); +} + +static void test_pmu_version_setup(struct kvm_vcpu *vcpu, uint8_t version, + uint64_t expected) +{ + uint64_t msr_val = 0; + + vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_VERSION, version); + + vcpu_args_set(vcpu, 1, version); + while (run_vcpu(vcpu, &msr_val) != UCALL_DONE) + TEST_ASSERT_EQ(expected, msr_val); +} + +static void intel_test_pmu_version(void) +{ + uint8_t unsupported_version = kvm_cpu_property(X86_PROPERTY_PMU_VERSION) + 1; + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + uint8_t version; + + TEST_REQUIRE(kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS) > 2); + + for (version = 0; version <= unsupported_version; version++) { + vm = pmu_vm_create_with_one_vcpu(&vcpu, + intel_guest_check_pmu_version); + test_pmu_version_setup(vcpu, version, GP_VECTOR); + kvm_vm_free(vm); + } +} + int main(int argc, char *argv[]) { TEST_REQUIRE(get_kvm_param_bool("enable_pmu")); @@ -353,6 +419,7 @@ int main(int argc, char *argv[]) intel_test_arch_events(); intel_test_counters_num(); intel_test_fixed_counters(); + intel_test_pmu_version(); return 0; } From patchwork Mon Aug 14 11:51:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jinrong Liang X-Patchwork-Id: 13352749 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 70866C07E8B for ; Mon, 14 Aug 2023 11:52:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231547AbjHNLwL (ORCPT ); Mon, 14 Aug 2023 07:52:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44718 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229766AbjHNLvu (ORCPT ); Mon, 14 Aug 2023 07:51:50 -0400 Received: from mail-pf1-x444.google.com (mail-pf1-x444.google.com [IPv6:2607:f8b0:4864:20::444]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3776CEA; Mon, 14 Aug 2023 04:51:50 -0700 (PDT) Received: by mail-pf1-x444.google.com with SMTP id d2e1a72fcca58-688142a392eso3041484b3a.3; Mon, 14 Aug 2023 04:51:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1692013910; x=1692618710; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=0FePrHfi10+XZY9oU7qWIyV1EnP7SrcBy1onsUp/NmM=; b=Pn1y9twCjSeSf0Kll6xM5OeFo08gfC4oSPHmQfUrlXRok8bgXwIDh+LlK3KBFehr0W Aix6HOn7EZfxDgPUHZU+JiS5ANT3i1L5wo1ujDar1DLs7FLL1p6abjGgsk/VITvM+aRQ 9d6p3/wJ0gY98zIYr0nMR4ih8ofJ0DEruw+9ocNYE2CaD3vIlGk6EZmWQBGz+wVYR0CQ uE6hea0y46s72q4fqruv7agkMu2pzBvTiO/H3iR6JNxGTx24lXpC5lqoB57OFwgFAnm0 wBTsXWVrRvJ6RWKp6S2Dvc+nLlEn39BsPTfhSxmQrV4WAHRL8V26ExSsj1gzTnM8gJMG 69xA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692013910; x=1692618710; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=0FePrHfi10+XZY9oU7qWIyV1EnP7SrcBy1onsUp/NmM=; b=Bg2EzabFSHXFH/OIJ5KIF7TTMu0vCj9qy0Yrq1xDWnLWxfeoEzRyGSTvgzKxQbonB9 
fdf0uB2jkp8RZTJzNsuyGTJudJbJ9no/W3oQDuRgocAV3OrR2LODX+QwwiivDkJGU9Vc lvgluYBGShVAl/zscMHKbyk/Icm+5wVXXv2kGN556rkJtQPMKVDUarKWVWvLP/VQ29MC F3Rcchjx7gt7+pCtRD2qgh7LdjCbYjv1JDW1OLnh20GaV8h9toM5oa+PZ5VMU4z4RSKY ZJoPUrgsrNpwlPfaNq7XbPW5UpBrqxzQLJZepY0F8Jy7HD4A3r8bER2bixN0nfhtQwdQ 85iQ== X-Gm-Message-State: AOJu0YwCZxTz1Djqh2wbYQhbRzezdcIka2Ls5MNTLgQd+0+smzV0x6Du DajiEm/SJ6xxtAlQOHfw9Lc= X-Google-Smtp-Source: AGHT+IFfOlvIMkt0oQNJwZoX1Q4qkEwKDjEtWG1S0CellW4gAVOJo4wWFQStI77JBC+D73IV0Of/Fg== X-Received: by 2002:a05:6a00:1ace:b0:681:50fd:2b98 with SMTP id f14-20020a056a001ace00b0068150fd2b98mr13307238pfv.31.1692013909718; Mon, 14 Aug 2023 04:51:49 -0700 (PDT) Received: from CLOUDLIANG-MB2.tencent.com ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id x7-20020a63b207000000b0055386b1415dsm8407848pge.51.2023.08.14.04.51.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Aug 2023 04:51:49 -0700 (PDT) From: Jinrong Liang X-Google-Original-From: Jinrong Liang To: Sean Christopherson Cc: Paolo Bonzini , Like Xu , David Matlack , Aaron Lewis , Vitaly Kuznetsov , Wanpeng Li , Jinrong Liang , kvm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 09/11] KVM: selftests: Add x86 feature and properties for AMD PMU in processor.h Date: Mon, 14 Aug 2023 19:51:06 +0800 Message-Id: <20230814115108.45741-10-cloudliang@tencent.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230814115108.45741-1-cloudliang@tencent.com> References: <20230814115108.45741-1-cloudliang@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Jinrong Liang Add x86 feature and properties for AMD PMU so that tests don't have to manually retrieve the correct CPUID leaf+register, and so that the resulting code is self-documenting. Signed-off-by: Jinrong Liang --- tools/testing/selftests/kvm/include/x86_64/processor.h | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h index 6b146e1c6736..07b980b8bec2 100644 --- a/tools/testing/selftests/kvm/include/x86_64/processor.h +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h @@ -167,6 +167,7 @@ struct kvm_x86_cpu_feature { */ #define X86_FEATURE_SVM KVM_X86_CPU_FEATURE(0x80000001, 0, ECX, 2) #define X86_FEATURE_NX KVM_X86_CPU_FEATURE(0x80000001, 0, EDX, 20) +#define X86_FEATURE_AMD_PMU_EXT_CORE KVM_X86_CPU_FEATURE(0x80000001, 0, ECX, 23) #define X86_FEATURE_GBPAGES KVM_X86_CPU_FEATURE(0x80000001, 0, EDX, 26) #define X86_FEATURE_RDTSCP KVM_X86_CPU_FEATURE(0x80000001, 0, EDX, 27) #define X86_FEATURE_LM KVM_X86_CPU_FEATURE(0x80000001, 0, EDX, 29) @@ -182,6 +183,9 @@ struct kvm_x86_cpu_feature { #define X86_FEATURE_VGIF KVM_X86_CPU_FEATURE(0x8000000A, 0, EDX, 16) #define X86_FEATURE_SEV KVM_X86_CPU_FEATURE(0x8000001F, 0, EAX, 1) #define X86_FEATURE_SEV_ES KVM_X86_CPU_FEATURE(0x8000001F, 0, EAX, 3) +#define X86_FEATURE_AMD_PERFMON_V2 KVM_X86_CPU_FEATURE(0x80000022, 0, EAX, 0) +#define X86_FEATURE_AMD_LBR_STACK KVM_X86_CPU_FEATURE(0x80000022, 0, EAX, 1) +#define X86_FEATURE_AMD_LBR_PMC_FREEZE KVM_X86_CPU_FEATURE(0x80000022, 0, EAX, 2) /* * KVM defined paravirt features. 
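For reference, patch 11 in this series consumes these new macros from guest code roughly as follows (a sketch only: amd_guest_nr_gp_counters() is a hypothetical helper, and AMD64_NR_COUNTERS is the legacy-counter constant the test already uses):

static uint8_t amd_guest_nr_gp_counters(void)
{
	/* PerfMonV2 enumerates the core counter count in CPUID 0x80000022.EBX[3:0]. */
	if (this_cpu_has(X86_FEATURE_AMD_PMU_EXT_CORE) &&
	    this_cpu_has(X86_FEATURE_AMD_PERFMON_V2))
		return this_cpu_property(X86_PROPERTY_AMD_PMU_NR_CORE_COUNTERS);

	/* Otherwise fall back to the four legacy (K7-style) counters. */
	return AMD64_NR_COUNTERS;
}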
@@ -267,6 +271,9 @@ struct kvm_x86_cpu_property { #define X86_PROPERTY_MAX_VIRT_ADDR KVM_X86_CPU_PROPERTY(0x80000008, 0, EAX, 8, 15) #define X86_PROPERTY_PHYS_ADDR_REDUCTION KVM_X86_CPU_PROPERTY(0x8000001F, 0, EBX, 6, 11) +#define X86_PROPERTY_AMD_PMU_NR_CORE_COUNTERS KVM_X86_CPU_PROPERTY(0x80000022, 0, EBX, 0, 3) +#define X86_PROPERTY_AMD_PMU_LBR_STACK_SIZE KVM_X86_CPU_PROPERTY(0x80000022, 0, EBX, 4, 9) + #define X86_PROPERTY_MAX_CENTAUR_LEAF KVM_X86_CPU_PROPERTY(0xC0000000, 0, EAX, 0, 31) /* From patchwork Mon Aug 14 11:51:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jinrong Liang X-Patchwork-Id: 13352751 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 519FAC0729B for ; Mon, 14 Aug 2023 11:52:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231641AbjHNLwM (ORCPT ); Mon, 14 Aug 2023 07:52:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44728 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229480AbjHNLvy (ORCPT ); Mon, 14 Aug 2023 07:51:54 -0400 Received: from mail-oi1-x241.google.com (mail-oi1-x241.google.com [IPv6:2607:f8b0:4864:20::241]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 48C45EA; Mon, 14 Aug 2023 04:51:53 -0700 (PDT) Received: by mail-oi1-x241.google.com with SMTP id 5614622812f47-3a7e68f4214so2310527b6e.1; Mon, 14 Aug 2023 04:51:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1692013912; x=1692618712; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=JAnTwD22KZ81y7xJz7a0vclcrgGfIXcxvg6CFL8Xcrc=; b=OW1tTsUjYIgnFTNASqTQDFsgvjuoaSG2rvlkNdz2D8EOyZ67O442FuAupOitQo4IiX ajBK8WT0PsSkYSvZklnsp4Ak2tEdrYx6jGrqWhusqLxgxf0+VfWBNISmyuFEEywssiLq hnv4Ddz5SLUElA1yNX/N5lzgLYufJNHwm5FoNv8HwkuoPp/9wYXEGisfLgJZUU2rQsyL AxgxpvoEMojMSDVew0Wr1kgtcNWhKIsm5GYHHDrSirJIVDvpxL8oRsbVdAa3RdQDU9ip 5HYJz5nIP48gDa3JZpb//RvLiQ1uzDUwbh8AhXAZLHRDG9E9tw7aaT2sRXoOe8NSekb4 pdAw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692013912; x=1692618712; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=JAnTwD22KZ81y7xJz7a0vclcrgGfIXcxvg6CFL8Xcrc=; b=jDhpBRVYk4YrgwcbJT+EVKta70jb3UrBs3VLpQe1+oIxih9N4ckLG+AV9zSwSy8MLa sQCbGbFO+EF/uo7UFiu8o7q/f6y0RFRTC2nCDYKsPYIxgoTm0B+aB+N+lBiCrDQXFZPk 4BCD9eUKx1jEPeF/NyDtzAtE+zIQySiNy5BHJvjbU1EQ+jShGJT3qvjo0DY39Fk2Yc4o Yd2BNhYOp0EObXTXKR7gKx7Gs1iDkld8KMpUDyLu1E+bmW4sYDq/MR29w+BAp+py5IdN YRHDnP5Nf8qFvOM82N88sTfF8Q+Lm06XLO0Ba9ClnFTygWj+S1pgpii9+ZRy2/4AH72B bnhg== X-Gm-Message-State: AOJu0YyttzKugaq0o29P/qUJtIbJ5L+bjPa2CmvFhOIHZ//Vk/3jiVLo FRMoc5IHyuN1s6sgfYb3X5o= X-Google-Smtp-Source: AGHT+IFaui22yNQmJ36TxGx1GYH5YcA89QCYxtfaD2QJjR15nNI3dc3oluMwDtwi6gKJzFBoMMwLjA== X-Received: by 2002:a05:6808:2124:b0:3a7:82e8:8fd1 with SMTP id r36-20020a056808212400b003a782e88fd1mr12131771oiw.20.1692013912628; Mon, 14 Aug 2023 04:51:52 -0700 (PDT) Received: from CLOUDLIANG-MB2.tencent.com ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id x7-20020a63b207000000b0055386b1415dsm8407848pge.51.2023.08.14.04.51.49 (version=TLS1_3 
cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Aug 2023 04:51:52 -0700 (PDT) From: Jinrong Liang X-Google-Original-From: Jinrong Liang To: Sean Christopherson Cc: Paolo Bonzini , Like Xu , David Matlack , Aaron Lewis , Vitaly Kuznetsov , Wanpeng Li , Jinrong Liang , kvm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 10/11] KVM: selftests: Test AMD PMU events on legacy four performance counters Date: Mon, 14 Aug 2023 19:51:07 +0800 Message-Id: <20230814115108.45741-11-cloudliang@tencent.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230814115108.45741-1-cloudliang@tencent.com> References: <20230814115108.45741-1-cloudliang@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Jinrong Liang Add tests to check AMD PMU legacy four performance counters. Signed-off-by: Jinrong Liang --- .../kvm/x86_64/pmu_basic_functionality_test.c | 72 ++++++++++++++----- 1 file changed, 54 insertions(+), 18 deletions(-) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c index 70adfad45010..cb2a7ad5c504 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c @@ -58,20 +58,29 @@ static uint64_t run_vcpu(struct kvm_vcpu *vcpu, uint64_t *ucall_arg) static void guest_measure_loop(uint64_t event_code) { - uint32_t nr_fixed_counter = this_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS); - uint32_t nr_gp_counters = this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS); - uint32_t pmu_version = this_cpu_property(X86_PROPERTY_PMU_VERSION); + uint8_t nr_gp_counters, pmu_version = 1; + uint64_t event_sel_msr; uint32_t counter_msr; unsigned int i; - if (rdmsr(MSR_IA32_PERF_CAPABILITIES) & PMU_CAP_FW_WRITES) - counter_msr = MSR_IA32_PMC0; - else - counter_msr = MSR_IA32_PERFCTR0; + if (host_cpu_is_intel) { + nr_gp_counters = this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS); + pmu_version = this_cpu_property(X86_PROPERTY_PMU_VERSION); + event_sel_msr = MSR_P6_EVNTSEL0; + + if (rdmsr(MSR_IA32_PERF_CAPABILITIES) & PMU_CAP_FW_WRITES) + counter_msr = MSR_IA32_PMC0; + else + counter_msr = MSR_IA32_PERFCTR0; + } else { + nr_gp_counters = AMD64_NR_COUNTERS; + event_sel_msr = MSR_K7_EVNTSEL0; + counter_msr = MSR_K7_PERFCTR0; + } for (i = 0; i < nr_gp_counters; i++) { wrmsr(counter_msr + i, 0); - wrmsr(MSR_P6_EVNTSEL0 + i, ARCH_PERFMON_EVENTSEL_OS | + wrmsr(event_sel_msr + i, ARCH_PERFMON_EVENTSEL_OS | ARCH_PERFMON_EVENTSEL_ENABLE | event_code); if (pmu_version > 1) { @@ -85,7 +94,12 @@ static void guest_measure_loop(uint64_t event_code) } } - if (pmu_version < 2 || nr_fixed_counter < 1) + if (host_cpu_is_amd || pmu_version < 2) + goto done; + + uint32_t nr_fixed_counter = this_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS); + + if (nr_fixed_counter < 1) goto done; if (event_code == intel_arch_events[INTEL_ARCH_INSTRUCTIONS_RETIRED]) @@ -407,19 +421,41 @@ static void intel_test_pmu_version(void) } } +static void amd_test_pmu_counters(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + unsigned int i; + uint64_t msr_val; + + for (i = 0; i < ARRAY_SIZE(amd_arch_events); i++) { + vm = pmu_vm_create_with_one_vcpu(&vcpu, guest_measure_loop); + vcpu_args_set(vcpu, 1, amd_arch_events[i]); + while (run_vcpu(vcpu, &msr_val) != UCALL_DONE) + TEST_ASSERT(msr_val, "Unexpected AMD counter values"); + + kvm_vm_free(vm); + } +} + int main(int argc, char *argv[]) { 
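/* main() dispatches on the host CPU vendor: on Intel hosts KVM must report a non-zero PMU version and PDCM before the Intel sub-tests run, AMD hosts run amd_test_pmu_counters(), and any other vendor fails the test. */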
TEST_REQUIRE(get_kvm_param_bool("enable_pmu")); - TEST_REQUIRE(host_cpu_is_intel); - TEST_REQUIRE(kvm_cpu_has_p(X86_PROPERTY_PMU_VERSION)); - TEST_REQUIRE(kvm_cpu_property(X86_PROPERTY_PMU_VERSION) > 0); - TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_PDCM)); - - intel_test_arch_events(); - intel_test_counters_num(); - intel_test_fixed_counters(); - intel_test_pmu_version(); + if (host_cpu_is_intel) { + TEST_REQUIRE(kvm_cpu_has_p(X86_PROPERTY_PMU_VERSION)); + TEST_REQUIRE(kvm_cpu_property(X86_PROPERTY_PMU_VERSION) > 0); + TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_PDCM)); + + intel_test_arch_events(); + intel_test_counters_num(); + intel_test_fixed_counters(); + intel_test_pmu_version(); + } else if (host_cpu_is_amd) { + amd_test_pmu_counters(); + } else { + TEST_FAIL("Unknown CPU vendor"); + } return 0; } From patchwork Mon Aug 14 11:51:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jinrong Liang X-Patchwork-Id: 13352750 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 80F03C07E8E for ; Mon, 14 Aug 2023 11:52:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231718AbjHNLwM (ORCPT ); Mon, 14 Aug 2023 07:52:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44736 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230062AbjHNLv5 (ORCPT ); Mon, 14 Aug 2023 07:51:57 -0400 Received: from mail-io1-xd43.google.com (mail-io1-xd43.google.com [IPv6:2607:f8b0:4864:20::d43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3660CEA; Mon, 14 Aug 2023 04:51:56 -0700 (PDT) Received: by mail-io1-xd43.google.com with SMTP id ca18e2360f4ac-77acb04309dso203621439f.2; Mon, 14 Aug 2023 04:51:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1692013915; x=1692618715; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=20tjvbePUAfEV6JgW2FKzFkjTz/aXRRiD/7BadIt25E=; b=DoCvn378mtZOPxRQN3nFkhu40qYpfHAUGuQG0BNtR2t5Bf4w2XrOuoFjflOGVN3B2+ 8gN+ojhWJjUdCHs0Ax4Xtqfk3IPdx4xTC+r6VJg+1xBcTKXPtL2e9XF83VHHqjgS3ML6 dXolPRCenT2oukRMBWwX/mDHKe+YyH3Cl/0T/FMhaOCh88rJifTJZyN2hbDC6TlFRraP RygcspI0k3u2yNKAoF6sqFYjMsxAx+JV79nPwf3Sq8+Ge/bowkD3A1xgqlUZEzI7vpyW 649oEDRbb9PdkRKjTbvpFBPj5Soqc/EgnA29ooQaKi6bKA/hVsPd9/QobNR0hhUeLHlh GACg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692013915; x=1692618715; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=20tjvbePUAfEV6JgW2FKzFkjTz/aXRRiD/7BadIt25E=; b=NF21ne4j+tHex/vDIU5d9BbkDoCxlGif4Ctkwoo4uqRLbeBpVrWjSSTwE0/uQYSvbi t1LtL6pmycY9eaQ0nGM244IAkzL8a16O9drn42eFlHce19ydi+9dpp59zxGfa4GDx5En LxcI81bsLCvO7eMI7xYJSnkzb5Gc3dVjrcQsTBgIuxPFMFVret7aFl1BdM2CxbSkfR+I o7qQNOnKBFXI6Ci+TluqE6LfKxzorddkAY+QjjbIwVBHcQIw2S10WwCxQL1bSiObT4Au pswIVA+07JF84/9qjGAWEzjF+i/v0QFIeF4b7GuqYA1eKBrYaSFdm8hvxoPNhgAnfHdg oARg== X-Gm-Message-State: AOJu0YzZu0zQTkLj8a+cZqnd2oVw77nQwxJ6bdnlfZkHHppVlpQGPeio KKtI/3zFmhO8ZioNg78LJNI= X-Google-Smtp-Source: AGHT+IF+ErgJvZY8p1Nd9FZsjgYqCnBE3X9FfVE1ZHWA+Zxg3CWlhnJPydkghkGgTW1nhhu/ZLoaFg== X-Received: by 2002:a05:6e02:12e9:b0:346:5bd7:4a17 with 
SMTP id l9-20020a056e0212e900b003465bd74a17mr16644925iln.17.1692013915595; Mon, 14 Aug 2023 04:51:55 -0700 (PDT) Received: from CLOUDLIANG-MB2.tencent.com ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id x7-20020a63b207000000b0055386b1415dsm8407848pge.51.2023.08.14.04.51.52 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Aug 2023 04:51:55 -0700 (PDT) From: Jinrong Liang X-Google-Original-From: Jinrong Liang To: Sean Christopherson Cc: Paolo Bonzini , Like Xu , David Matlack , Aaron Lewis , Vitaly Kuznetsov , Wanpeng Li , Jinrong Liang , kvm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 11/11] KVM: selftests: Test AMD Guest PerfMonV2 Date: Mon, 14 Aug 2023 19:51:08 +0800 Message-Id: <20230814115108.45741-12-cloudliang@tencent.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230814115108.45741-1-cloudliang@tencent.com> References: <20230814115108.45741-1-cloudliang@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Jinrong Liang Add test case for AMD Guest PerfMonV2. Also test Intel MSR_CORE_PERF_GLOBAL_STATUS and MSR_CORE_PERF_GLOBAL_OVF_CTRL. Signed-off-by: Jinrong Liang --- .../kvm/x86_64/pmu_basic_functionality_test.c | 48 ++++++++++++++++++- 1 file changed, 46 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c index cb2a7ad5c504..02bd1fe3900b 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_basic_functionality_test.c @@ -58,7 +58,9 @@ static uint64_t run_vcpu(struct kvm_vcpu *vcpu, uint64_t *ucall_arg) static void guest_measure_loop(uint64_t event_code) { + uint64_t global_ovf_ctrl_msr, global_status_msr, global_ctrl_msr; uint8_t nr_gp_counters, pmu_version = 1; + uint8_t gp_counter_bit_width = 48; uint64_t event_sel_msr; uint32_t counter_msr; unsigned int i; @@ -68,6 +70,12 @@ static void guest_measure_loop(uint64_t event_code) pmu_version = this_cpu_property(X86_PROPERTY_PMU_VERSION); event_sel_msr = MSR_P6_EVNTSEL0; + if (pmu_version > 1) { + global_ovf_ctrl_msr = MSR_CORE_PERF_GLOBAL_OVF_CTRL; + global_status_msr = MSR_CORE_PERF_GLOBAL_STATUS; + global_ctrl_msr = MSR_CORE_PERF_GLOBAL_CTRL; + } + if (rdmsr(MSR_IA32_PERF_CAPABILITIES) & PMU_CAP_FW_WRITES) counter_msr = MSR_IA32_PMC0; else @@ -76,6 +84,17 @@ static void guest_measure_loop(uint64_t event_code) nr_gp_counters = AMD64_NR_COUNTERS; event_sel_msr = MSR_K7_EVNTSEL0; counter_msr = MSR_K7_PERFCTR0; + + if (this_cpu_has(X86_FEATURE_AMD_PMU_EXT_CORE) && + this_cpu_has(X86_FEATURE_AMD_PERFMON_V2)) { + nr_gp_counters = this_cpu_property(X86_PROPERTY_AMD_PMU_NR_CORE_COUNTERS); + global_ovf_ctrl_msr = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR; + global_status_msr = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS; + global_ctrl_msr = MSR_AMD64_PERF_CNTR_GLOBAL_CTL; + event_sel_msr = MSR_F15H_PERF_CTL0; + counter_msr = MSR_F15H_PERF_CTR0; + pmu_version = 2; + } } for (i = 0; i < nr_gp_counters; i++) { @@ -84,14 +103,39 @@ static void guest_measure_loop(uint64_t event_code) ARCH_PERFMON_EVENTSEL_ENABLE | event_code); if (pmu_version > 1) { - wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, BIT_ULL(i)); + wrmsr(global_ctrl_msr, BIT_ULL(i)); __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES})); - wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); + wrmsr(global_ctrl_msr, 0); GUEST_SYNC(_rdpmc(i)); } else { __asm__ __volatile__("loop ." 
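/* The "loop ." payload branches to itself, decrementing ECX each iteration, so the guest executes a small, deterministic burst of NUM_BRANCHES loop instructions for the enabled counter to observe. */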
: "+c"((int){NUM_BRANCHES})); GUEST_SYNC(_rdpmc(i)); } + + if (pmu_version > 1 && _rdpmc(i)) { + wrmsr(global_ctrl_msr, 0); + wrmsr(counter_msr + i, 0); + __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES})); + GUEST_ASSERT(!_rdpmc(i)); + + wrmsr(global_ctrl_msr, BIT_ULL(i)); + __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES})); + GUEST_ASSERT(_rdpmc(i)); + + if (host_cpu_is_intel) + gp_counter_bit_width = + this_cpu_property(X86_PROPERTY_PMU_GP_COUNTERS_BIT_WIDTH); + + wrmsr(global_ctrl_msr, 0); + wrmsr(counter_msr + i, (1ULL << gp_counter_bit_width) - 2); + wrmsr(global_ctrl_msr, BIT_ULL(i)); + __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES})); + GUEST_ASSERT(rdmsr(global_status_msr) & BIT_ULL(i)); + + wrmsr(global_ctrl_msr, 0); + wrmsr(global_ovf_ctrl_msr, BIT_ULL(i)); + GUEST_ASSERT(!(rdmsr(global_status_msr) & BIT_ULL(i))); + } } if (host_cpu_is_amd || pmu_version < 2)