From patchwork Mon Jul 17 06:23:38 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13315223
From: Jinrong Liang
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, Shuah Khan, Aaron Lewis, David Matlack,
    Vishal Annapurve, Wanpeng Li, Like Xu, Jinrong Liang,
    linux-kselftest@vger.kernel.org, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v4 1/6] KVM: selftests: Add macros for fixed counters in processor.h
Date: Mon, 17 Jul 2023 14:23:38 +0800
Message-Id: <20230717062343.3743-2-cloudliang@tencent.com>
In-Reply-To: <20230717062343.3743-1-cloudliang@tencent.com>
References: <20230717062343.3743-1-cloudliang@tencent.com>
List-ID: X-Mailing-List: kvm@vger.kernel.org

From: Jinrong Liang

Add x86 properties for the number of PMU fixed counters and the bitmask
that allows for "discontiguous" fixed counters so that
tests don't have to manually retrieve the correct CPUID leaf+register,
and so that the resulting code is self-documenting.

Signed-off-by: Jinrong Liang
---
 tools/testing/selftests/kvm/include/x86_64/processor.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index aa434c8f19c5..15331abf063b 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -240,6 +240,8 @@ struct kvm_x86_cpu_property {
 #define X86_PROPERTY_PMU_VERSION		KVM_X86_CPU_PROPERTY(0xa, 0, EAX, 0, 7)
 #define X86_PROPERTY_PMU_NR_GP_COUNTERS		KVM_X86_CPU_PROPERTY(0xa, 0, EAX, 8, 15)
 #define X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH	KVM_X86_CPU_PROPERTY(0xa, 0, EAX, 24, 31)
+#define X86_PROPERTY_PMU_FIXED_COUNTERS_BITMASK	KVM_X86_CPU_PROPERTY(0xa, 0, ECX, 0, 31)
+#define X86_PROPERTY_PMU_NR_FIXED_COUNTERS	KVM_X86_CPU_PROPERTY(0xa, 0, EDX, 0, 4)
 #define X86_PROPERTY_SUPPORTED_XCR0_LO		KVM_X86_CPU_PROPERTY(0xd, 0, EAX, 0, 31)
 #define X86_PROPERTY_XSTATE_MAX_SIZE_XCR0	KVM_X86_CPU_PROPERTY(0xd, 0, EBX, 0, 31)

From patchwork Mon Jul 17 06:23:39 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13315224
From: Jinrong Liang
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, Shuah Khan, Aaron Lewis, David Matlack,
    Vishal Annapurve, Wanpeng Li, Like Xu, Jinrong Liang,
    linux-kselftest@vger.kernel.org, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v4 2/6] KVM: selftests: Drop the return of remove_event()
Date: Mon, 17 Jul 2023 14:23:39 +0800
Message-Id: <20230717062343.3743-3-cloudliang@tencent.com>
In-Reply-To: <20230717062343.3743-1-cloudliang@tencent.com>
References: <20230717062343.3743-1-cloudliang@tencent.com>

From: Jinrong Liang

None of the callers consume the return value of remove_event(), and
returning the filter incorrectly implies that the incoming filter isn't
modified in place. Drop the return.

Signed-off-by: Jinrong Liang
---
 tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 40507ed9fe8a..5ac05e64bec9 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -265,8 +265,7 @@ static struct kvm_pmu_event_filter *event_filter(uint32_t action)
  * Remove the first occurrence of 'event' (if any) from the filter's
  * event list.
 */
-static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
-						 uint64_t event)
+static void remove_event(struct kvm_pmu_event_filter *f, uint64_t event)
 {
 	bool found = false;
 	int i;
@@ -279,7 +278,6 @@ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
 	}
 	if (found)
 		f->nevents--;
-	return f;
 }

 #define ASSERT_PMC_COUNTING_INSTRUCTIONS() \

From patchwork Mon Jul 17 06:23:40 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13315225
From: Jinrong Liang
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, Shuah Khan, Aaron Lewis, David Matlack,
    Vishal Annapurve, Wanpeng Li, Like Xu, Jinrong Liang,
    linux-kselftest@vger.kernel.org, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v4 3/6] KVM: selftests: Introduce __kvm_pmu_event_filter to improved event filter settings
Date: Mon, 17 Jul 2023 14:23:40 +0800
Message-Id: <20230717062343.3743-4-cloudliang@tencent.com>
In-Reply-To: <20230717062343.3743-1-cloudliang@tencent.com>
References: <20230717062343.3743-1-cloudliang@tencent.com>

From: Jinrong Liang

Add a custom "__kvm_pmu_event_filter" structure to improve the PMU event
filter settings. Using a statically sized structure simplifies event
filter setup by organizing the filter parameters in a cleaner, more
organized way.

Signed-off-by: Jinrong Liang
Reviewed-by: Isaku Yamahata
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 179 +++++++++---------
 1 file changed, 87 insertions(+), 92 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 5ac05e64bec9..ffcbbf25b29b 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -28,6 +28,10 @@

 #define NUM_BRANCHES 42

+/* Matches KVM_PMU_EVENT_FILTER_MAX_EVENTS in pmu.c */
+#define MAX_FILTER_EVENTS 300
+#define MAX_TEST_EVENTS 10
+
 /*
  * This is how the event selector and unit mask are stored in an AMD
  * core performance event-select register. Intel's format is similar,
@@ -69,21 +73,33 @@

 #define INST_RETIRED EVENT(0xc0, 0)

+struct __kvm_pmu_event_filter {
+	__u32 action;
+	__u32 nevents;
+	__u32 fixed_counter_bitmap;
+	__u32 flags;
+	__u32 pad[4];
+	__u64 events[MAX_FILTER_EVENTS];
+};
+
 /*
  * This event list comprises Intel's eight architectural events plus
  * AMD's "retired branch instructions" for Zen[123] (and possibly
  * other AMD CPUs).
 */
-static const uint64_t event_list[] = {
-	EVENT(0x3c, 0),
-	INST_RETIRED,
-	EVENT(0x3c, 1),
-	EVENT(0x2e, 0x4f),
-	EVENT(0x2e, 0x41),
-	EVENT(0xc4, 0),
-	EVENT(0xc5, 0),
-	EVENT(0xa4, 1),
-	AMD_ZEN_BR_RETIRED,
+static const struct __kvm_pmu_event_filter base_event_filter = {
+	.nevents = ARRAY_SIZE(base_event_filter.events),
+	.events = {
+		EVENT(0x3c, 0),
+		INST_RETIRED,
+		EVENT(0x3c, 1),
+		EVENT(0x2e, 0x4f),
+		EVENT(0x2e, 0x41),
+		EVENT(0xc4, 0),
+		EVENT(0xc5, 0),
+		EVENT(0xa4, 1),
+		AMD_ZEN_BR_RETIRED,
+	},
 };

 struct {
@@ -225,47 +241,11 @@ static bool sanity_check_pmu(struct kvm_vcpu *vcpu)
 	return !r;
 }

-static struct kvm_pmu_event_filter *alloc_pmu_event_filter(uint32_t nevents)
-{
-	struct kvm_pmu_event_filter *f;
-	int size = sizeof(*f) + nevents * sizeof(f->events[0]);
-
-	f = malloc(size);
-	TEST_ASSERT(f, "Out of memory");
-	memset(f, 0, size);
-	f->nevents = nevents;
-	return f;
-}
-
-
-static struct kvm_pmu_event_filter *
-create_pmu_event_filter(const uint64_t event_list[], int nevents,
-			uint32_t action, uint32_t flags)
-{
-	struct kvm_pmu_event_filter *f;
-	int i;
-
-	f = alloc_pmu_event_filter(nevents);
-	f->action = action;
-	f->flags = flags;
-	for (i = 0; i < nevents; i++)
-		f->events[i] = event_list[i];
-
-	return f;
-}
-
-static struct kvm_pmu_event_filter *event_filter(uint32_t action)
-{
-	return create_pmu_event_filter(event_list,
-				       ARRAY_SIZE(event_list),
-				       action, 0);
-}
-
 /*
  * Remove the first occurrence of 'event' (if any) from the filter's
  * event list.
 */
-static void remove_event(struct kvm_pmu_event_filter *f, uint64_t event)
+static void remove_event(struct __kvm_pmu_event_filter *f, uint64_t event)
 {
 	bool found = false;
 	int i;
@@ -313,66 +293,70 @@ static void test_without_filter(struct kvm_vcpu *vcpu)
 }

 static void test_with_filter(struct kvm_vcpu *vcpu,
-			     struct kvm_pmu_event_filter *f)
+			     struct __kvm_pmu_event_filter *__f)
 {
+	struct kvm_pmu_event_filter *f = (void *)__f;
+
 	vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, f);
 	run_vcpu_and_sync_pmc_results(vcpu);
 }

 static void test_amd_deny_list(struct kvm_vcpu *vcpu)
 {
-	uint64_t event = EVENT(0x1C2, 0);
-	struct kvm_pmu_event_filter *f;
+	struct __kvm_pmu_event_filter f = base_event_filter;

-	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0);
-	test_with_filter(vcpu, f);
-	free(f);
+	f.action = KVM_PMU_EVENT_DENY;
+	f.nevents = 1;
+	f.events[0] = EVENT(0x1C2, 0);
+	test_with_filter(vcpu, &f);

 	ASSERT_PMC_COUNTING_INSTRUCTIONS();
 }

 static void test_member_deny_list(struct kvm_vcpu *vcpu)
 {
-	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
+	struct __kvm_pmu_event_filter f = base_event_filter;

-	test_with_filter(vcpu, f);
-	free(f);
+	f.action = KVM_PMU_EVENT_DENY;
+	test_with_filter(vcpu, &f);

 	ASSERT_PMC_NOT_COUNTING_INSTRUCTIONS();
 }

 static void test_member_allow_list(struct kvm_vcpu *vcpu)
 {
-	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
+	struct __kvm_pmu_event_filter f = base_event_filter;

-	test_with_filter(vcpu, f);
-	free(f);
+	f.action = KVM_PMU_EVENT_ALLOW;
+	test_with_filter(vcpu, &f);

 	ASSERT_PMC_COUNTING_INSTRUCTIONS();
 }

 static void test_not_member_deny_list(struct kvm_vcpu *vcpu)
 {
-	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
+	struct __kvm_pmu_event_filter f = base_event_filter;
+
+	f.action = KVM_PMU_EVENT_DENY;

-	remove_event(f, INST_RETIRED);
-	remove_event(f, INTEL_BR_RETIRED);
-	remove_event(f, AMD_ZEN_BR_RETIRED);
-	test_with_filter(vcpu, f);
-	free(f);
+	remove_event(&f, INST_RETIRED);
+	remove_event(&f, INTEL_BR_RETIRED);
+	remove_event(&f, AMD_ZEN_BR_RETIRED);
+	test_with_filter(vcpu, &f);

 	ASSERT_PMC_COUNTING_INSTRUCTIONS();
 }

 static void test_not_member_allow_list(struct kvm_vcpu *vcpu)
 {
-	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
+	struct __kvm_pmu_event_filter f = base_event_filter;

-	remove_event(f, INST_RETIRED);
-	remove_event(f, INTEL_BR_RETIRED);
-	remove_event(f, AMD_ZEN_BR_RETIRED);
-	test_with_filter(vcpu, f);
-	free(f);
+	f.action = KVM_PMU_EVENT_ALLOW;
+
+	remove_event(&f, INST_RETIRED);
+	remove_event(&f, INTEL_BR_RETIRED);
+	remove_event(&f, AMD_ZEN_BR_RETIRED);
+	test_with_filter(vcpu, &f);

 	ASSERT_PMC_NOT_COUNTING_INSTRUCTIONS();
 }
@@ -567,19 +551,16 @@ static void run_masked_events_test(struct kvm_vcpu *vcpu,
 				   const uint64_t masked_events[],
 				   const int nmasked_events)
 {
-	struct kvm_pmu_event_filter *f;
+	struct __kvm_pmu_event_filter f = {
+		.nevents = nmasked_events,
+		.action = KVM_PMU_EVENT_ALLOW,
+		.flags = KVM_PMU_EVENT_FLAG_MASKED_EVENTS,
+	};

-	f = create_pmu_event_filter(masked_events, nmasked_events,
-				    KVM_PMU_EVENT_ALLOW,
-				    KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
-	test_with_filter(vcpu, f);
-	free(f);
+	memcpy(f.events, masked_events, sizeof(uint64_t) * nmasked_events);
+	test_with_filter(vcpu, &f);
 }

-/* Matches KVM_PMU_EVENT_FILTER_MAX_EVENTS in pmu.c */
-#define MAX_FILTER_EVENTS 300
-#define MAX_TEST_EVENTS 10
-
 #define ALLOW_LOADS BIT(0)
 #define ALLOW_STORES BIT(1)
 #define ALLOW_LOADS_STORES BIT(2)
@@ -751,17 +732,27 @@ static void test_masked_events(struct kvm_vcpu *vcpu)
 	run_masked_events_tests(vcpu, events, nevents);
 }

-static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events,
-			   int nevents, uint32_t flags)
+static int do_vcpu_set_pmu_event_filter(struct kvm_vcpu *vcpu,
+					struct __kvm_pmu_event_filter *__f)
 {
-	struct kvm_pmu_event_filter *f;
-	int r;
+	struct kvm_pmu_event_filter *f = (void *)__f;

-	f = create_pmu_event_filter(events,
-				    nevents, KVM_PMU_EVENT_ALLOW, flags);
-	r = __vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, f);
-	free(f);
+	return __vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, f);
+}
+
+static int set_pmu_single_event_filter(struct kvm_vcpu *vcpu, uint64_t event,
+				       uint32_t flags, uint32_t action)
+{
+	struct __kvm_pmu_event_filter f = {
+		.nevents = 1,
+		.flags = flags,
+		.action = action,
+		.events = {
+			event,
+		},
+	};

-	return r;
+	return do_vcpu_set_pmu_event_filter(vcpu, &f);
 }

 static void test_filter_ioctl(struct kvm_vcpu *vcpu)
@@ -773,14 +764,18 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
	 * Unfortunately having invalid bits set in event data is expected to
	 * pass when flags == 0 (bits other than eventsel+umask).
	 */
-	r = run_filter_test(vcpu, &e, 1, 0);
+	r = set_pmu_single_event_filter(vcpu, e, 0, KVM_PMU_EVENT_ALLOW);
 	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");

-	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+	r = set_pmu_single_event_filter(vcpu, e,
+					KVM_PMU_EVENT_FLAG_MASKED_EVENTS,
+					KVM_PMU_EVENT_ALLOW);
 	TEST_ASSERT(r != 0, "Invalid PMU Event Filter is expected to fail");

 	e = KVM_PMU_ENCODE_MASKED_ENTRY(0xff, 0xff, 0xff, 0xf);
-	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+	r = set_pmu_single_event_filter(vcpu, e,
+					KVM_PMU_EVENT_FLAG_MASKED_EVENTS,
+					KVM_PMU_EVENT_ALLOW);
 	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
 }

From patchwork Mon Jul 17 06:23:41 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13315226
From: Jinrong Liang
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, Shuah Khan, Aaron Lewis, David Matlack,
    Vishal Annapurve, Wanpeng Li, Like Xu, Jinrong Liang,
    linux-kselftest@vger.kernel.org, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v4 4/6] KVM: selftests: Add test cases for unsupported PMU event filter input values
Date: Mon, 17 Jul 2023 14:23:41 +0800
Message-Id: <20230717062343.3743-5-cloudliang@tencent.com>
In-Reply-To: <20230717062343.3743-1-cloudliang@tencent.com>
References: <20230717062343.3743-1-cloudliang@tencent.com>

From: Jinrong Liang

Add test cases to verify the handling of unsupported input values for
the PMU event filter. The tests cover unsupported "action" values,
unsupported "flags" values, and unsupported "nevents" values. All these
cases should return an error, as they are currently not supported by
the filter. Furthermore, the tests also cover the scenario where
setting non-existent fixed counters in the fixed bitmap does not fail.
Signed-off-by: Jinrong Liang
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 26 +++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index ffcbbf25b29b..63f85f583ef8 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -32,6 +32,10 @@
 #define MAX_FILTER_EVENTS 300
 #define MAX_TEST_EVENTS 10

+#define PMU_EVENT_FILTER_INVALID_ACTION		(KVM_PMU_EVENT_DENY + 1)
+#define PMU_EVENT_FILTER_INVALID_FLAGS		(KVM_PMU_EVENT_FLAG_MASKED_EVENTS + 1)
+#define PMU_EVENT_FILTER_INVALID_NEVENTS	(MAX_FILTER_EVENTS + 1)
+
 /*
  * This is how the event selector and unit mask are stored in an AMD
  * core performance event-select register. Intel's format is similar,
@@ -757,6 +761,8 @@ static int set_pmu_single_event_filter(struct kvm_vcpu *vcpu, uint64_t event,

 static void test_filter_ioctl(struct kvm_vcpu *vcpu)
 {
+	uint8_t nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
+	struct __kvm_pmu_event_filter f;
 	uint64_t e = ~0ul;
 	int r;
@@ -777,6 +783,26 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
 					KVM_PMU_EVENT_FLAG_MASKED_EVENTS,
 					KVM_PMU_EVENT_ALLOW);
 	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
+
+	f = base_event_filter;
+	f.action = PMU_EVENT_FILTER_INVALID_ACTION;
+	r = do_vcpu_set_pmu_event_filter(vcpu, &f);
+	TEST_ASSERT(r, "Set invalid action is expected to fail");
+
+	f = base_event_filter;
+	f.flags = PMU_EVENT_FILTER_INVALID_FLAGS;
+	r = do_vcpu_set_pmu_event_filter(vcpu, &f);
+	TEST_ASSERT(r, "Set invalid flags is expected to fail");
+
+	f = base_event_filter;
+	f.nevents = PMU_EVENT_FILTER_INVALID_NEVENTS;
+	r = do_vcpu_set_pmu_event_filter(vcpu, &f);
+	TEST_ASSERT(r, "Exceeding the max number of filter events should fail");
+
+	f = base_event_filter;
+	f.fixed_counter_bitmap = ~GENMASK_ULL(nr_fixed_counters, 0);
+	r = do_vcpu_set_pmu_event_filter(vcpu, &f);
+	TEST_ASSERT(!r, "Masking non-existent fixed counters should be allowed");
 }

 int main(int argc, char *argv[])

From patchwork Mon Jul 17 06:23:42 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13315227
From: Jinrong Liang
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, Shuah Khan, Aaron Lewis, David Matlack,
    Vishal Annapurve, Wanpeng Li, Like Xu, Jinrong Liang,
    linux-kselftest@vger.kernel.org, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v4 5/6] KVM: selftests: Test if event filter meets expectations on fixed counters
Date: Mon, 17 Jul 2023 14:23:42 +0800
Message-Id: <20230717062343.3743-6-cloudliang@tencent.com>
In-Reply-To: <20230717062343.3743-1-cloudliang@tencent.com>
References: <20230717062343.3743-1-cloudliang@tencent.com>
From: Jinrong Liang

Add tests to cover that pmu event_filter works as expected when it's
applied to fixed performance counters, even if no fixed counter exists
(e.g. Intel guest pmu version=1 or AMD guest).

Signed-off-by: Jinrong Liang
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 80 +++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 63f85f583ef8..1872b848f734 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -27,6 +27,7 @@
 #define ARCH_PERFMON_BRANCHES_RETIRED	5
 
 #define NUM_BRANCHES 42
+#define INTEL_PMC_IDX_FIXED 32
 
 /* Matches KVM_PMU_EVENT_FILTER_MAX_EVENTS in pmu.c */
 #define MAX_FILTER_EVENTS	300
@@ -805,6 +806,84 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
 	TEST_ASSERT(!r, "Masking non-existent fixed counters should be allowed");
 }
 
+static void intel_run_fixed_counter_guest_code(uint8_t fixed_ctr_idx)
+{
+	for (;;) {
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+		wrmsr(MSR_CORE_PERF_FIXED_CTR0 + fixed_ctr_idx, 0);
+
+		/* Only OS_EN bit is enabled for fixed counter[idx]. */
+		wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * fixed_ctr_idx));
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL,
+		      BIT_ULL(INTEL_PMC_IDX_FIXED + fixed_ctr_idx));
+		__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+		GUEST_SYNC(rdmsr(MSR_CORE_PERF_FIXED_CTR0 + fixed_ctr_idx));
+	}
+}
+
+static uint64_t test_with_fixed_counter_filter(struct kvm_vcpu *vcpu,
+					       uint32_t action, uint32_t bitmap)
+{
+	struct __kvm_pmu_event_filter f = {
+		.action = action,
+		.fixed_counter_bitmap = bitmap,
+	};
+
+	do_vcpu_set_pmu_event_filter(vcpu, &f);
+
+	return run_vcpu_to_sync(vcpu);
+}
+
+static void __test_fixed_counter_bitmap(struct kvm_vcpu *vcpu, uint8_t idx,
+					uint8_t nr_fixed_counters)
+{
+	unsigned int i;
+	uint32_t bitmap;
+	uint64_t count;
+
+	TEST_ASSERT(nr_fixed_counters < sizeof(bitmap),
+		    "Invalid nr_fixed_counters");
+
+	/*
+	 * Check that the fixed performance counter can count normally when
+	 * KVM userspace doesn't set any pmu filter.
+	 */
+	count = run_vcpu_to_sync(vcpu);
+	TEST_ASSERT(count, "Unexpected count value: %ld\n", count);
+
+	for (i = 0; i < BIT(nr_fixed_counters); i++) {
+		bitmap = BIT(i);
+		count = test_with_fixed_counter_filter(vcpu, KVM_PMU_EVENT_ALLOW,
+						       bitmap);
+		ASSERT_EQ(!!count, !!(bitmap & BIT(idx)));
+
+		count = test_with_fixed_counter_filter(vcpu, KVM_PMU_EVENT_DENY,
+						       bitmap);
+		ASSERT_EQ(!!count, !(bitmap & BIT(idx)));
+	}
+}
+
+static void test_fixed_counter_bitmap(void)
+{
+	uint8_t nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	uint8_t idx;
+
+	/*
+	 * Check that pmu_event_filter works as expected when it's applied to
+	 * fixed performance counters.
+	 */
+	for (idx = 0; idx < nr_fixed_counters; idx++) {
+		vm = vm_create_with_one_vcpu(&vcpu,
+					     intel_run_fixed_counter_guest_code);
+		vcpu_args_set(vcpu, 1, idx);
+		__test_fixed_counter_bitmap(vcpu, idx, nr_fixed_counters);
+		kvm_vm_free(vm);
+	}
+}
+
 int main(int argc, char *argv[])
 {
 	void (*guest_code)(void);
@@ -848,6 +927,7 @@ int main(int argc, char *argv[])
 	kvm_vm_free(vm);
 
 	test_pmu_config_disable(guest_code);
+	test_fixed_counter_bitmap();
 
 	return 0;
 }

From patchwork Mon Jul 17 06:23:43 2023
From: Jinrong Liang
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, Shuah Khan, Aaron Lewis, David Matlack,
    Vishal Annapurve, Wanpeng Li, Like Xu, Jinrong Liang,
    linux-kselftest@vger.kernel.org, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v4 6/6] KVM: selftests: Test gp event filters don't affect
 fixed event filters
Date: Mon, 17 Jul 2023 14:23:43 +0800
Message-Id: <20230717062343.3743-7-cloudliang@tencent.com>
In-Reply-To: <20230717062343.3743-1-cloudliang@tencent.com>
References: <20230717062343.3743-1-cloudliang@tencent.com>

From: Jinrong Liang

Add a test to ensure that setting both generic and fixed performance
event filters does not affect the consistency of the fixed event filter
behavior in KVM.

Signed-off-by: Jinrong Liang
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 27 +++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 1872b848f734..b2e432542a8c 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -835,6 +835,19 @@ static uint64_t test_with_fixed_counter_filter(struct kvm_vcpu *vcpu,
 	return run_vcpu_to_sync(vcpu);
 }
 
+static uint64_t test_set_gp_and_fixed_event_filter(struct kvm_vcpu *vcpu,
+						   uint32_t action,
+						   uint32_t bitmap)
+{
+	struct __kvm_pmu_event_filter f = base_event_filter;
+
+	f.action = action;
+	f.fixed_counter_bitmap = bitmap;
+	do_vcpu_set_pmu_event_filter(vcpu, &f);
+
+	return run_vcpu_to_sync(vcpu);
+}
+
 static void __test_fixed_counter_bitmap(struct kvm_vcpu *vcpu, uint8_t idx,
 					uint8_t nr_fixed_counters)
 {
@@ -861,6 +874,20 @@ static void __test_fixed_counter_bitmap(struct kvm_vcpu *vcpu, uint8_t idx,
 		count = test_with_fixed_counter_filter(vcpu, KVM_PMU_EVENT_DENY,
 						       bitmap);
 		ASSERT_EQ(!!count, !(bitmap & BIT(idx)));
+
+		/*
+		 * Check that fixed_counter_bitmap has higher priority than
+		 * events[] when both are set.
+		 */
+		count = test_set_gp_and_fixed_event_filter(vcpu,
+							   KVM_PMU_EVENT_ALLOW,
+							   bitmap);
+		ASSERT_EQ(!!count, !!(bitmap & BIT(idx)));
+
+		count = test_set_gp_and_fixed_event_filter(vcpu,
+							   KVM_PMU_EVENT_DENY,
+							   bitmap);
+		ASSERT_EQ(!!count, !(bitmap & BIT(idx)));
 	}
 }