From patchwork Thu Jul 20 11:47:09 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13320398
From: Jinrong Liang
To: Sean Christopherson
Cc: Paolo Bonzini, Isaku Yamahata, Jim Mattson, Shuah Khan, Aaron Lewis, David Matlack, Vishal Annapurve, Wanpeng Li, Like Xu, Jinrong Liang, linux-kselftest@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 1/6] KVM: selftests: Add x86 properties for Intel PMU in processor.h
Date: Thu, 20 Jul 2023 19:47:09 +0800
Message-Id: <20230720114714.34079-2-cloudliang@tencent.com>
In-Reply-To: <20230720114714.34079-1-cloudliang@tencent.com>
References: <20230720114714.34079-1-cloudliang@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

Add x86 properties for Intel PMU so that tests don't have to manually retrieve the correct CPUID
leaf+register, and so that the resulting code is self-documenting.

Signed-off-by: Jinrong Liang
---
 tools/testing/selftests/kvm/include/x86_64/processor.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index aa434c8f19c5..5cb7df74d0b1 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -239,7 +239,12 @@ struct kvm_x86_cpu_property {
 #define X86_PROPERTY_MAX_BASIC_LEAF		KVM_X86_CPU_PROPERTY(0, 0, EAX, 0, 31)
 #define X86_PROPERTY_PMU_VERSION		KVM_X86_CPU_PROPERTY(0xa, 0, EAX, 0, 7)
 #define X86_PROPERTY_PMU_NR_GP_COUNTERS		KVM_X86_CPU_PROPERTY(0xa, 0, EAX, 8, 15)
+#define X86_PROPERTY_PMU_GP_COUNTERS_BIT_WIDTH	KVM_X86_CPU_PROPERTY(0xa, 0, EAX, 16, 23)
 #define X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH	KVM_X86_CPU_PROPERTY(0xa, 0, EAX, 24, 31)
+#define X86_PROPERTY_PMU_EVENTS_MASK		KVM_X86_CPU_PROPERTY(0xa, 0, EBX, 0, 7)
+#define X86_PROPERTY_PMU_FIXED_COUNTERS_BITMASK	KVM_X86_CPU_PROPERTY(0xa, 0, ECX, 0, 31)
+#define X86_PROPERTY_PMU_NR_FIXED_COUNTERS	KVM_X86_CPU_PROPERTY(0xa, 0, EDX, 0, 4)
+#define X86_PROPERTY_PMU_FIXED_COUNTERS_BIT_WIDTH	KVM_X86_CPU_PROPERTY(0xa, 0, EDX, 5, 12)
 #define X86_PROPERTY_SUPPORTED_XCR0_LO		KVM_X86_CPU_PROPERTY(0xd, 0, EAX, 0, 31)
 #define X86_PROPERTY_XSTATE_MAX_SIZE_XCR0	KVM_X86_CPU_PROPERTY(0xd, 0, EBX, 0, 31)

From patchwork Thu Jul 20 11:47:10 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13320399
Subject: [PATCH v5 2/6] KVM: selftests: Drop the return of remove_event()
Date: Thu, 20 Jul 2023 19:47:10 +0800
Message-Id: <20230720114714.34079-3-cloudliang@tencent.com>
In-Reply-To: <20230720114714.34079-1-cloudliang@tencent.com>
From: Jinrong Liang

None of the callers consume the return value of remove_event(), and returning the filter incorrectly implies that the incoming filter isn't modified. Drop the return.
Signed-off-by: Jinrong Liang
---
 tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 40507ed9fe8a..5ac05e64bec9 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -265,8 +265,7 @@ static struct kvm_pmu_event_filter *event_filter(uint32_t action)
  * Remove the first occurrence of 'event' (if any) from the filter's
  * event list.
  */
-static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
-						 uint64_t event)
+static void remove_event(struct kvm_pmu_event_filter *f, uint64_t event)
 {
 	bool found = false;
 	int i;
@@ -279,7 +278,6 @@ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
 	}
 	if (found)
 		f->nevents--;
-	return f;
 }

 #define ASSERT_PMC_COUNTING_INSTRUCTIONS()

From patchwork Thu Jul 20 11:47:11 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13320400
Subject: [PATCH v5 3/6] KVM: selftests: Introduce __kvm_pmu_event_filter to improve event filter settings
Date: Thu, 20 Jul 2023 19:47:11 +0800
Message-Id: <20230720114714.34079-4-cloudliang@tencent.com>
In-Reply-To: <20230720114714.34079-1-cloudliang@tencent.com>
From: Jinrong Liang

Add a custom "__kvm_pmu_event_filter" structure to improve the PMU event filter settings. This simplifies event filter setup by organizing the filter parameters in a cleaner, more structured way.

Signed-off-by: Jinrong Liang
Reviewed-by: Isaku Yamahata
---
 .../kvm/x86_64/pmu_event_filter_test.c | 182 +++++++++---------
 1 file changed, 90 insertions(+), 92 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 5ac05e64bec9..94f5a89aac40 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -28,6 +28,10 @@

 #define NUM_BRANCHES 42

+/* Matches KVM_PMU_EVENT_FILTER_MAX_EVENTS in pmu.c */
+#define MAX_FILTER_EVENTS 300
+#define MAX_TEST_EVENTS 10
+
 /*
  * This is how the event selector and unit mask are stored in an AMD
  * core performance event-select register. Intel's format is similar,
@@ -69,21 +73,33 @@

 #define INST_RETIRED EVENT(0xc0, 0)

+struct __kvm_pmu_event_filter {
+	__u32 action;
+	__u32 nevents;
+	__u32 fixed_counter_bitmap;
+	__u32 flags;
+	__u32 pad[4];
+	__u64 events[MAX_FILTER_EVENTS];
+};
+
 /*
  * This event list comprises Intel's eight architectural events plus
  * AMD's "retired branch instructions" for Zen[123] (and possibly
  * other AMD CPUs).
  */
-static const uint64_t event_list[] = {
-	EVENT(0x3c, 0),
-	INST_RETIRED,
-	EVENT(0x3c, 1),
-	EVENT(0x2e, 0x4f),
-	EVENT(0x2e, 0x41),
-	EVENT(0xc4, 0),
-	EVENT(0xc5, 0),
-	EVENT(0xa4, 1),
-	AMD_ZEN_BR_RETIRED,
+static const struct __kvm_pmu_event_filter base_event_filter = {
+	.nevents = ARRAY_SIZE(base_event_filter.events),
+	.events = {
+		EVENT(0x3c, 0),
+		INST_RETIRED,
+		EVENT(0x3c, 1),
+		EVENT(0x2e, 0x4f),
+		EVENT(0x2e, 0x41),
+		EVENT(0xc4, 0),
+		EVENT(0xc5, 0),
+		EVENT(0xa4, 1),
+		AMD_ZEN_BR_RETIRED,
+	},
 };

 struct {
@@ -225,47 +241,11 @@ static bool sanity_check_pmu(struct kvm_vcpu *vcpu)
 	return !r;
 }

-static struct kvm_pmu_event_filter *alloc_pmu_event_filter(uint32_t nevents)
-{
-	struct kvm_pmu_event_filter *f;
-	int size = sizeof(*f) + nevents * sizeof(f->events[0]);
-
-	f = malloc(size);
-	TEST_ASSERT(f, "Out of memory");
-	memset(f, 0, size);
-	f->nevents = nevents;
-	return f;
-}
-
-
-static struct kvm_pmu_event_filter *
-create_pmu_event_filter(const uint64_t event_list[], int nevents,
-			uint32_t action, uint32_t flags)
-{
-	struct kvm_pmu_event_filter *f;
-	int i;
-
-	f = alloc_pmu_event_filter(nevents);
-	f->action = action;
-	f->flags = flags;
-	for (i = 0; i < nevents; i++)
-		f->events[i] = event_list[i];
-
-	return f;
-}
-
-static struct kvm_pmu_event_filter *event_filter(uint32_t action)
-{
-	return create_pmu_event_filter(event_list,
-				       ARRAY_SIZE(event_list),
-				       action, 0);
-}
-
 /*
  * Remove the first occurrence of 'event' (if any) from the filter's
  * event list.
  */
-static void remove_event(struct kvm_pmu_event_filter *f, uint64_t event)
+static void remove_event(struct __kvm_pmu_event_filter *f, uint64_t event)
 {
 	bool found = false;
 	int i;
@@ -313,66 +293,73 @@ static void test_without_filter(struct kvm_vcpu *vcpu)
 }

 static void test_with_filter(struct kvm_vcpu *vcpu,
-			     struct kvm_pmu_event_filter *f)
+			     struct __kvm_pmu_event_filter *__f)
 {
+	struct kvm_pmu_event_filter *f = (void *)__f;
+
 	vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, f);
 	run_vcpu_and_sync_pmc_results(vcpu);
 }

 static void test_amd_deny_list(struct kvm_vcpu *vcpu)
 {
-	uint64_t event = EVENT(0x1C2, 0);
-	struct kvm_pmu_event_filter *f;
+	struct __kvm_pmu_event_filter f = {
+		.action = KVM_PMU_EVENT_DENY,
+		.nevents = 1,
+		.events = {
+			EVENT(0x1C2, 0),
+		},
+	};

-	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0);
-	test_with_filter(vcpu, f);
-	free(f);
+	test_with_filter(vcpu, &f);

 	ASSERT_PMC_COUNTING_INSTRUCTIONS();
 }

 static void test_member_deny_list(struct kvm_vcpu *vcpu)
 {
-	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
+	struct __kvm_pmu_event_filter f = base_event_filter;

-	test_with_filter(vcpu, f);
-	free(f);
+	f.action = KVM_PMU_EVENT_DENY;
+	test_with_filter(vcpu, &f);

 	ASSERT_PMC_NOT_COUNTING_INSTRUCTIONS();
 }

 static void test_member_allow_list(struct kvm_vcpu *vcpu)
 {
-	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
+	struct __kvm_pmu_event_filter f = base_event_filter;

-	test_with_filter(vcpu, f);
-	free(f);
+	f.action = KVM_PMU_EVENT_ALLOW;
+	test_with_filter(vcpu, &f);

 	ASSERT_PMC_COUNTING_INSTRUCTIONS();
 }

 static void test_not_member_deny_list(struct kvm_vcpu *vcpu)
 {
-	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
+	struct __kvm_pmu_event_filter f = base_event_filter;

-	remove_event(f, INST_RETIRED);
-	remove_event(f, INTEL_BR_RETIRED);
-	remove_event(f, AMD_ZEN_BR_RETIRED);
-	test_with_filter(vcpu, f);
-	free(f);
+	f.action = KVM_PMU_EVENT_DENY;
+
+	remove_event(&f, INST_RETIRED);
+	remove_event(&f, INTEL_BR_RETIRED);
+	remove_event(&f, AMD_ZEN_BR_RETIRED);
+	test_with_filter(vcpu, &f);

 	ASSERT_PMC_COUNTING_INSTRUCTIONS();
 }

 static void test_not_member_allow_list(struct kvm_vcpu *vcpu)
 {
-	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
+	struct __kvm_pmu_event_filter f = base_event_filter;
+
+	f.action = KVM_PMU_EVENT_ALLOW;

-	remove_event(f, INST_RETIRED);
-	remove_event(f, INTEL_BR_RETIRED);
-	remove_event(f, AMD_ZEN_BR_RETIRED);
-	test_with_filter(vcpu, f);
-	free(f);
+	remove_event(&f, INST_RETIRED);
+	remove_event(&f, INTEL_BR_RETIRED);
+	remove_event(&f, AMD_ZEN_BR_RETIRED);
+	test_with_filter(vcpu, &f);

 	ASSERT_PMC_NOT_COUNTING_INSTRUCTIONS();
 }

@@ -567,19 +554,16 @@ static void run_masked_events_test(struct kvm_vcpu *vcpu,
 				   const uint64_t masked_events[],
 				   const int nmasked_events)
 {
-	struct kvm_pmu_event_filter *f;
+	struct __kvm_pmu_event_filter f = {
+		.nevents = nmasked_events,
+		.action = KVM_PMU_EVENT_ALLOW,
+		.flags = KVM_PMU_EVENT_FLAG_MASKED_EVENTS,
+	};

-	f = create_pmu_event_filter(masked_events, nmasked_events,
-				    KVM_PMU_EVENT_ALLOW,
-				    KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
-	test_with_filter(vcpu, f);
-	free(f);
+	memcpy(f.events, masked_events, sizeof(uint64_t) * nmasked_events);
+	test_with_filter(vcpu, &f);
 }

-/* Matches KVM_PMU_EVENT_FILTER_MAX_EVENTS in pmu.c */
-#define MAX_FILTER_EVENTS 300
-#define MAX_TEST_EVENTS 10
-
 #define ALLOW_LOADS BIT(0)
 #define ALLOW_STORES BIT(1)
 #define ALLOW_LOADS_STORES BIT(2)
@@ -751,17 +735,27 @@ static void test_masked_events(struct kvm_vcpu *vcpu)
 	run_masked_events_tests(vcpu, events, nevents);
 }

-static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events,
-			   int nevents, uint32_t flags)
+static int do_vcpu_set_pmu_event_filter(struct kvm_vcpu *vcpu,
+					struct __kvm_pmu_event_filter *__f)
 {
-	struct kvm_pmu_event_filter *f;
-	int r;
+	struct kvm_pmu_event_filter *f = (void *)__f;

-	f = create_pmu_event_filter(events, nevents, KVM_PMU_EVENT_ALLOW, flags);
-	r = __vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, f);
-	free(f);
+	return __vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, f);
+}
+
+static int set_pmu_single_event_filter(struct kvm_vcpu *vcpu, uint64_t event,
+				       uint32_t flags, uint32_t action)
+{
+	struct __kvm_pmu_event_filter f = {
+		.nevents = 1,
+		.flags = flags,
+		.action = action,
+		.events = {
+			event,
+		},
+	};

-	return r;
+	return do_vcpu_set_pmu_event_filter(vcpu, &f);
 }

 static void test_filter_ioctl(struct kvm_vcpu *vcpu)
@@ -773,14 +767,18 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
 	 * Unfortunately having invalid bits set in event data is expected to
 	 * pass when flags == 0 (bits other than eventsel+umask).
 	 */
-	r = run_filter_test(vcpu, &e, 1, 0);
+	r = set_pmu_single_event_filter(vcpu, e, 0, KVM_PMU_EVENT_ALLOW);
 	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");

-	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+	r = set_pmu_single_event_filter(vcpu, e,
+					KVM_PMU_EVENT_FLAG_MASKED_EVENTS,
+					KVM_PMU_EVENT_ALLOW);
 	TEST_ASSERT(r != 0, "Invalid PMU Event Filter is expected to fail");

 	e = KVM_PMU_ENCODE_MASKED_ENTRY(0xff, 0xff, 0xff, 0xf);
-	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+	r = set_pmu_single_event_filter(vcpu, e,
+					KVM_PMU_EVENT_FLAG_MASKED_EVENTS,
+					KVM_PMU_EVENT_ALLOW);
 	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
 }

From patchwork Thu Jul 20 11:47:12 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13320401
Subject: [PATCH v5 4/6] KVM: selftests: Add test cases for unsupported PMU event filter input values
Date: Thu, 20 Jul 2023 19:47:12 +0800
Message-Id: <20230720114714.34079-5-cloudliang@tencent.com>
In-Reply-To: <20230720114714.34079-1-cloudliang@tencent.com>
From: Jinrong Liang

Add test cases to verify the handling of unsupported input values for the PMU event filter. The tests cover unsupported "action" values, unsupported "flags" values, and unsupported "nevents" values. All these cases should return an error, as they are currently not supported by the filter. Furthermore, the tests also cover the case where setting non-existent fixed counters in the fixed bitmap does not fail.
Signed-off-by: Jinrong Liang
---
 .../kvm/x86_64/pmu_event_filter_test.c | 26 +++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 94f5a89aac40..8b8bfee11016 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -32,6 +32,10 @@
 #define MAX_FILTER_EVENTS 300
 #define MAX_TEST_EVENTS 10

+#define PMU_EVENT_FILTER_INVALID_ACTION		(KVM_PMU_EVENT_DENY + 1)
+#define PMU_EVENT_FILTER_INVALID_FLAGS		(KVM_PMU_EVENT_FLAGS_VALID_MASK << 1)
+#define PMU_EVENT_FILTER_INVALID_NEVENTS	(MAX_FILTER_EVENTS + 1)
+
 /*
  * This is how the event selector and unit mask are stored in an AMD
  * core performance event-select register. Intel's format is similar,
@@ -760,6 +764,8 @@ static int set_pmu_single_event_filter(struct kvm_vcpu *vcpu, uint64_t event,

 static void test_filter_ioctl(struct kvm_vcpu *vcpu)
 {
+	uint8_t nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
+	struct __kvm_pmu_event_filter f;
 	uint64_t e = ~0ul;
 	int r;

@@ -780,6 +786,26 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
 					KVM_PMU_EVENT_FLAG_MASKED_EVENTS,
 					KVM_PMU_EVENT_ALLOW);
 	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
+
+	f = base_event_filter;
+	f.action = PMU_EVENT_FILTER_INVALID_ACTION;
+	r = do_vcpu_set_pmu_event_filter(vcpu, &f);
+	TEST_ASSERT(r, "Set invalid action is expected to fail");
+
+	f = base_event_filter;
+	f.flags = PMU_EVENT_FILTER_INVALID_FLAGS;
+	r = do_vcpu_set_pmu_event_filter(vcpu, &f);
+	TEST_ASSERT(r, "Set invalid flags is expected to fail");
+
+	f = base_event_filter;
+	f.nevents = PMU_EVENT_FILTER_INVALID_NEVENTS;
+	r = do_vcpu_set_pmu_event_filter(vcpu, &f);
+	TEST_ASSERT(r, "Exceeding the max number of filter events should fail");
+
+	f = base_event_filter;
+	f.fixed_counter_bitmap = ~GENMASK_ULL(nr_fixed_counters, 0);
+	r = do_vcpu_set_pmu_event_filter(vcpu, &f);
+	TEST_ASSERT(!r, "Masking non-existent fixed counters should be allowed");
 }

 int main(int argc, char *argv[])

From patchwork Thu Jul 20 11:47:13 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13320402
a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1689853672; x=1690458472; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=AVjv2WAK3Cdmf9F59ofePZQ7PJUx/o9HNI9lncLrB+U=; b=L3f6ux2L4BMgKYB5Hb700F9WB1LPGUhmYPcrgbsMUwfwaIcX4ugwhad5Bm0q4guZVN lt9sOQFhS3O/XU/vmyYbwCEvYdQ5eN0aCUs7hAgZ6LEoytrnbEZ+XMs+YqTdIRfU6T6W 0uCUKP6vsX1oqU84cReVRotNsa0GhzeGWNrN0hctvViv2kInwKDPSRXbMwrQiU51fIaF PoUiNLPeQ2zEBckF617oKpwnDgatL8J13ElPwgHZlJc/Byj6pxIaO5vC+fkSg84Q0jmg sVrZRij3rWkdNBspZZUxYUrjxtZpWY5ojxMOy0zrjrmspn7LM3XHMCzIuCSkBg9LpYZA Nl/Q== X-Gm-Message-State: ABy/qLbnexZH9e0UU2KVj5hxRyXHWQmC74CFwMf3Zzjl3iuTuxehfzpj ZPP34aw8XVTPYCH/ThC8mz0= X-Google-Smtp-Source: APBJJlHVuHmI6wwuw4vyxxRHzGTtlq9FSgN8Jop9WFnmatGG0tTRgakrr+yPx1UYLsc+NjftNkCDsA== X-Received: by 2002:a05:6a21:3613:b0:135:38b5:7e4e with SMTP id yg19-20020a056a21361300b0013538b57e4emr1499934pzb.59.1689853672100; Thu, 20 Jul 2023 04:47:52 -0700 (PDT) Received: from CLOUDLIANG-MB2.tencent.com ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id u22-20020a170902a61600b001b2069072ccsm1164007plq.18.2023.07.20.04.47.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 20 Jul 2023 04:47:51 -0700 (PDT) From: Jinrong Liang X-Google-Original-From: Jinrong Liang To: Sean Christopherson Cc: Paolo Bonzini , Isaku Yamahata , Jim Mattson , Shuah Khan , Aaron Lewis , David Matlack , Vishal Annapurve , Wanpeng Li , Like Xu , Jinrong Liang , linux-kselftest@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v5 5/6] KVM: selftests: Test if event filter meets expectations on fixed counters Date: Thu, 20 Jul 2023 19:47:13 +0800 Message-Id: <20230720114714.34079-6-cloudliang@tencent.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230720114714.34079-1-cloudliang@tencent.com> References: <20230720114714.34079-1-cloudliang@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: 
From: Jinrong Liang

Add tests to verify that the PMU event_filter works as expected when it is
applied to fixed performance counters, including the case where no fixed
counters exist at all (e.g. an Intel guest with PMU version 1, or an AMD
guest).

Signed-off-by: Jinrong Liang
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 80 +++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 8b8bfee11016..a9d44ec210c4 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -27,6 +27,7 @@
 #define ARCH_PERFMON_BRANCHES_RETIRED	5
 
 #define NUM_BRANCHES 42
+#define INTEL_PMC_IDX_FIXED		32
 
 /* Matches KVM_PMU_EVENT_FILTER_MAX_EVENTS in pmu.c */
 #define MAX_FILTER_EVENTS	300
@@ -808,6 +809,84 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
 	TEST_ASSERT(!r, "Masking non-existent fixed counters should be allowed");
 }
 
+static void intel_run_fixed_counter_guest_code(uint8_t fixed_ctr_idx)
+{
+	for (;;) {
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+		wrmsr(MSR_CORE_PERF_FIXED_CTR0 + fixed_ctr_idx, 0);
+
+		/* Only OS_EN bit is enabled for fixed counter[idx]. */
+		wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * fixed_ctr_idx));
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL,
+		      BIT_ULL(INTEL_PMC_IDX_FIXED + fixed_ctr_idx));
+		__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+		GUEST_SYNC(rdmsr(MSR_CORE_PERF_FIXED_CTR0 + fixed_ctr_idx));
+	}
+}
+
+static uint64_t test_with_fixed_counter_filter(struct kvm_vcpu *vcpu,
+					       uint32_t action, uint32_t bitmap)
+{
+	struct __kvm_pmu_event_filter f = {
+		.action = action,
+		.fixed_counter_bitmap = bitmap,
+	};
+
+	do_vcpu_set_pmu_event_filter(vcpu, &f);
+
+	return run_vcpu_to_sync(vcpu);
+}
+
+static void __test_fixed_counter_bitmap(struct kvm_vcpu *vcpu, uint8_t idx,
+					uint8_t nr_fixed_counters)
+{
+	unsigned int i;
+	uint32_t bitmap;
+	uint64_t count;
+
+	TEST_ASSERT(nr_fixed_counters < sizeof(bitmap) * 8,
+		    "Invalid nr_fixed_counters");
+
+	/*
+	 * Check the fixed performance counter can count normally when KVM
+	 * userspace doesn't set any pmu filter.
+	 */
+	count = run_vcpu_to_sync(vcpu);
+	TEST_ASSERT(count, "Unexpected count value: %ld\n", count);
+
+	for (i = 0; i < BIT(nr_fixed_counters); i++) {
+		bitmap = BIT(i);
+		count = test_with_fixed_counter_filter(vcpu, KVM_PMU_EVENT_ALLOW,
+						       bitmap);
+		ASSERT_EQ(!!count, !!(bitmap & BIT(idx)));
+
+		count = test_with_fixed_counter_filter(vcpu, KVM_PMU_EVENT_DENY,
+						       bitmap);
+		ASSERT_EQ(!!count, !(bitmap & BIT(idx)));
+	}
+}
+
+static void test_fixed_counter_bitmap(void)
+{
+	uint8_t nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	uint8_t idx;
+
+	/*
+	 * Check that pmu_event_filter works as expected when it's applied to
+	 * fixed performance counters.
+	 */
+	for (idx = 0; idx < nr_fixed_counters; idx++) {
+		vm = vm_create_with_one_vcpu(&vcpu,
+					     intel_run_fixed_counter_guest_code);
+		vcpu_args_set(vcpu, 1, idx);
+		__test_fixed_counter_bitmap(vcpu, idx, nr_fixed_counters);
+		kvm_vm_free(vm);
+	}
+}
+
 int main(int argc, char *argv[])
 {
 	void (*guest_code)(void);
@@ -851,6 +930,7 @@ int main(int argc, char *argv[])
 	kvm_vm_free(vm);
 
 	test_pmu_config_disable(guest_code);
+	test_fixed_counter_bitmap();
 
 	return 0;
 }

From patchwork Thu Jul 20 11:47:14 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13320403
From: Jinrong Liang
To: Sean Christopherson
Cc: Paolo Bonzini, Isaku Yamahata, Jim Mattson, Shuah Khan, Aaron Lewis,
    David Matlack, Vishal Annapurve, Wanpeng Li, Like Xu, Jinrong Liang,
    linux-kselftest@vger.kernel.org, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v5 6/6] KVM: selftests: Test gp event filters don't affect fixed event filters
Date: Thu, 20 Jul 2023 19:47:14 +0800
Message-Id: <20230720114714.34079-7-cloudliang@tencent.com>
In-Reply-To: <20230720114714.34079-1-cloudliang@tencent.com>
References: <20230720114714.34079-1-cloudliang@tencent.com>

From: Jinrong Liang

Add a test to ensure that setting both generic and fixed performance
event filters does not affect the consistency of the fixed event filter
behavior in KVM.

Signed-off-by: Jinrong Liang
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 27 +++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index a9d44ec210c4..08c7ccd81be2 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -838,6 +838,19 @@ static uint64_t test_with_fixed_counter_filter(struct kvm_vcpu *vcpu,
 	return run_vcpu_to_sync(vcpu);
 }
 
+static uint64_t test_set_gp_and_fixed_event_filter(struct kvm_vcpu *vcpu,
+						   uint32_t action,
+						   uint32_t bitmap)
+{
+	struct __kvm_pmu_event_filter f = base_event_filter;
+
+	f.action = action;
+	f.fixed_counter_bitmap = bitmap;
+	do_vcpu_set_pmu_event_filter(vcpu, &f);
+
+	return run_vcpu_to_sync(vcpu);
+}
+
 static void __test_fixed_counter_bitmap(struct kvm_vcpu *vcpu, uint8_t idx,
 					uint8_t nr_fixed_counters)
 {
@@ -864,6 +877,20 @@ static void __test_fixed_counter_bitmap(struct kvm_vcpu *vcpu, uint8_t idx,
 		count = test_with_fixed_counter_filter(vcpu, KVM_PMU_EVENT_DENY,
 						       bitmap);
 		ASSERT_EQ(!!count, !(bitmap & BIT(idx)));
+
+		/*
+		 * Check that fixed_counter_bitmap has higher priority than
+		 * events[] when both are set.
+		 */
+		count = test_set_gp_and_fixed_event_filter(vcpu,
+							   KVM_PMU_EVENT_ALLOW,
+							   bitmap);
+		ASSERT_EQ(!!count, !!(bitmap & BIT(idx)));
+
+		count = test_set_gp_and_fixed_event_filter(vcpu,
+							   KVM_PMU_EVENT_DENY,
+							   bitmap);
+		ASSERT_EQ(!!count, !(bitmap & BIT(idx)));
 	}
 }