From patchwork Fri Apr 14 11:00:50 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13211266
From: Jinrong Liang
To: Sean Christopherson
Cc: Like Xu, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Aaron Lewis,
 David Matlack, Vishal Annapurve, Wanpeng Li, Jinrong Liang,
 linux-kselftest@vger.kernel.org, linux-doc@vger.kernel.org,
 kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/7] KVM: selftests: Replace int with uint32_t for nevents
Date: Fri, 14 Apr 2023 19:00:50 +0800
Message-Id: <20230414110056.19665-2-cloudliang@tencent.com>
In-Reply-To: <20230414110056.19665-1-cloudliang@tencent.com>
References: <20230414110056.19665-1-cloudliang@tencent.com>
X-Mailer: git-send-email 2.39.2
X-Mailing-List: linux-kselftest@vger.kernel.org

From: Jinrong Liang

The nevents field in kvm_pmu_event_filter is defined as __u32 and can
therefore only hold non-negative values within a specific range.
Replace int with uint32_t for nevents in the selftest so the local type
matches the uAPI field, improving consistency and readability. This
change has been tested and verified to work correctly with all relevant
code.

Signed-off-by: Jinrong Liang
---
 .../selftests/kvm/x86_64/pmu_event_filter_test.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 1f60dfae69e0..c0521fc9e8f6 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -194,7 +194,7 @@ static struct kvm_pmu_event_filter *alloc_pmu_event_filter(uint32_t nevents)
 
 static struct kvm_pmu_event_filter *
-create_pmu_event_filter(const uint64_t event_list[], int nevents,
+create_pmu_event_filter(const uint64_t event_list[], uint32_t nevents,
 			uint32_t action, uint32_t flags)
 {
 	struct kvm_pmu_event_filter *f;
@@ -648,7 +648,7 @@ const struct masked_events_test test_cases[] = {
 };
 
 static int append_test_events(const struct masked_events_test *test,
-			      uint64_t *events, int nevents)
+			      uint64_t *events, uint32_t nevents)
 {
 	const uint64_t *evts;
 	int i;
@@ -670,7 +670,7 @@ static bool bool_eq(bool a, bool b)
 }
 
 static void run_masked_events_tests(struct kvm_vcpu *vcpu, uint64_t *events,
-				    int nevents)
+				    uint32_t nevents)
 {
 	int ntests = ARRAY_SIZE(test_cases);
 	struct perf_counter c;
@@ -695,7 +695,7 @@ static void run_masked_events_tests(struct kvm_vcpu *vcpu, uint64_t *events,
 	}
 }
 
-static void add_dummy_events(uint64_t *events, int nevents)
+static void add_dummy_events(uint64_t *events, uint32_t nevents)
 {
 	int i;
 
@@ -714,7 +714,7 @@ static void add_dummy_events(uint64_t *events, int nevents)
 
 static void test_masked_events(struct kvm_vcpu *vcpu)
 {
-	int nevents = MAX_FILTER_EVENTS - MAX_TEST_EVENTS;
+	uint32_t nevents = MAX_FILTER_EVENTS - MAX_TEST_EVENTS;
 	uint64_t events[MAX_FILTER_EVENTS];
 
 	/* Run the test cases against a sparse PMU event filter. */
@@ -726,7 +726,7 @@ static void test_masked_events(struct kvm_vcpu *vcpu)
 }
 
 static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events,
-			   int nevents, uint32_t flags)
+			   uint32_t nevents, uint32_t flags)
 {
 	struct kvm_pmu_event_filter *f;
 	int r;

From patchwork Fri Apr 14 11:00:51 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13211267
From: Jinrong Liang
To: Sean Christopherson
Cc: Like Xu, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Aaron Lewis,
 David Matlack, Vishal Annapurve, Wanpeng Li, Jinrong Liang,
 linux-kselftest@vger.kernel.org, linux-doc@vger.kernel.org,
 kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/7] KVM: selftests: Apply create_pmu_event_filter() to fixed ctrs
Date: Fri, 14 Apr 2023 19:00:51 +0800
Message-Id: <20230414110056.19665-3-cloudliang@tencent.com>
In-Reply-To: <20230414110056.19665-1-cloudliang@tencent.com>
References: <20230414110056.19665-1-cloudliang@tencent.com>
X-Mailer: git-send-email 2.39.2
X-Mailing-List: linux-kselftest@vger.kernel.org

From: Jinrong Liang

Add a fixed_counter_bitmap parameter to create_pmu_event_filter() so
that the same helper can also be used to control the guest's fixed
counters. No functional change intended.

Signed-off-by: Jinrong Liang
---
 .../kvm/x86_64/pmu_event_filter_test.c | 31 +++++++++++++-------
 1 file changed, 19 insertions(+), 12 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index c0521fc9e8f6..4e87eea6986b 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -192,19 +192,22 @@ static struct kvm_pmu_event_filter *alloc_pmu_event_filter(uint32_t nevents)
 	return f;
 }
 
-
 static struct kvm_pmu_event_filter *
 create_pmu_event_filter(const uint64_t event_list[], uint32_t nevents,
-			uint32_t action, uint32_t flags)
+			uint32_t action, uint32_t flags,
+			uint32_t fixed_counter_bitmap)
 {
 	struct kvm_pmu_event_filter *f;
 	int i;
 
 	f = alloc_pmu_event_filter(nevents);
 	f->action = action;
+	f->fixed_counter_bitmap = fixed_counter_bitmap;
 	f->flags = flags;
-	for (i = 0; i < nevents; i++)
-		f->events[i] = event_list[i];
+	if (f->nevents) {
+		for (i = 0; i < f->nevents; i++)
+			f->events[i] = event_list[i];
+	}
 
 	return f;
 }
@@ -213,7 +216,7 @@ static struct kvm_pmu_event_filter *event_filter(uint32_t action)
 {
 	return create_pmu_event_filter(event_list,
 				       ARRAY_SIZE(event_list),
-				       action, 0);
+				       action, 0, 0);
 }
 
 /*
@@ -260,7 +263,7 @@ static void test_amd_deny_list(struct kvm_vcpu *vcpu)
 	struct kvm_pmu_event_filter *f;
 	uint64_t count;
 
-	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0);
+	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0, 0);
 	count = test_with_filter(vcpu, f);
 	free(f);
@@ -544,7 +547,7 @@ static struct perf_counter run_masked_events_test(struct kvm_vcpu *vcpu,
 	f = create_pmu_event_filter(masked_events, nmasked_events,
 				    KVM_PMU_EVENT_ALLOW,
-				    KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+				    KVM_PMU_EVENT_FLAG_MASKED_EVENTS, 0);
 	r.raw = test_with_filter(vcpu, f);
 	free(f);
@@ -726,12 +729,14 @@ static void test_masked_events(struct kvm_vcpu *vcpu)
 }
 
 static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events,
-			   uint32_t nevents, uint32_t flags)
+			   uint32_t nevents, uint32_t flags, uint32_t action,
+			   uint32_t fixed_counter_bitmap)
 {
 	struct kvm_pmu_event_filter *f;
 	int r;
 
-	f = create_pmu_event_filter(events, nevents, KVM_PMU_EVENT_ALLOW, flags);
+	f = create_pmu_event_filter(events, nevents, action, flags,
+				    fixed_counter_bitmap);
 	r = __vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, f);
 	free(f);
@@ -747,14 +752,16 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
	 * Unfortunately having invalid bits set in event data is expected to
	 * pass when flags == 0 (bits other than eventsel+umask).
	 */
-	r = run_filter_test(vcpu, &e, 1, 0);
+	r = run_filter_test(vcpu, &e, 1, 0, KVM_PMU_EVENT_ALLOW, 0);
	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
 
-	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS,
+			    KVM_PMU_EVENT_ALLOW, 0);
	TEST_ASSERT(r != 0, "Invalid PMU Event Filter is expected to fail");
 
	e = KVM_PMU_ENCODE_MASKED_ENTRY(0xff, 0xff, 0xff, 0xf);
-	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS,
+			    KVM_PMU_EVENT_ALLOW, 0);
	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
 }

From patchwork Fri Apr 14 11:00:52 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13211268
From: Jinrong Liang
To: Sean Christopherson
Cc: Like Xu, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Aaron Lewis,
 David Matlack, Vishal Annapurve, Wanpeng Li, Jinrong Liang,
 linux-kselftest@vger.kernel.org, linux-doc@vger.kernel.org,
 kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/7] KVM: selftests: Test unavailable event filters are rejected
Date: Fri, 14 Apr 2023 19:00:52 +0800
Message-Id: <20230414110056.19665-4-cloudliang@tencent.com>
In-Reply-To: <20230414110056.19665-1-cloudliang@tencent.com>
References: <20230414110056.19665-1-cloudliang@tencent.com>
X-Mailer: git-send-email 2.39.2
X-Mailing-List: linux-kselftest@vger.kernel.org

From: Jinrong Liang

Add test cases for unsupported inputs to the PMU event filter.
Specifically, test that unsupported "action", "flags", and "nevents"
values all return an error, as the filter currently rejects them.
Additionally, test that setting non-existent fixed counters in the
fixed bitmap does not fail.

This change aims to improve the testing of the PMU filter and ensure
that it functions correctly in all supported use cases. The patch has
been tested and verified to function correctly.

Signed-off-by: Jinrong Liang
---
 .../kvm/x86_64/pmu_event_filter_test.c | 52 +++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 4e87eea6986b..a3d5c30ce914 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -27,6 +27,10 @@
 #define ARCH_PERFMON_BRANCHES_RETIRED 5
 
 #define NUM_BRANCHES 42
+#define FIXED_CTR_NUM_MASK GENMASK_ULL(4, 0)
+#define PMU_EVENT_FILTER_INVALID_ACTION (KVM_PMU_EVENT_DENY + 1)
+#define PMU_EVENT_FILTER_INVALID_FLAGS (KVM_PMU_EVENT_FLAG_MASKED_EVENTS + 1)
+#define PMU_EVENT_FILTER_INVALID_NEVENTS (MAX_FILTER_EVENTS + 1)
 
 /*
  * This is how the event selector and unit mask are stored in an AMD
@@ -743,10 +747,22 @@ static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events,
 	return r;
 }
 
+static uint8_t get_kvm_supported_fixed_num(void)
+{
+	const struct kvm_cpuid_entry2 *kvm_entry;
+
+	if (host_cpu_is_amd)
+		return 0;
+
+	kvm_entry = get_cpuid_entry(kvm_get_supported_cpuid(), 0xa, 0);
+	return kvm_entry->edx & FIXED_CTR_NUM_MASK;
+}
+
 static void test_filter_ioctl(struct kvm_vcpu *vcpu)
 {
 	uint64_t e = ~0ul;
 	int r;
+	uint8_t max_fixed_num = get_kvm_supported_fixed_num();
 
 	/*
	 * Unfortunately having invalid bits set in event data is expected to
@@ -763,6 +779,42 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
 	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS,
 			    KVM_PMU_EVENT_ALLOW, 0);
 	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
+
+	/*
+	 * Test input of unsupported "action" values should return an error.
+	 * The only values currently supported are 0 or 1.
+	 */
+	r = run_filter_test(vcpu, 0, 0, 0, PMU_EVENT_FILTER_INVALID_ACTION, 0);
+	TEST_ASSERT(r != 0, "Set invalid action is expected to fail.");
+
+	/*
+	 * Test input of unsupported "flags" values should return an error.
+	 * The only values currently supported are 0 or 1.
+	 */
+	r = run_filter_test(vcpu, 0, 0, PMU_EVENT_FILTER_INVALID_FLAGS,
+			    KVM_PMU_EVENT_ALLOW, 0);
+	TEST_ASSERT(r != 0, "Set invalid flags is expected to fail.");
+
+	/*
+	 * Test input of unsupported "nevents" values should return an error.
+	 * The only values currently supported are those less than or equal to
+	 * MAX_FILTER_EVENTS.
+	 */
+	r = run_filter_test(vcpu, event_list, PMU_EVENT_FILTER_INVALID_NEVENTS,
+			    0, KVM_PMU_EVENT_ALLOW, 0);
+	TEST_ASSERT(r != 0,
+		    "Setting a PMU event filter that exceeds the maximum supported nevents should fail");
+
+	/*
+	 * In this case, setting non-existent fixed counters in the fixed
+	 * bitmap doesn't fail.
+	 */
+	if (max_fixed_num) {
+		r = run_filter_test(vcpu, 0, 0, 0, KVM_PMU_EVENT_ALLOW,
+				    ~GENMASK_ULL(max_fixed_num, 0));
+		TEST_ASSERT(r == 0,
+			    "Setting invalid or non-existent fixed counters in the fixed bitmap should not fail.");
+	}
 }
 
 int main(int argc, char *argv[])

From patchwork Fri Apr 14 11:00:53 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13211269
From: Jinrong Liang
To: Sean Christopherson
Cc: Like Xu, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Aaron Lewis,
 David Matlack, Vishal Annapurve, Wanpeng Li, Jinrong Liang,
 linux-kselftest@vger.kernel.org, linux-doc@vger.kernel.org,
 kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 4/7] KVM: x86/pmu: Add documentation for fixed ctr on PMU filter
Date: Fri, 14 Apr 2023 19:00:53 +0800
Message-Id: <20230414110056.19665-5-cloudliang@tencent.com>
In-Reply-To: <20230414110056.19665-1-cloudliang@tencent.com>
References: <20230414110056.19665-1-cloudliang@tencent.com>
X-Mailer: git-send-email 2.39.2
X-Mailing-List: linux-kselftest@vger.kernel.org

From: Jinrong Liang

Update the documentation for the KVM_SET_PMU_EVENT_FILTER ioctl to
include a detailed description of how fixed performance events are
handled by the PMU filter. The action and fixed_counter_bitmap members
of the filter determine whether fixed performance events can be
programmed by the guest. This information is helpful for correctly
configuring the fixed_counter_bitmap and action fields to filter fixed
performance events.

Suggested-by: Like Xu
Signed-off-by: Jinrong Liang
---
 Documentation/virt/kvm/api.rst | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index a69e91088d76..036f5b1a39af 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -5122,6 +5122,27 @@ Valid values for 'action'::
 
   #define KVM_PMU_EVENT_ALLOW 0
   #define KVM_PMU_EVENT_DENY 1
 
+Via this API, KVM userspace can also control the behavior of the VM's fixed
+counters (if any) by configuring the "action" and "fixed_counter_bitmap" fields.
+
+Specifically, KVM follows the following pseudo-code when determining whether to
+allow the guest FixCtr[i] to count its pre-defined fixed event:
+
+  FixCtr[i]_is_allowed = (action == ALLOW) && (bitmap & BIT(i)) ||
+                         (action == DENY) && !(bitmap & BIT(i));
+  FixCtr[i]_is_denied = !FixCtr[i]_is_allowed;
+
+Note that once this API is called, the default zero value of the
+"fixed_counter_bitmap" field will implicitly affect all fixed counters, even if
+it's expected to be used only to control the events on generic counters.
+
+In addition, the pre-defined performance events on the fixed counters already
+have event_select and unit_mask values defined, which means userspace can also
+control fixed counters by configuring the "action" + "events" fields.
+
+When there is a contradiction between these two policies, the fixed performance
+counter will only follow the rule of the pseudo-code above.
+
 4.121 KVM_PPC_SVM_OFF
 ---------------------

From patchwork Fri Apr 14 11:00:54 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13211270
From: Jinrong Liang
To: Sean Christopherson
Cc: Like Xu, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Aaron Lewis,
 David Matlack, Vishal Annapurve, Wanpeng Li, Jinrong Liang,
 linux-kselftest@vger.kernel.org, linux-doc@vger.kernel.org,
 kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 5/7] KVM: selftests: Check if pmu_event_filter meets expectations on fixed ctrs
Date: Fri, 14 Apr 2023 19:00:54 +0800
Message-Id: <20230414110056.19665-6-cloudliang@tencent.com>
In-Reply-To: <20230414110056.19665-1-cloudliang@tencent.com>
References: <20230414110056.19665-1-cloudliang@tencent.com>
X-Mailer: git-send-email 2.39.2
X-Mailing-List: linux-kselftest@vger.kernel.org

From: Jinrong Liang

Add tests to cover that pmu_event_filter works as expected when it's
applied to fixed performance counters, including the case where no
fixed counters exist (e.g. an Intel guest with PMU version 1, or an
AMD guest).
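The allow/deny rule these tests exercise is the pseudo-code documented for KVM_SET_PMU_EVENT_FILTER earlier in this series. A minimal standalone sketch of that predicate (the function name mirrors the selftest's fixed_ctr_is_allowed(); the surrounding harness is illustrative only, not part of the patch):

```c
#include <stdbool.h>
#include <stdint.h>

#define KVM_PMU_EVENT_ALLOW 0
#define KVM_PMU_EVENT_DENY  1

/*
 * FixCtr[idx] may count iff (action == ALLOW and bit idx is set in the
 * fixed_counter_bitmap) or (action == DENY and bit idx is clear).
 */
bool fixed_ctr_is_allowed(uint8_t idx, uint32_t action, uint32_t bitmap)
{
	return (action == KVM_PMU_EVENT_ALLOW && (bitmap & (1u << idx))) ||
	       (action == KVM_PMU_EVENT_DENY && !(bitmap & (1u << idx)));
}
```

The tests below iterate every (action, bitmap) pair for each fixed counter index and assert the observed count matches this predicate.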
Signed-off-by: Jinrong Liang
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 109 ++++++++++++++++++
 1 file changed, 109 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index a3d5c30ce914..0f54c53d7fff 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -31,6 +31,7 @@
 #define PMU_EVENT_FILTER_INVALID_ACTION (KVM_PMU_EVENT_DENY + 1)
 #define PMU_EVENT_FILTER_INVALID_FLAGS (KVM_PMU_EVENT_FLAG_MASKED_EVENTS + 1)
 #define PMU_EVENT_FILTER_INVALID_NEVENTS (MAX_FILTER_EVENTS + 1)
+#define INTEL_PMC_IDX_FIXED 32
 
 /*
  * This is how the event selector and unit mask are stored in an AMD
@@ -817,6 +818,113 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
 	}
 }
 
+static void intel_guest_run_fixed_counters(uint8_t fixed_ctr_idx)
+{
+	for (;;) {
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+		wrmsr(MSR_CORE_PERF_FIXED_CTR0 + fixed_ctr_idx, 0);
+
+		/* Only OS_EN bit is enabled for fixed counter[idx]. */
+		wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * fixed_ctr_idx));
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL,
+		      BIT_ULL(INTEL_PMC_IDX_FIXED + fixed_ctr_idx));
+		__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+		GUEST_SYNC(rdmsr(MSR_CORE_PERF_FIXED_CTR0 + fixed_ctr_idx));
+	}
+}
+
+static struct kvm_vcpu *new_vcpu(void *guest_code)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+
+	vm = vm_create_with_one_vcpu(&vcpu, guest_code);
+	vm_init_descriptor_tables(vm);
+	vcpu_init_descriptor_tables(vcpu);
+
+	return vcpu;
+}
+
+static void free_vcpu(struct kvm_vcpu *vcpu)
+{
+	kvm_vm_free(vcpu->vm);
+}
+
+static uint64_t test_fixed_ctr_without_filter(struct kvm_vcpu *vcpu)
+{
+	return run_vcpu_to_sync(vcpu);
+}
+
+static const uint32_t actions[] = {
+	KVM_PMU_EVENT_ALLOW,
+	KVM_PMU_EVENT_DENY,
+};
+
+static uint64_t test_fixed_ctr_with_filter(struct kvm_vcpu *vcpu,
+					   uint32_t action,
+					   uint32_t bitmap)
+{
+	struct kvm_pmu_event_filter *f;
+	uint64_t r;
+
+	f = create_pmu_event_filter(0, 0, action, 0, bitmap);
+	r = test_with_filter(vcpu, f);
+	free(f);
+	return r;
+}
+
+static bool fixed_ctr_is_allowed(uint8_t idx, uint32_t action, uint32_t bitmap)
+{
+	return (action == KVM_PMU_EVENT_ALLOW && (bitmap & BIT_ULL(idx))) ||
+	       (action == KVM_PMU_EVENT_DENY && !(bitmap & BIT_ULL(idx)));
+}
+
+static void test_fixed_ctr_action_and_bitmap(struct kvm_vcpu *vcpu,
+					     uint8_t fixed_ctr_idx,
+					     uint8_t max_fixed_num)
+{
+	uint8_t i;
+	uint32_t bitmap;
+	uint64_t count;
+	bool expected;
+
+	/*
+	 * Check that the fixed performance counter can count normally when
+	 * KVM userspace doesn't set any PMU filter.
+	 */
+	TEST_ASSERT(max_fixed_num && test_fixed_ctr_without_filter(vcpu),
+		    "Fixed counter does not exist or does not work as expected.");
+
+	for (i = 0; i < ARRAY_SIZE(actions); i++) {
+		for (bitmap = 0; bitmap < BIT_ULL(max_fixed_num); bitmap++) {
+			expected = fixed_ctr_is_allowed(fixed_ctr_idx, actions[i], bitmap);
+			count = test_fixed_ctr_with_filter(vcpu, actions[i], bitmap);
+
+			TEST_ASSERT(expected == !!count,
+				    "Fixed event filter does not work as expected.");
+		}
+	}
+}
+
+static void test_fixed_counter_bitmap(void)
+{
+	struct kvm_vcpu *vcpu;
+	uint8_t idx, max_fixed_num = get_kvm_supported_fixed_num();
+
+	/*
+	 * Check that pmu_event_filter works as expected when it's applied to
+	 * fixed performance counters.
+	 */
+	for (idx = 0; idx < max_fixed_num; idx++) {
+		vcpu = new_vcpu(intel_guest_run_fixed_counters);
+		vcpu_args_set(vcpu, 1, idx);
+		test_fixed_ctr_action_and_bitmap(vcpu, idx, max_fixed_num);
+		free_vcpu(vcpu);
+	}
+}
+
 int main(int argc, char *argv[])
 {
 	void (*guest_code)(void);
@@ -860,6 +968,7 @@ int main(int argc, char *argv[])
 	kvm_vm_free(vm);
 
 	test_pmu_config_disable(guest_code);
+	test_fixed_counter_bitmap();
 
 	return 0;
 }

From patchwork Fri Apr 14 11:00:55 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13211271
From: Jinrong Liang
To: Sean Christopherson
Cc: Like Xu, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Aaron Lewis, David Matlack, Vishal Annapurve, Wanpeng Li, Jinrong Liang, linux-kselftest@vger.kernel.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 6/7] KVM: selftests: Check gp event filters without affecting fixed event filters
Date: Fri, 14 Apr 2023 19:00:55 +0800
Message-Id: <20230414110056.19665-7-cloudliang@tencent.com>
In-Reply-To: <20230414110056.19665-1-cloudliang@tencent.com>
References: <20230414110056.19665-1-cloudliang@tencent.com>

Add a test to ensure that setting both generic and fixed performance event filters does not affect the consistency of the fixed counter filter's behavior in KVM, i.e. that the fixed counter filter works as expected even when generic performance event filters are also set.
Signed-off-by: Jinrong Liang
---
 .../selftests/kvm/x86_64/pmu_event_filter_test.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 0f54c53d7fff..9be4c6f8fb7e 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -889,6 +889,7 @@ static void test_fixed_ctr_action_and_bitmap(struct kvm_vcpu *vcpu,
 	uint32_t bitmap;
 	uint64_t count;
 	bool expected;
+	struct kvm_pmu_event_filter *f;
 
 	/*
 	 * Check that the fixed performance counter can count normally when
@@ -902,6 +903,19 @@ static void test_fixed_ctr_action_and_bitmap(struct kvm_vcpu *vcpu,
 			expected = fixed_ctr_is_allowed(fixed_ctr_idx, actions[i], bitmap);
 			count = test_fixed_ctr_with_filter(vcpu, actions[i], bitmap);
 
+			TEST_ASSERT(expected == !!count,
+				    "Fixed event filter does not work as expected.");
+
+			/*
+			 * Check that setting both events[] and fixed_counter_bitmap
+			 * does not affect the consistency of the fixed ctrs'
+			 * behaviour.
+			 *
+			 * Note, the fixed_counter_bitmap rule takes priority.
+			 */
+			f = event_filter(actions[i]);
+			f->fixed_counter_bitmap = bitmap;
+			count = test_with_filter(vcpu, f);
+
 			TEST_ASSERT(expected == !!count,
 				    "Fixed event filter does not work as expected.");
 		}

From patchwork Fri Apr 14 11:00:56 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13211272
From: Jinrong Liang
To: Sean Christopherson
Cc: Like Xu, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Aaron Lewis, David Matlack, Vishal Annapurve, Wanpeng Li, Jinrong Liang, linux-kselftest@vger.kernel.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 7/7] KVM: selftests: Test pmu event filter with incompatible kvm_pmu_event_filter
Date: Fri, 14 Apr 2023 19:00:56 +0800
Message-Id: <20230414110056.19665-8-cloudliang@tencent.com>
In-Reply-To: <20230414110056.19665-1-cloudliang@tencent.com>
References: <20230414110056.19665-1-cloudliang@tencent.com>
Add a test to verify the behavior of the PMU event filter when an incomplete kvm_pmu_event_filter structure is used. Running it ensures that the PMU event filter correctly handles incomplete structures and does not allow events to be counted when they should not be.

Signed-off-by: Jinrong Liang
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 23 +++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 9be4c6f8fb7e..a6b6e0d086ae 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -881,6 +881,24 @@ static bool fixed_ctr_is_allowed(uint8_t idx, uint32_t action, uint32_t bitmap)
 		(action == KVM_PMU_EVENT_DENY && !(bitmap & BIT_ULL(idx)));
 }
 
+struct incompatible_pmu_event_filter {
+	__u32 action;
+	__u32 nevents;
+	__u32 fixed_counter_bitmap;
+};
+
+static uint64_t test_incompatible_filter(struct kvm_vcpu *vcpu, uint32_t action,
+					 uint32_t bitmap)
+{
+	struct incompatible_pmu_event_filter err_f;
+
+	err_f.action = action;
+	err_f.fixed_counter_bitmap = bitmap;
+	ioctl((vcpu->vm)->fd, KVM_SET_PMU_EVENT_FILTER, &err_f.action);
+
+	return run_vcpu_to_sync(vcpu);
+}
+
 static void test_fixed_ctr_action_and_bitmap(struct kvm_vcpu *vcpu,
 					     uint8_t fixed_ctr_idx,
 					     uint8_t max_fixed_num)
@@ -918,6 +936,11 @@ static void test_fixed_ctr_action_and_bitmap(struct kvm_vcpu *vcpu,
 
 			TEST_ASSERT(expected == !!count,
 				    "Fixed event filter does not work as expected.");
+
+			/* Test that an incompatible event filter works as expected. */
+			count = test_incompatible_filter(vcpu, actions[i], bitmap);
+			TEST_ASSERT(expected == !!count,
+				    "Incompatible filter does not work as expected.");
 		}
 	}
 }