From patchwork Thu Apr 20 10:46:16 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13218474
From: Jinrong Liang
To: Sean Christopherson
Cc: Like Xu, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Aaron Lewis,
    David Matlack, Vishal Annapurve, Wanpeng Li, Bagas Sanjaya, Jinrong Liang,
    linux-kselftest@vger.kernel.org, linux-doc@vger.kernel.org,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/7] KVM: selftests: Replace int with uint32_t for nevents
Date: Thu, 20 Apr 2023 18:46:16 +0800
Message-Id: <20230420104622.12504-2-ljrcore@126.com>
In-Reply-To: <20230420104622.12504-1-ljrcore@126.com>
References: <20230420104622.12504-1-ljrcore@126.com>
X-Mailing-List: kvm@vger.kernel.org

From: Jinrong Liang

The nevents field in kvm_pmu_event_filter is defined as __u32, so it only
accepts non-negative values within a fixed range. Replace int with uint32_t
for the selftest's nevents parameters so the types stay consistent with the
uAPI and the code reads unambiguously. The change has been tested and
verified to work with all relevant code.

Signed-off-by: Jinrong Liang
---
 .../selftests/kvm/x86_64/pmu_event_filter_test.c   | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 1f60dfae69e0..c0521fc9e8f6 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -194,7 +194,7 @@ static struct kvm_pmu_event_filter *alloc_pmu_event_filter(uint32_t nevents)
 
 static struct kvm_pmu_event_filter *
-create_pmu_event_filter(const uint64_t event_list[], int nevents,
+create_pmu_event_filter(const uint64_t event_list[], uint32_t nevents,
 			uint32_t action, uint32_t flags)
 {
 	struct kvm_pmu_event_filter *f;
@@ -648,7 +648,7 @@ const struct masked_events_test test_cases[] = {
 };
 
 static int append_test_events(const struct masked_events_test *test,
-			      uint64_t *events, int nevents)
+			      uint64_t *events, uint32_t nevents)
 {
 	const uint64_t *evts;
 	int i;
@@ -670,7 +670,7 @@ static bool bool_eq(bool a, bool b)
 }
 
 static void run_masked_events_tests(struct kvm_vcpu *vcpu, uint64_t *events,
-				    int nevents)
+				    uint32_t nevents)
 {
 	int ntests = ARRAY_SIZE(test_cases);
 	struct perf_counter c;
@@ -695,7 +695,7 @@ static void run_masked_events_tests(struct kvm_vcpu *vcpu, uint64_t *events,
 	}
 }
 
-static void add_dummy_events(uint64_t *events, int nevents)
+static void add_dummy_events(uint64_t *events, uint32_t nevents)
 {
 	int i;
 
@@ -714,7 +714,7 @@ static void add_dummy_events(uint64_t *events, int nevents)
 
 static void test_masked_events(struct kvm_vcpu *vcpu)
 {
-	int nevents = MAX_FILTER_EVENTS - MAX_TEST_EVENTS;
+	uint32_t nevents = MAX_FILTER_EVENTS - MAX_TEST_EVENTS;
 	uint64_t events[MAX_FILTER_EVENTS];
 
 	/* Run the test cases against a sparse PMU event filter. */
@@ -726,7 +726,7 @@ static void test_masked_events(struct kvm_vcpu *vcpu)
 }
 
 static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events,
-			   int nevents, uint32_t flags)
+			   uint32_t nevents, uint32_t flags)
 {
 	struct kvm_pmu_event_filter *f;
 	int r;
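
For context, the uAPI structure these helpers populate is laid out roughly as
follows (a sketch based on include/uapi/linux/kvm.h of this era, shown for
reference only and not taken from the patch). The __u32 type of nevents is
what motivates the type change above:

  struct kvm_pmu_event_filter {
          __u32 action;                 /* KVM_PMU_EVENT_ALLOW or KVM_PMU_EVENT_DENY */
          __u32 nevents;                /* number of entries in events[] */
          __u32 fixed_counter_bitmap;   /* one bit per fixed counter, used later in this series */
          __u32 flags;                  /* e.g. KVM_PMU_EVENT_FLAG_MASKED_EVENTS */
          __u32 pad[4];
          __u64 events[];               /* eventsel+umask (or masked event) encodings */
  };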
From patchwork Thu Apr 20 10:46:17 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13218475
From: Jinrong Liang
To: Sean Christopherson
Cc: Like Xu, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Aaron Lewis,
    David Matlack, Vishal Annapurve, Wanpeng Li, Bagas Sanjaya, Jinrong Liang,
    linux-kselftest@vger.kernel.org, linux-doc@vger.kernel.org,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/7] KVM: selftests: Apply create_pmu_event_filter() to fixed ctrs
Date: Thu, 20 Apr 2023 18:46:17 +0800
Message-Id: <20230420104622.12504-3-ljrcore@126.com>
In-Reply-To: <20230420104622.12504-1-ljrcore@126.com>
References: <20230420104622.12504-1-ljrcore@126.com>
X-Mailing-List: kvm@vger.kernel.org

From: Jinrong Liang

Add a fixed_counter_bitmap parameter to create_pmu_event_filter() so that the
same helper can also be used to control the guest's fixed counters.

No functional change intended.

Signed-off-by: Jinrong Liang
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 31 ++++++++++++-------
 1 file changed, 19 insertions(+), 12 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index c0521fc9e8f6..4e87eea6986b 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -192,19 +192,22 @@ static struct kvm_pmu_event_filter *alloc_pmu_event_filter(uint32_t nevents)
 	return f;
 }
 
-
 static struct kvm_pmu_event_filter *
 create_pmu_event_filter(const uint64_t event_list[], uint32_t nevents,
-			uint32_t action, uint32_t flags)
+			uint32_t action, uint32_t flags,
+			uint32_t fixed_counter_bitmap)
 {
 	struct kvm_pmu_event_filter *f;
 	int i;
 
 	f = alloc_pmu_event_filter(nevents);
 	f->action = action;
+	f->fixed_counter_bitmap = fixed_counter_bitmap;
 	f->flags = flags;
-	for (i = 0; i < nevents; i++)
-		f->events[i] = event_list[i];
+	if (f->nevents) {
+		for (i = 0; i < f->nevents; i++)
+			f->events[i] = event_list[i];
+	}
 
 	return f;
 }
@@ -213,7 +216,7 @@ static struct kvm_pmu_event_filter *event_filter(uint32_t action)
 {
 	return create_pmu_event_filter(event_list,
 				       ARRAY_SIZE(event_list),
-				       action, 0);
+				       action, 0, 0);
 }
 
 /*
@@ -260,7 +263,7 @@ static void test_amd_deny_list(struct kvm_vcpu *vcpu)
 	struct kvm_pmu_event_filter *f;
 	uint64_t count;
 
-	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0);
+	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0, 0);
 	count = test_with_filter(vcpu, f);
 
 	free(f);
@@ -544,7 +547,7 @@ static struct perf_counter run_masked_events_test(struct kvm_vcpu *vcpu,
 
 	f = create_pmu_event_filter(masked_events, nmasked_events,
 				    KVM_PMU_EVENT_ALLOW,
-				    KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+				    KVM_PMU_EVENT_FLAG_MASKED_EVENTS, 0);
 	r.raw = test_with_filter(vcpu, f);
 	free(f);
 
@@ -726,12 +729,14 @@ static void test_masked_events(struct kvm_vcpu *vcpu)
 }
 
 static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events,
-			   uint32_t nevents, uint32_t flags)
+			   uint32_t nevents, uint32_t flags, uint32_t action,
+			   uint32_t fixed_counter_bitmap)
 {
 	struct kvm_pmu_event_filter *f;
 	int r;
 
-	f = create_pmu_event_filter(events, nevents, KVM_PMU_EVENT_ALLOW, flags);
+	f = create_pmu_event_filter(events, nevents, action, flags,
+				    fixed_counter_bitmap);
 	r = __vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, f);
 	free(f);
 
@@ -747,14 +752,16 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
 	 * Unfortunately having invalid bits set in event data is expected to
 	 * pass when flags == 0 (bits other than eventsel+umask).
 	 */
-	r = run_filter_test(vcpu, &e, 1, 0);
+	r = run_filter_test(vcpu, &e, 1, 0, KVM_PMU_EVENT_ALLOW, 0);
 	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
 
-	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS,
+			    KVM_PMU_EVENT_ALLOW, 0);
 	TEST_ASSERT(r != 0, "Invalid PMU Event Filter is expected to fail");
 
 	e = KVM_PMU_ENCODE_MASKED_ENTRY(0xff, 0xff, 0xff, 0xf);
-	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS,
+			    KVM_PMU_EVENT_ALLOW, 0);
 	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
 }
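
As a usage illustration, a hypothetical caller (not part of the patch) could
program only the fixed counter bitmap through the extended helper, reusing
this file's existing helpers:

  /* Allow-list with no gp events; only fixed counter 0 is allowed. */
  struct kvm_pmu_event_filter *f;
  int r;

  f = create_pmu_event_filter(NULL, 0, KVM_PMU_EVENT_ALLOW, 0, BIT_ULL(0));
  r = __vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, f);
  free(f);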
From patchwork Thu Apr 20 10:46:18 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13218476
From: Jinrong Liang
To: Sean Christopherson
Cc: Like Xu, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Aaron Lewis,
    David Matlack, Vishal Annapurve, Wanpeng Li, Bagas Sanjaya, Jinrong Liang,
    linux-kselftest@vger.kernel.org, linux-doc@vger.kernel.org,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/7] KVM: selftests: Test unavailable event filters are rejected
Date: Thu, 20 Apr 2023 18:46:18 +0800
Message-Id: <20230420104622.12504-4-ljrcore@126.com>
In-Reply-To: <20230420104622.12504-1-ljrcore@126.com>
References: <20230420104622.12504-1-ljrcore@126.com>
X-Mailing-List: kvm@vger.kernel.org

From: Jinrong Liang

Add test cases for unsupported PMU event filter input. Specifically, test
that unsupported "action" values, unsupported "flags" values and unsupported
"nevents" values all return an error, as the filter does not currently
support them. Additionally, test that setting non-existent fixed counters in
the fixed bitmap does not fail.

This improves the coverage of the PMU event filter and helps ensure that it
functions correctly in all supported use cases. The patch has been tested
and verified to function correctly.

Signed-off-by: Jinrong Liang
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 52 +++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 4e87eea6986b..a3d5c30ce914 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -27,6 +27,10 @@
 #define ARCH_PERFMON_BRANCHES_RETIRED		5
 
 #define NUM_BRANCHES 42
+#define FIXED_CTR_NUM_MASK			GENMASK_ULL(4, 0)
+#define PMU_EVENT_FILTER_INVALID_ACTION		(KVM_PMU_EVENT_DENY + 1)
+#define PMU_EVENT_FILTER_INVALID_FLAGS		(KVM_PMU_EVENT_FLAG_MASKED_EVENTS + 1)
+#define PMU_EVENT_FILTER_INVALID_NEVENTS	(MAX_FILTER_EVENTS + 1)
 
 /*
  * This is how the event selector and unit mask are stored in an AMD
@@ -743,10 +747,22 @@ static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events,
 	return r;
 }
 
+static uint8_t get_kvm_supported_fixed_num(void)
+{
+	const struct kvm_cpuid_entry2 *kvm_entry;
+
+	if (host_cpu_is_amd)
+		return 0;
+
+	kvm_entry = get_cpuid_entry(kvm_get_supported_cpuid(), 0xa, 0);
+	return kvm_entry->edx & FIXED_CTR_NUM_MASK;
+}
+
 static void test_filter_ioctl(struct kvm_vcpu *vcpu)
 {
 	uint64_t e = ~0ul;
 	int r;
+	uint8_t max_fixed_num = get_kvm_supported_fixed_num();
 
 	/*
 	 * Unfortunately having invalid bits set in event data is expected to
@@ -763,6 +779,42 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
 	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS,
 			    KVM_PMU_EVENT_ALLOW, 0);
 	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
+
+	/*
+	 * Test that input of unsupported "action" values returns an error.
+	 * The only values currently supported are 0 or 1.
+	 */
+	r = run_filter_test(vcpu, 0, 0, 0, PMU_EVENT_FILTER_INVALID_ACTION, 0);
+	TEST_ASSERT(r != 0, "Setting an invalid action is expected to fail.");
+
+	/*
+	 * Test that input of unsupported "flags" values returns an error.
+	 * The only values currently supported are 0 or 1.
+	 */
+	r = run_filter_test(vcpu, 0, 0, PMU_EVENT_FILTER_INVALID_FLAGS,
+			    KVM_PMU_EVENT_ALLOW, 0);
+	TEST_ASSERT(r != 0, "Setting invalid flags is expected to fail.");
+
+	/*
+	 * Test that input of unsupported "nevents" values returns an error.
+	 * The only values currently supported are those less than or equal to
+	 * MAX_FILTER_EVENTS.
+	 */
+	r = run_filter_test(vcpu, event_list, PMU_EVENT_FILTER_INVALID_NEVENTS,
+			    0, KVM_PMU_EVENT_ALLOW, 0);
+	TEST_ASSERT(r != 0,
+		    "Setting PMU event filters that exceed the maximum supported value should fail");
+
+	/*
+	 * In this case, setting non-existent fixed counters in the fixed
+	 * bitmap doesn't fail.
+	 */
+	if (max_fixed_num) {
+		r = run_filter_test(vcpu, 0, 0, 0, KVM_PMU_EVENT_ALLOW,
+				    ~GENMASK_ULL(max_fixed_num, 0));
+		TEST_ASSERT(r == 0,
+			    "Setting invalid or non-existent fixed counters in the fixed bitmap should not fail.");
+	}
 }
 
 int main(int argc, char *argv[])
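
get_kvm_supported_fixed_num() relies on CPUID.0AH:EDX[4:0] reporting the
number of fixed-function counters (hence FIXED_CTR_NUM_MASK), and returns 0
on AMD, which has no fixed counters. A worked example of the last check,
using a hypothetical value of three fixed counters:

  uint8_t max_fixed_num = 3;    /* assumed CPUID.0AH:EDX[4:0] value, for illustration */

  /*
   * ~GENMASK_ULL(3, 0) clears bits 0..3 and sets every higher bit, i.e. the
   * bitmap selects only fixed counters the vCPU does not have, so setting
   * the filter is still expected to succeed (r == 0).
   */
  uint32_t bitmap = ~GENMASK_ULL(max_fixed_num, 0);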
From patchwork Thu Apr 20 10:46:19 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13218477
From: Jinrong Liang
To: Sean Christopherson
Cc: Like Xu, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Aaron Lewis,
    David Matlack, Vishal Annapurve, Wanpeng Li, Bagas Sanjaya, Jinrong Liang,
    linux-kselftest@vger.kernel.org, linux-doc@vger.kernel.org,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 4/7] KVM: x86/pmu: Add documentation for fixed ctr on PMU filter
Date: Thu, 20 Apr 2023 18:46:19 +0800
Message-Id: <20230420104622.12504-5-ljrcore@126.com>
In-Reply-To: <20230420104622.12504-1-ljrcore@126.com>
References: <20230420104622.12504-1-ljrcore@126.com>
X-Mailing-List: kvm@vger.kernel.org

From: Jinrong Liang

Update the documentation for the KVM_SET_PMU_EVENT_FILTER ioctl with a
detailed description of how fixed performance events are handled by the pmu
filter. The action and fixed_counter_bitmap members of the pmu filter
determine whether fixed performance events can be programmed by the guest.
This information helps userspace configure the fixed_counter_bitmap and
action fields correctly to filter fixed performance events.

Suggested-by: Like Xu
Reported-by: kernel test robot
Link: https://lore.kernel.org/oe-kbuild-all/202304150850.rx4UDDsB-lkp@intel.com
Signed-off-by: Jinrong Liang
---
 Documentation/virt/kvm/api.rst | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index a69e91088d76..b5836767e0e7 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -5122,6 +5122,27 @@ Valid values for 'action'::
 
   #define KVM_PMU_EVENT_ALLOW 0
   #define KVM_PMU_EVENT_DENY 1
 
+Via this API, KVM userspace can also control the behavior of the VM's fixed
+counters (if any) by configuring the "action" and "fixed_counter_bitmap" fields.
+
+Specifically, KVM follows the following pseudo-code when determining whether to
+allow the guest FixCtr[i] to count its pre-defined fixed event::
+
+  FixCtr[i]_is_allowed = (action == ALLOW) && (bitmap & BIT(i)) ||
+	(action == DENY) && !(bitmap & BIT(i));
+  FixCtr[i]_is_denied = !FixCtr[i]_is_allowed;
+
+Note that once this API is called, the default zero value of the
+"fixed_counter_bitmap" field will implicitly affect all fixed counters, even
+if it is expected to be used only to control the events on generic counters.
+
+In addition, the pre-defined performance events on the fixed counters already
+have event_select and unit_mask values defined, which means userspace can also
+control fixed counters by configuring the "action" + "events" fields.
+
+When there is a contradiction between these two policies, the fixed performance
+counter will only follow the rule of the pseudo-code above.
+
 4.121 KVM_PPC_SVM_OFF
 ---------------------
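
A minimal userspace sketch of the documented behaviour (vm_fd is assumed to
be an open KVM VM file descriptor; the values are illustrative and not part
of the patch):

  #include <linux/kvm.h>
  #include <stdio.h>
  #include <sys/ioctl.h>

  static void deny_fixed_counter0(int vm_fd)
  {
          /*
           * With action == DENY, a set bit in fixed_counter_bitmap means the
           * corresponding fixed counter is denied; only FixCtr0 is denied
           * here and every other fixed counter keeps counting.
           */
          struct kvm_pmu_event_filter filter = {
                  .action = KVM_PMU_EVENT_DENY,
                  .nevents = 0,                   /* no gp events in the deny list */
                  .fixed_counter_bitmap = 1u << 0,
          };

          if (ioctl(vm_fd, KVM_SET_PMU_EVENT_FILTER, &filter))
                  perror("KVM_SET_PMU_EVENT_FILTER");
  }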
From patchwork Thu Apr 20 10:46:20 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13218480
From: Jinrong Liang
To: Sean Christopherson
Cc: Like Xu, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Aaron Lewis,
    David Matlack, Vishal Annapurve, Wanpeng Li, Bagas Sanjaya, Jinrong Liang,
    linux-kselftest@vger.kernel.org, linux-doc@vger.kernel.org,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 5/7] KVM: selftests: Check if pmu_event_filter meets expectations on fixed ctrs
Date: Thu, 20 Apr 2023 18:46:20 +0800
Message-Id: <20230420104622.12504-6-ljrcore@126.com>
In-Reply-To: <20230420104622.12504-1-ljrcore@126.com>
References: <20230420104622.12504-1-ljrcore@126.com>
X-Mailing-List: kvm@vger.kernel.org

From: Jinrong Liang

Add tests to cover that pmu_event_filter works as expected when it's applied
to fixed performance counters, even if no fixed counter exists (e.g. Intel
guest pmu version=1 or AMD guest).

Signed-off-by: Jinrong Liang
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 109 ++++++++++++++++++
 1 file changed, 109 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index a3d5c30ce914..0f54c53d7fff 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -31,6 +31,7 @@
 #define PMU_EVENT_FILTER_INVALID_ACTION		(KVM_PMU_EVENT_DENY + 1)
 #define PMU_EVENT_FILTER_INVALID_FLAGS		(KVM_PMU_EVENT_FLAG_MASKED_EVENTS + 1)
 #define PMU_EVENT_FILTER_INVALID_NEVENTS	(MAX_FILTER_EVENTS + 1)
+#define INTEL_PMC_IDX_FIXED			32
 
 /*
  * This is how the event selector and unit mask are stored in an AMD
@@ -817,6 +818,113 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
 	}
 }
 
+static void intel_guest_run_fixed_counters(uint8_t fixed_ctr_idx)
+{
+	for (;;) {
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+		wrmsr(MSR_CORE_PERF_FIXED_CTR0 + fixed_ctr_idx, 0);
+
+		/* Only OS_EN bit is enabled for fixed counter[idx]. */
+		wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * fixed_ctr_idx));
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL,
+		      BIT_ULL(INTEL_PMC_IDX_FIXED + fixed_ctr_idx));
+		__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+		GUEST_SYNC(rdmsr(MSR_CORE_PERF_FIXED_CTR0 + fixed_ctr_idx));
+	}
+}
+
+static struct kvm_vcpu *new_vcpu(void *guest_code)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+
+	vm = vm_create_with_one_vcpu(&vcpu, guest_code);
+	vm_init_descriptor_tables(vm);
+	vcpu_init_descriptor_tables(vcpu);
+
+	return vcpu;
+}
+
+static void free_vcpu(struct kvm_vcpu *vcpu)
+{
+	kvm_vm_free(vcpu->vm);
+}
+
+static uint64_t test_fixed_ctr_without_filter(struct kvm_vcpu *vcpu)
+{
+	return run_vcpu_to_sync(vcpu);
+}
+
+static const uint32_t actions[] = {
+	KVM_PMU_EVENT_ALLOW,
+	KVM_PMU_EVENT_DENY,
+};
+
+static uint64_t test_fixed_ctr_with_filter(struct kvm_vcpu *vcpu,
+					   uint32_t action,
+					   uint32_t bitmap)
+{
+	struct kvm_pmu_event_filter *f;
+	uint64_t r;
+
+	f = create_pmu_event_filter(0, 0, action, 0, bitmap);
+	r = test_with_filter(vcpu, f);
+	free(f);
+	return r;
+}
+
+static bool fixed_ctr_is_allowed(uint8_t idx, uint32_t action, uint32_t bitmap)
+{
+	return (action == KVM_PMU_EVENT_ALLOW && (bitmap & BIT_ULL(idx))) ||
+	       (action == KVM_PMU_EVENT_DENY && !(bitmap & BIT_ULL(idx)));
+}
+
+static void test_fixed_ctr_action_and_bitmap(struct kvm_vcpu *vcpu,
+					     uint8_t fixed_ctr_idx,
+					     uint8_t max_fixed_num)
+{
+	uint8_t i;
+	uint32_t bitmap;
+	uint64_t count;
+	bool expected;
+
+	/*
+	 * Check that the fixed performance counter counts normally when
+	 * KVM userspace doesn't set any pmu filter.
+	 */
+	TEST_ASSERT(max_fixed_num && test_fixed_ctr_without_filter(vcpu),
+		    "Fixed counter does not exist or does not work as expected.");
+
+	for (i = 0; i < ARRAY_SIZE(actions); i++) {
+		for (bitmap = 0; bitmap < BIT_ULL(max_fixed_num); bitmap++) {
+			expected = fixed_ctr_is_allowed(fixed_ctr_idx, actions[i], bitmap);
+			count = test_fixed_ctr_with_filter(vcpu, actions[i], bitmap);
+
+			TEST_ASSERT(expected == !!count,
+				    "Fixed event filter does not work as expected.");
+		}
+	}
+}
+
+static void test_fixed_counter_bitmap(void)
+{
+	struct kvm_vcpu *vcpu;
+	uint8_t idx, max_fixed_num = get_kvm_supported_fixed_num();
+
+	/*
+	 * Check that pmu_event_filter works as expected when it's applied to
+	 * fixed performance counters.
+	 */
+	for (idx = 0; idx < max_fixed_num; idx++) {
+		vcpu = new_vcpu(intel_guest_run_fixed_counters);
+		vcpu_args_set(vcpu, 1, idx);
+		test_fixed_ctr_action_and_bitmap(vcpu, idx, max_fixed_num);
+		free_vcpu(vcpu);
+	}
+}
+
 int main(int argc, char *argv[])
 {
 	void (*guest_code)(void);
@@ -860,6 +968,7 @@ int main(int argc, char *argv[])
 	kvm_vm_free(vm);
 
 	test_pmu_config_disable(guest_code);
+	test_fixed_counter_bitmap();
 
 	return 0;
 }
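
The guest code above assumes the Intel fixed-counter MSR layout (stated here
per the SDM as background, not defined by the patch): IA32_FIXED_CTR_CTRL
gives each fixed counter i a 4-bit control field at bits [4*i+3:4*i], and
IA32_PERF_GLOBAL_CTRL enables fixed counter i via bit 32 + i, which is why
INTEL_PMC_IDX_FIXED is 32. A sketch of the bits being touched (the macro
names are illustrative, not from the patch):

  /* Per-counter control field in IA32_FIXED_CTR_CTRL. */
  #define FIXED_CTR_CTRL_EN_OS(i)       BIT_ULL(4 * (i))        /* count in ring 0 (what the guest sets) */
  #define FIXED_CTR_CTRL_EN_USR(i)      BIT_ULL(4 * (i) + 1)    /* count in ring 3 */
  #define FIXED_CTR_CTRL_EN_PMI(i)      BIT_ULL(4 * (i) + 3)    /* raise a PMI on overflow */

  /* Global enable for fixed counter i in IA32_PERF_GLOBAL_CTRL. */
  #define GLOBAL_CTRL_EN_FIXED(i)       BIT_ULL(INTEL_PMC_IDX_FIXED + (i))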
From patchwork Thu Apr 20 10:46:21 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13218478
From: Jinrong Liang
To: Sean Christopherson
Cc: Like Xu, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Aaron Lewis,
    David Matlack, Vishal Annapurve, Wanpeng Li, Bagas Sanjaya, Jinrong Liang,
    linux-kselftest@vger.kernel.org, linux-doc@vger.kernel.org,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 6/7] KVM: selftests: Check gp event filters without affecting fixed event filters
Date: Thu, 20 Apr 2023 18:46:21 +0800
Message-Id: <20230420104622.12504-7-ljrcore@126.com>
In-Reply-To: <20230420104622.12504-1-ljrcore@126.com>
References: <20230420104622.12504-1-ljrcore@126.com>
X-Mailing-List: kvm@vger.kernel.org

From: Jinrong Liang

Add a test to ensure that setting both generic and fixed performance event
filters does not affect the consistency of the fixed counter filtering
behavior in KVM. This helps ensure that the fixed counter filter works as
expected even when generic event filters are also set.

Signed-off-by: Jinrong Liang
---
 .../selftests/kvm/x86_64/pmu_event_filter_test.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 0f54c53d7fff..9be4c6f8fb7e 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -889,6 +889,7 @@ static void test_fixed_ctr_action_and_bitmap(struct kvm_vcpu *vcpu,
 	uint32_t bitmap;
 	uint64_t count;
 	bool expected;
+	struct kvm_pmu_event_filter *f;
 
 	/*
 	 * Check that the fixed performance counter counts normally when
@@ -902,6 +903,19 @@ static void test_fixed_ctr_action_and_bitmap(struct kvm_vcpu *vcpu,
 			expected = fixed_ctr_is_allowed(fixed_ctr_idx, actions[i], bitmap);
 			count = test_fixed_ctr_with_filter(vcpu, actions[i], bitmap);
 
+			TEST_ASSERT(expected == !!count,
+				    "Fixed event filter does not work as expected.");
+
+			/*
+			 * Check that setting both events[] and fixed_counter_bitmap
+			 * does not affect the consistency of the fixed ctrs' behaviour.
+			 *
+			 * Note, the fixed_counter_bitmap rule takes priority.
+			 */
+			f = event_filter(actions[i]);
+			f->fixed_counter_bitmap = bitmap;
+			count = test_with_filter(vcpu, f);
+
 			TEST_ASSERT(expected == !!count,
 				    "Fixed event filter does not work as expected.");
 		}
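
Spelled out, the expected outcomes the loop above checks for a given fixed
counter follow fixed_ctr_is_allowed() from the previous patch, regardless of
whether events[] is populated (a summary of the existing logic, not new test
code):

  /*
   * ALLOW + bit set   -> counts           ALLOW + bit clear -> does not count
   * DENY  + bit set   -> does not count   DENY  + bit clear -> counts
   */
  bool allowed = fixed_ctr_is_allowed(0, KVM_PMU_EVENT_ALLOW, BIT_ULL(0));   /* true */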
From patchwork Thu Apr 20 10:46:22 2023
X-Patchwork-Submitter: Jinrong Liang
X-Patchwork-Id: 13218479
From: Jinrong Liang
To: Sean Christopherson
Cc: Like Xu, Paolo Bonzini, Jonathan Corbet, Shuah Khan, Aaron Lewis,
    David Matlack, Vishal Annapurve, Wanpeng Li, Bagas Sanjaya, Jinrong Liang,
    linux-kselftest@vger.kernel.org, linux-doc@vger.kernel.org,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 7/7] KVM: selftests: Test pmu event filter with incompatible kvm_pmu_event_filter
Date: Thu, 20 Apr 2023 18:46:22 +0800
Message-Id: <20230420104622.12504-8-ljrcore@126.com>
In-Reply-To: <20230420104622.12504-1-ljrcore@126.com>
References: <20230420104622.12504-1-ljrcore@126.com>
X-Mailing-List: kvm@vger.kernel.org

From: Jinrong Liang

Add a test to verify the behavior of the pmu event filter when an incomplete
kvm_pmu_event_filter structure is used. Running the test ensures that the
pmu event filter correctly handles incomplete structures and does not allow
events to be counted when they should not be.

Signed-off-by: Jinrong Liang
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 23 +++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 9be4c6f8fb7e..a6b6e0d086ae 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -881,6 +881,24 @@ static bool fixed_ctr_is_allowed(uint8_t idx, uint32_t action, uint32_t bitmap)
 	       (action == KVM_PMU_EVENT_DENY && !(bitmap & BIT_ULL(idx)));
 }
 
+struct incompatible_pmu_event_filter {
+	__u32 action;
+	__u32 nevents;
+	__u32 fixed_counter_bitmap;
+};
+
+static uint64_t test_incompatible_filter(struct kvm_vcpu *vcpu, uint32_t action,
+					 uint32_t bitmap)
+{
+	struct incompatible_pmu_event_filter err_f;
+
+	err_f.action = action;
+	err_f.fixed_counter_bitmap = bitmap;
+	ioctl((vcpu->vm)->fd, KVM_SET_PMU_EVENT_FILTER, &err_f.action);
+
+	return run_vcpu_to_sync(vcpu);
+}
+
 static void test_fixed_ctr_action_and_bitmap(struct kvm_vcpu *vcpu,
 					     uint8_t fixed_ctr_idx,
 					     uint8_t max_fixed_num)
@@ -918,6 +936,11 @@ static void test_fixed_ctr_action_and_bitmap(struct kvm_vcpu *vcpu,
 
 			TEST_ASSERT(expected == !!count,
 				    "Fixed event filter does not work as expected.");
+
+			/* Test that an incompatible event filter works as expected. */
+			count = test_incompatible_filter(vcpu, actions[i], bitmap);
+			TEST_ASSERT(expected == !!count,
+				    "Incompatible filter does not work as expected.");
 		}
 	}
 }