From patchwork Tue Feb 28 00:06:40 2023
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13154330
Date: Tue, 28 Feb 2023 00:06:40 +0000
Message-ID: <20230228000644.3204402-2-aaronlewis@google.com>
In-Reply-To: <20230228000644.3204402-1-aaronlewis@google.com>
Subject: [PATCH v2 1/5] KVM: x86/pmu: Prevent the PMU from counting disallowed events
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, like.xu.linux@gmail.com, Aaron Lewis

When counting "Instructions Retired" (0xc0) in a guest, KVM will occasionally increment the PMU counter regardless of whether that event is being filtered. This is because some PMU events are incremented via kvm_pmu_trigger_event(), which doesn't know about the event filter. Add the event filter to kvm_pmu_trigger_event(), so events that are disallowed do not increment their counters.
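The rule being enforced can be modeled in isolation. The sketch below is a hypothetical, self-contained rendering of the patch's logic, not KVM's actual types (model_pmc, model_pmc_is_allowed, and model_trigger_event are illustrative stand-ins): a counter increments only when it is enabled, in use, and allowed by the filter.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for struct kvm_pmc: just the three gates. */
struct model_pmc {
	bool enabled;		/* models pmc_is_enabled() */
	bool in_use;		/* models pmc_speculative_in_use() */
	bool filter_allows;	/* models check_pmu_event_filter() */
	uint64_t counter;
};

/* Mirrors the shape of pmc_is_allowed(): all three conditions must hold. */
static bool model_pmc_is_allowed(const struct model_pmc *pmc)
{
	return pmc->enabled && pmc->in_use && pmc->filter_allows;
}

/*
 * Mirrors the fixed kvm_pmu_trigger_event() loop: only allowed counters
 * are incremented, so a filtered event no longer counts.
 */
static void model_trigger_event(struct model_pmc *pmcs, int n)
{
	for (int i = 0; i < n; i++)
		if (model_pmc_is_allowed(&pmcs[i]))
			pmcs[i].counter++;
}
```

The point of the fix is visible in the model: before the patch, the trigger path checked only `enabled` and `in_use`, so a PMC whose event was filtered out still counted.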
Fixes: 9cd803d496e7 ("KVM: x86: Update vPMCs when retiring instructions")
Signed-off-by: Aaron Lewis
---
 arch/x86/kvm/pmu.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 612e6c70ce2e..0fe23bda855b 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -400,6 +400,12 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 	return is_fixed_event_allowed(filter, pmc->idx);
 }
 
+static bool pmc_is_allowed(struct kvm_pmc *pmc)
+{
+	return pmc_is_enabled(pmc) && pmc_speculative_in_use(pmc) &&
+	       check_pmu_event_filter(pmc);
+}
+
 static void reprogram_counter(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
@@ -409,10 +415,7 @@ static void reprogram_counter(struct kvm_pmc *pmc)
 
 	pmc_pause_counter(pmc);
 
-	if (!pmc_speculative_in_use(pmc) || !pmc_is_enabled(pmc))
-		goto reprogram_complete;
-
-	if (!check_pmu_event_filter(pmc))
+	if (!pmc_is_allowed(pmc))
 		goto reprogram_complete;
 
 	if (pmc->counter < pmc->prev_counter)
@@ -684,7 +687,7 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
 	for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
 		pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
 
-		if (!pmc || !pmc_is_enabled(pmc) || !pmc_speculative_in_use(pmc))
+		if (!pmc || !pmc_is_allowed(pmc))
 			continue;
 
 		/* Ignore checks for edge detect, pin control, invert and CMASK bits */

From patchwork Tue Feb 28 00:06:41 2023
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13154328
Date: Tue, 28 Feb 2023 00:06:41 +0000
Message-ID: <20230228000644.3204402-3-aaronlewis@google.com>
In-Reply-To: <20230228000644.3204402-1-aaronlewis@google.com>
Subject: [PATCH v2 2/5] KVM: selftests: Add a common helper to the guest
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, like.xu.linux@gmail.com, Aaron Lewis

Split out the common parts of the Intel and AMD guest code into a helper function. This is in preparation for adding additional counters to the test. No functional changes intended.

Signed-off-by: Aaron Lewis
---
 .../kvm/x86_64/pmu_event_filter_test.c | 31 ++++++++++++-------
 1 file changed, 20 insertions(+), 11 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index bad7ef8c5b92..f33079fc552b 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -100,6 +100,17 @@ static void check_msr(uint32_t msr, uint64_t bits_to_flip)
 	GUEST_SYNC(0);
 }
 
+static uint64_t test_guest(uint32_t msr_base)
+{
+	uint64_t br0, br1;
+
+	br0 = rdmsr(msr_base + 0);
+	__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+	br1 = rdmsr(msr_base + 0);
+
+	return br1 - br0;
+}
+
 static void intel_guest_code(void)
 {
 	check_msr(MSR_CORE_PERF_GLOBAL_CTRL, 1);
@@ -108,16 +119,15 @@ static void intel_guest_code(void)
 	GUEST_SYNC(1);
 
 	for (;;) {
-		uint64_t br0, br1;
+		uint64_t count;
 
 		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 		wrmsr(MSR_P6_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
 		      ARCH_PERFMON_EVENTSEL_OS | INTEL_BR_RETIRED);
-		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 1);
-		br0 = rdmsr(MSR_IA32_PMC0);
-		__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
-		br1 = rdmsr(MSR_IA32_PMC0);
-		GUEST_SYNC(br1 - br0);
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x1);
+
+		count = test_guest(MSR_IA32_PMC0);
+		GUEST_SYNC(count);
 	}
 }
 
@@ -133,15 +143,14 @@ static void amd_guest_code(void)
 	GUEST_SYNC(1);
 
 	for (;;) {
-		uint64_t br0, br1;
+		uint64_t count;
 
 		wrmsr(MSR_K7_EVNTSEL0, 0);
 		wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
 		      ARCH_PERFMON_EVENTSEL_OS | AMD_ZEN_BR_RETIRED);
-		br0 = rdmsr(MSR_K7_PERFCTR0);
-		__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
-		br1 = rdmsr(MSR_K7_PERFCTR0);
-		GUEST_SYNC(br1 - br0);
+
+		count = test_guest(MSR_K7_PERFCTR0);
+		GUEST_SYNC(count);
 	}
 }

From patchwork Tue Feb 28 00:06:42 2023
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13154329
Date: Tue, 28 Feb 2023 00:06:42 +0000
Message-ID: <20230228000644.3204402-4-aaronlewis@google.com>
In-Reply-To: <20230228000644.3204402-1-aaronlewis@google.com>
Subject: [PATCH v2 3/5] KVM: selftests: Add helpers for PMC asserts
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, like.xu.linux@gmail.com, Aaron Lewis

Add the helpers ASSERT_PMC_COUNTING and ASSERT_PMC_NOT_COUNTING to consolidate the asserts in one place. This will make it easier to add additional asserts related to counting later on. No functional changes intended.
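Both helpers are multi-statement macros wrapped in the standard do { ... } while (0) idiom, which makes the macro body behave as a single statement (so it is safe under an unbraced if/else). A minimal, self-contained illustration of the same idiom follows; CHECK_COUNTING, check_failures, and run_checks are hypothetical names, and a failure count stands in for TEST_ASSERT aborting the test.

```c
#include <stdint.h>

/* Records how often a check fired; stands in for TEST_ASSERT failing. */
static int check_failures;

/*
 * Hypothetical analogue of ASSERT_PMC_COUNTING: several statements,
 * wrapped in do/while(0) so the macro expands to exactly one statement.
 */
#define CHECK_COUNTING(count, expected)		\
do {						\
	if ((count) != (expected))		\
		check_failures++;		\
} while (0)

/* Using the macro without braces is safe only because of do/while(0). */
static int run_checks(uint64_t good, uint64_t bad, uint64_t expected)
{
	check_failures = 0;
	if (good != 0)
		CHECK_COUNTING(good, expected);
	else
		CHECK_COUNTING(bad, expected);
	return check_failures;
}
```

Had CHECK_COUNTING expanded to a bare `{ ... }` block instead, the trailing semicolon after the if-branch invocation would terminate the `if` and leave the `else` dangling, which is a compile error.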
Signed-off-by: Aaron Lewis
---
 .../kvm/x86_64/pmu_event_filter_test.c | 70 ++++++++++---------
 1 file changed, 36 insertions(+), 34 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index f33079fc552b..8277b8f49dca 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -250,14 +250,27 @@ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
 	return f;
 }
 
+#define ASSERT_PMC_COUNTING(count)						\
+do {										\
+	if (count != NUM_BRANCHES)						\
+		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",	\
+			__func__, count, NUM_BRANCHES);				\
+	TEST_ASSERT(count, "Allowed PMU event is not counting.");		\
+} while (0)
+
+#define ASSERT_PMC_NOT_COUNTING(count)						\
+do {										\
+	if (count)								\
+		pr_info("%s: Branch instructions retired = %lu (expected 0)\n",	\
+			__func__, count);					\
+	TEST_ASSERT(!count, "Disallowed PMU Event is counting");		\
+} while (0)
+
 static void test_without_filter(struct kvm_vcpu *vcpu)
 {
-	uint64_t count = run_vcpu_to_sync(vcpu);
+	uint64_t c = run_vcpu_to_sync(vcpu);
 
-	if (count != NUM_BRANCHES)
-		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
-			__func__, count, NUM_BRANCHES);
-	TEST_ASSERT(count, "Allowed PMU event is not counting");
+	ASSERT_PMC_COUNTING(c);
 }
 
 static uint64_t test_with_filter(struct kvm_vcpu *vcpu,
@@ -271,70 +284,59 @@ static void test_amd_deny_list(struct kvm_vcpu *vcpu)
 {
 	uint64_t event = EVENT(0x1C2, 0);
 	struct kvm_pmu_event_filter *f;
-	uint64_t count;
+	uint64_t c;
 
 	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0);
-	count = test_with_filter(vcpu, f);
-
+	c = test_with_filter(vcpu, f);
 	free(f);
-	if (count != NUM_BRANCHES)
-		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
-			__func__, count, NUM_BRANCHES);
-	TEST_ASSERT(count, "Allowed PMU event is not counting");
+
+	ASSERT_PMC_COUNTING(c);
 }
 
 static void test_member_deny_list(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
-	uint64_t count = test_with_filter(vcpu, f);
+	uint64_t c = test_with_filter(vcpu, f);
 
 	free(f);
-	if (count)
-		pr_info("%s: Branch instructions retired = %lu (expected 0)\n",
-			__func__, count);
-	TEST_ASSERT(!count, "Disallowed PMU Event is counting");
+
+	ASSERT_PMC_NOT_COUNTING(c);
 }
 
 static void test_member_allow_list(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
-	uint64_t count = test_with_filter(vcpu, f);
+	uint64_t c = test_with_filter(vcpu, f);
 
 	free(f);
-	if (count != NUM_BRANCHES)
-		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
-			__func__, count, NUM_BRANCHES);
-	TEST_ASSERT(count, "Allowed PMU event is not counting");
+
+	ASSERT_PMC_COUNTING(c);
 }
 
 static void test_not_member_deny_list(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
-	uint64_t count;
+	uint64_t c;
 
 	remove_event(f, INTEL_BR_RETIRED);
 	remove_event(f, AMD_ZEN_BR_RETIRED);
-	count = test_with_filter(vcpu, f);
+	c = test_with_filter(vcpu, f);
 	free(f);
-	if (count != NUM_BRANCHES)
-		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
-			__func__, count, NUM_BRANCHES);
-	TEST_ASSERT(count, "Allowed PMU event is not counting");
+
+	ASSERT_PMC_COUNTING(c);
 }
 
 static void test_not_member_allow_list(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
-	uint64_t count;
+	uint64_t c;
 
 	remove_event(f, INTEL_BR_RETIRED);
 	remove_event(f, AMD_ZEN_BR_RETIRED);
-	count = test_with_filter(vcpu, f);
+	c = test_with_filter(vcpu, f);
 	free(f);
-	if (count)
-		pr_info("%s: Branch instructions retired = %lu (expected 0)\n",
-			__func__, count);
-	TEST_ASSERT(!count, "Disallowed PMU Event is counting");
+
+	ASSERT_PMC_NOT_COUNTING(c);
 }
 
 /*

From patchwork Tue Feb 28 00:06:43 2023
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13154331
Date: Tue, 28 Feb 2023 00:06:43 +0000
Message-ID: <20230228000644.3204402-5-aaronlewis@google.com>
In-Reply-To: <20230228000644.3204402-1-aaronlewis@google.com>
Subject: [PATCH v2 4/5] KVM: selftests: Fixup test asserts
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, like.xu.linux@gmail.com, Aaron Lewis

Fix up both ASSERT_PMC_COUNTING and ASSERT_PMC_NOT_COUNTING in the pmu_event_filter_test by adding additional context to the assert messages. With the added context, the print in ASSERT_PMC_NOT_COUNTING is redundant; remove it.
Signed-off-by: Aaron Lewis
---
 .../selftests/kvm/x86_64/pmu_event_filter_test.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 8277b8f49dca..78bb48fcd33e 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -252,18 +252,17 @@ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
 
 #define ASSERT_PMC_COUNTING(count)						\
 do {										\
-	if (count != NUM_BRANCHES)						\
+	if (count && count != NUM_BRANCHES)					\
 		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",	\
 			__func__, count, NUM_BRANCHES);				\
-	TEST_ASSERT(count, "Allowed PMU event is not counting.");		\
+	TEST_ASSERT(count, "%s: Branch instructions retired = %lu (expected > 0)",	\
+		    __func__, count);						\
 } while (0)
 
 #define ASSERT_PMC_NOT_COUNTING(count)						\
 do {										\
-	if (count)								\
-		pr_info("%s: Branch instructions retired = %lu (expected 0)\n",	\
-			__func__, count);					\
-	TEST_ASSERT(!count, "Disallowed PMU Event is counting");		\
+	TEST_ASSERT(!count, "%s: Branch instructions retired = %lu (expected 0)",	\
+		    __func__, count);						\
 } while (0)
 
 static void test_without_filter(struct kvm_vcpu *vcpu)

From patchwork Tue Feb 28 00:06:44 2023
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13154332
Date: Tue, 28 Feb 2023 00:06:44 +0000
Message-ID: <20230228000644.3204402-6-aaronlewis@google.com>
In-Reply-To: <20230228000644.3204402-1-aaronlewis@google.com>
Subject: [PATCH v2 5/5] KVM: selftests: Test the PMU event "Instructions retired"
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, like.xu.linux@gmail.com, Aaron Lewis

Add testing for the event "Instructions retired" (0xc0) in the PMU event filter on both Intel and AMD to ensure that the event doesn't count when it is disallowed. Unlike most other events, "Instructions retired" will be incremented by KVM when an instruction is emulated. Test that this case is being properly handled and that KVM doesn't increment the counter when that event is disallowed.
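To report two counts through the single 64-bit GUEST_SYNC() payload, the updated test packs them as a pair of 32-bit bit-fields sharing one word. A standalone sketch of that packing scheme follows; the perf_results layout mirrors the test's union, while pack_counts, unpack_br, and unpack_ir are illustrative helpers added here.

```c
#include <stdint.h>

/*
 * Mirrors the test's perf_results union: two 32-bit counts share one
 * 64-bit word, so both fit in a single GUEST_SYNC() payload.
 */
struct perf_results {
	union {
		uint64_t raw;
		struct {
			uint64_t br_count:32;	/* branches retired */
			uint64_t ir_count:32;	/* instructions retired */
		};
	};
};

/* Pack both deltas into one 64-bit value (each truncated to 32 bits). */
static uint64_t pack_counts(uint64_t branches, uint64_t insns)
{
	struct perf_results r = { 0 };

	r.br_count = branches;
	r.ir_count = insns;
	return r.raw;
}

/* Unpack via the same union, so the bit-field layout cancels out. */
static uint32_t unpack_br(uint64_t raw)
{
	struct perf_results r = { .raw = raw };

	return r.br_count;
}

static uint32_t unpack_ir(uint64_t raw)
{
	struct perf_results r = { .raw = raw };

	return r.ir_count;
}
```

Because the same union is used on both sides, the result is independent of how the compiler orders the bit-fields; only the 32-bit truncation of each count is a real constraint.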
Signed-off-by: Aaron Lewis
---
 .../kvm/x86_64/pmu_event_filter_test.c | 80 ++++++++++++++-----
 1 file changed, 62 insertions(+), 18 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 78bb48fcd33e..9e932b99d4fa 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -54,6 +54,21 @@
 
 #define AMD_ZEN_BR_RETIRED EVENT(0xc2, 0)
 
+
+/*
+ * "Retired instructions", from Processor Programming Reference
+ * (PPR) for AMD Family 17h Model 01h, Revision B1 Processors,
+ * Preliminary Processor Programming Reference (PPR) for AMD Family
+ * 17h Model 31h, Revision B0 Processors, and Preliminary Processor
+ * Programming Reference (PPR) for AMD Family 19h Model 01h, Revision
+ * B1 Processors Volume 1 of 2.
+ * --- and ---
+ * "Instructions retired", from the Intel SDM, volume 3,
+ * "Pre-defined Architectural Performance Events."
+ */
+
+#define INST_RETIRED EVENT(0xc0, 0)
+
 /*
  * This event list comprises Intel's eight architectural events plus
  * AMD's "retired branch instructions" for Zen[123] (and possibly
@@ -61,7 +76,7 @@
  */
 static const uint64_t event_list[] = {
 	EVENT(0x3c, 0),
-	EVENT(0xc0, 0),
+	INST_RETIRED,
 	EVENT(0x3c, 1),
 	EVENT(0x2e, 0x4f),
 	EVENT(0x2e, 0x41),
@@ -71,6 +86,16 @@ static const uint64_t event_list[] = {
 	AMD_ZEN_BR_RETIRED,
 };
 
+struct perf_results {
+	union {
+		uint64_t raw;
+		struct {
+			uint64_t br_count:32;
+			uint64_t ir_count:32;
+		};
+	};
+};
+
 /*
  * If we encounter a #GP during the guest PMU sanity check, then the guest
  * PMU is not functional. Inform the hypervisor via GUEST_SYNC(0).
@@ -102,13 +127,20 @@ static void check_msr(uint32_t msr, uint64_t bits_to_flip)
 
 static uint64_t test_guest(uint32_t msr_base)
 {
+	struct perf_results r;
 	uint64_t br0, br1;
+	uint64_t ir0, ir1;
 
 	br0 = rdmsr(msr_base + 0);
+	ir0 = rdmsr(msr_base + 1);
 	__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
 	br1 = rdmsr(msr_base + 0);
+	ir1 = rdmsr(msr_base + 1);
 
-	return br1 - br0;
+	r.br_count = br1 - br0;
+	r.ir_count = ir1 - ir0;
+
+	return r.raw;
 }
 
 static void intel_guest_code(void)
@@ -119,15 +151,17 @@ static void intel_guest_code(void)
 	GUEST_SYNC(1);
 
 	for (;;) {
-		uint64_t count;
+		uint64_t counts;
 
 		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 		wrmsr(MSR_P6_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
 		      ARCH_PERFMON_EVENTSEL_OS | INTEL_BR_RETIRED);
-		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x1);
+		wrmsr(MSR_P6_EVNTSEL1, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | INST_RETIRED);
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x3);
 
-		count = test_guest(MSR_IA32_PMC0);
-		GUEST_SYNC(count);
+		counts = test_guest(MSR_IA32_PMC0);
+		GUEST_SYNC(counts);
 	}
 }
 
@@ -143,14 +177,16 @@ static void amd_guest_code(void)
 	GUEST_SYNC(1);
 
 	for (;;) {
-		uint64_t count;
+		uint64_t counts;
 
 		wrmsr(MSR_K7_EVNTSEL0, 0);
 		wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
 		      ARCH_PERFMON_EVENTSEL_OS | AMD_ZEN_BR_RETIRED);
+		wrmsr(MSR_K7_EVNTSEL1, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | INST_RETIRED);
 
-		count = test_guest(MSR_K7_PERFCTR0);
-		GUEST_SYNC(count);
+		counts = test_guest(MSR_K7_PERFCTR0);
+		GUEST_SYNC(counts);
 	}
 }
 
@@ -250,19 +286,25 @@ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
 	return f;
 }
 
-#define ASSERT_PMC_COUNTING(count)						\
+#define ASSERT_PMC_COUNTING(counts)						\
 do {										\
-	if (count && count != NUM_BRANCHES)					\
-		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",	\
-			__func__, count, NUM_BRANCHES);				\
-	TEST_ASSERT(count, "%s: Branch instructions retired = %lu (expected > 0)",	\
-		    __func__, count);						\
+	struct perf_results r = {.raw = counts};				\
+	if (r.br_count && r.br_count != NUM_BRANCHES)				\
+		pr_info("%s: Branch instructions retired = %u (expected %u)\n",	\
+			__func__, r.br_count, NUM_BRANCHES);			\
+	TEST_ASSERT(r.br_count, "%s: Branch instructions retired = %u (expected > 0)",	\
+		    __func__, r.br_count);					\
+	TEST_ASSERT(r.ir_count, "%s: Instructions retired = %u (expected > 0)",	\
+		    __func__, r.ir_count);					\
 } while (0)
 
-#define ASSERT_PMC_NOT_COUNTING(count)						\
+#define ASSERT_PMC_NOT_COUNTING(counts)					\
 do {										\
-	TEST_ASSERT(!count, "%s: Branch instructions retired = %lu (expected 0)",	\
-		    __func__, count);						\
+	struct perf_results r = {.raw = counts};				\
+	TEST_ASSERT(!r.br_count, "%s: Branch instructions retired = %u (expected 0)",	\
+		    __func__, r.br_count);					\
+	TEST_ASSERT(!r.ir_count, "%s: Instructions retired = %u (expected 0)",	\
+		    __func__, r.ir_count);					\
 } while (0)
 
 static void test_without_filter(struct kvm_vcpu *vcpu)
@@ -317,6 +359,7 @@ static void test_not_member_deny_list(struct kvm_vcpu *vcpu)
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
 	uint64_t c;
 
+	remove_event(f, INST_RETIRED);
 	remove_event(f, INTEL_BR_RETIRED);
 	remove_event(f, AMD_ZEN_BR_RETIRED);
 	c = test_with_filter(vcpu, f);
@@ -330,6 +373,7 @@ static void test_not_member_allow_list(struct kvm_vcpu *vcpu)
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
 	uint64_t c;
 
+	remove_event(f, INST_RETIRED);
 	remove_event(f, INTEL_BR_RETIRED);
 	remove_event(f, AMD_ZEN_BR_RETIRED);
 	c = test_with_filter(vcpu, f);
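The guest-side measurement pattern used throughout the series (sample a counter, run a known workload, sample again, report the delta) can be sketched without any MSR access. In the sketch below, fake_rdmsr, fake_counter, and run_branches are hypothetical stand-ins for rdmsr() and the `loop .` spin, which retires one branch per iteration.

```c
#include <stdint.h>

/* Stand-in for a perf counter MSR: advances once per simulated event. */
static uint64_t fake_counter;

static uint64_t fake_rdmsr(void)
{
	return fake_counter;
}

/* Workload stand-in for `loop .`: each iteration is one "branch". */
static void run_branches(int n)
{
	for (int i = 0; i < n; i++)
		fake_counter++;
}

/*
 * Same shape as the test's test_guest(): sample, run workload, sample,
 * diff. Subtracting the first read discards events that happened before
 * the measured region, so only the workload's events are reported.
 */
static uint64_t measure(int branches)
{
	uint64_t before, after;

	before = fake_rdmsr();
	run_branches(branches);
	after = fake_rdmsr();

	return after - before;
}
```

In the real test the delta is compared against NUM_BRANCHES: a filtered counter stays flat and yields zero, while an allowed one tracks the workload.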