From patchwork Mon Jun 6 17:52:46 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 12870730
Date: Mon, 6 Jun 2022 17:52:46 +0000
In-Reply-To: <20220606175248.1884041-1-aaronlewis@google.com>
Message-Id: <20220606175248.1884041-2-aaronlewis@google.com>
Subject: [PATCH v2 1/4] kvm: x86/pmu: Introduce masked events to the pmu event filter
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis

When building an event list for the pmu event filter, fitting all the
events in the limited space can be a challenge.  It becomes particularly
challenging when trying to include various unit mask combinations for an
event the guest is or is not allowed to program.  Instead of increasing
the size of the list to allow for these, add a new encoding to the pmu
event filter's events field.  These encoded events can then be tested
against the event the guest is attempting to program to determine whether
the guest should have access to it.

The encoded values are: mask, match, and invert.  When filtering events,
the mask is applied to the guest's unit mask to see if it matches the
match value (i.e. unit_mask & mask == match).  If it does and the pmu
event filter is an allow list, the event is allowed; if it is a deny
list, the event is denied.  Additionally, the result is reversed if the
invert flag is set in the encoded event.

This feature is enabled by setting the flags field to
KVM_PMU_EVENT_FLAG_MASKED_EVENTS.  Events can be encoded with
KVM_PMU_EVENT_ENCODE_MASKED_EVENT().  It is an error to have a bit set
outside the valid encoded bits, and calls to KVM_SET_PMU_EVENT_FILTER
will return -EINVAL in such cases, including for bits that are set in
the high nybble[1] for AMD if called on Intel.

[1] bits 35:32 in the event and bits 11:8 in the eventsel.
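For example, to build a filter entry that allows event 0xC0 only when the
guest programs unit mask 0x01, userspace could do something like this
minimal sketch (the macro body is copied from the uapi hunk below; the
event and mask values are illustrative only):

  #include <stdint.h>

  /* Copied from the arch/x86/include/uapi/asm/kvm.h hunk below. */
  #define KVM_PMU_EVENT_ENCODE_MASKED_EVENT(select, mask, match, invert) \
          (((select) & 0xfful) | (((select) & 0xf00ul) << 24) | \
          (((mask) & 0xfful) << 24) | \
          (((match) & 0xfful) << 8) | \
          (((invert) & 0x1ul) << 23))

  /*
   * Allow event 0xC0 only when the guest's unit mask is exactly 0x01:
   * the filter applies (unit_mask & 0xff) == 0x01.  Setting invert to 1
   * would instead match every unit mask other than 0x01.
   */
  uint64_t masked_event =
          KVM_PMU_EVENT_ENCODE_MASKED_EVENT(0xC0, 0xff, 0x01, 0);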
Signed-off-by: Aaron Lewis
Reported-by: kernel test robot
---
 Documentation/virt/kvm/api.rst         |  46 +++++++--
 arch/x86/include/asm/kvm-x86-pmu-ops.h |   1 +
 arch/x86/include/uapi/asm/kvm.h        |   8 ++
 arch/x86/kvm/pmu.c                     | 128 ++++++++++++++++++++++---
 arch/x86/kvm/pmu.h                     |   1 +
 arch/x86/kvm/svm/pmu.c                 |  12 +++
 arch/x86/kvm/vmx/pmu_intel.c           |  12 +++
 7 files changed, 190 insertions(+), 18 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 11e00a46c610..4e904772da5b 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -5017,7 +5017,13 @@ using this ioctl.
 :Architectures: x86
 :Type: vm ioctl
 :Parameters: struct kvm_pmu_event_filter (in)
-:Returns: 0 on success, -1 on error
+:Returns: 0 on success,
+    -EFAULT args[0] cannot be accessed.
+    -EINVAL args[0] contains invalid data in the filter or events field.
+            Note: event validation is only done for modes where
+            the flags field is non-zero.
+    -E2BIG  nevents is too large.
+    -ENOMEM not enough memory to allocate the filter.

 ::

@@ -5030,14 +5036,42 @@ using this ioctl.
 	__u64 events[0];
 };

-This ioctl restricts the set of PMU events that the guest can program.
-The argument holds a list of events which will be allowed or denied.
-The eventsel+umask of each event the guest attempts to program is compared
-against the events field to determine whether the guest should have access.
+This ioctl restricts the set of PMU events the guest can program.  The
+argument holds a list of events which will be allowed or denied.
+
 The events field only controls general purpose counters; fixed purpose
 counters are controlled by the fixed_counter_bitmap.

-No flags are defined yet, the field must be zero.
+Valid values for 'flags'::
+
+``0``
+
+This is the default behavior for the pmu event filter, and is used when the
+flags field is clear.  In this mode the eventsel+umask for the event the
+guest is attempting to program is compared against each event in the events
+field to determine whether the guest should have access to it.
+
+``KVM_PMU_EVENT_FLAG_MASKED_EVENTS``
+
+In this mode each event in the events field will be encoded with mask, match,
+and invert values in addition to an eventsel.  These encoded events will be
+matched against the event the guest is attempting to program to determine
+whether the guest should have access to it.  When matching an encoded event
+with a guest event these steps are followed:
+ 1. Match the encoded eventsel to the guest eventsel.
+ 2. If that matches, match the mask and match values from the encoded event to
+    the guest's unit mask (i.e. unit_mask & mask == match).
+ 3. If that matches, the guest is allowed to program the event if it's an allow
+    list, or the guest is not allowed to program the event if it's a deny list.
+ 4. If the invert value is set in the encoded event, reverse the meaning of #3
+    (i.e. deny if it's an allow list, allow if it's a deny list).
+
+To encode an event in the pmu_event_filter use
+KVM_PMU_EVENT_ENCODE_MASKED_EVENT().
+
+If a bit is set in an encoded event that is not a part of the bits used for
+eventsel, mask, match, or invert, a call to KVM_SET_PMU_EVENT_FILTER will
+return -EINVAL.

 Valid values for 'action'::
diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index fdfd8e06fee6..016713b583bf 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -24,6 +24,7 @@ KVM_X86_PMU_OP(set_msr)
 KVM_X86_PMU_OP(refresh)
 KVM_X86_PMU_OP(init)
 KVM_X86_PMU_OP(reset)
+KVM_X86_PMU_OP(get_event_mask)
 KVM_X86_PMU_OP_OPTIONAL(deliver_pmi)
 KVM_X86_PMU_OP_OPTIONAL(cleanup)

diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 21614807a2cb..2964f3f15fb5 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -522,6 +522,14 @@ struct kvm_pmu_event_filter {
 #define KVM_PMU_EVENT_ALLOW 0
 #define KVM_PMU_EVENT_DENY 1

+#define KVM_PMU_EVENT_FLAG_MASKED_EVENTS (1u << 0)
+
+#define KVM_PMU_EVENT_ENCODE_MASKED_EVENT(select, mask, match, invert) \
+	(((select) & 0xfful) | (((select) & 0xf00ul) << 24) | \
+	(((mask) & 0xfful) << 24) | \
+	(((match) & 0xfful) << 8) | \
+	(((invert) & 0x1ul) << 23))
+
 /* for KVM_{GET,SET,HAS}_DEVICE_ATTR */
 #define KVM_VCPU_TSC_CTRL 0 /* control group for the timestamp counter (TSC) */
 #define KVM_VCPU_TSC_OFFSET 0 /* attribute for the TSC offset */

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 3f868fed9114..69edc71b5ef8 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -197,14 +197,99 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
 	return true;
 }

-static int cmp_u64(const void *pa, const void *pb)
+static inline u64 get_event(u64 eventsel)
 {
-	u64 a = *(u64 *)pa;
-	u64 b = *(u64 *)pb;
+	return eventsel & AMD64_EVENTSEL_EVENT;
+}

+static inline u8 get_unit_mask(u64 eventsel)
+{
+	return (eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
+}
+
+static inline u8 get_counter_mask(u64 eventsel)
+{
+	return (eventsel & ARCH_PERFMON_EVENTSEL_CMASK) >> 24;
+}
+
+static inline bool get_invert_comparison(u64 eventsel)
+{
+	return !!(eventsel & ARCH_PERFMON_EVENTSEL_INV);
+}
+
+static inline int cmp_safe64(u64 a, u64 b)
+{
 	return (a > b) - (a < b);
 }

+static int cmp_eventsel_event(const void *pa, const void *pb)
+{
+	return cmp_safe64(*(u64 *)pa & AMD64_EVENTSEL_EVENT,
+			  *(u64 *)pb & AMD64_EVENTSEL_EVENT);
+}
+
+static int cmp_u64(const void *pa, const void *pb)
+{
+	return cmp_safe64(*(u64 *)pa,
+			  *(u64 *)pb);
+}
+
+static bool is_match(u64 masked_event, u64 eventsel)
+{
+	u8 mask = get_counter_mask(masked_event);
+	u8 match = get_unit_mask(masked_event);
+	u8 unit_mask = get_unit_mask(eventsel);
+
+	return (unit_mask & mask) == match;
+}
+
+static bool is_event_allowed(u64 masked_event, u32 action)
+{
+	if (get_invert_comparison(masked_event))
+		return action != KVM_PMU_EVENT_ALLOW;
+
+	return action == KVM_PMU_EVENT_ALLOW;
+}
+
+static bool filter_masked_event(struct kvm_pmu_event_filter *filter,
+				u64 eventsel)
+{
+	u64 key = get_event(eventsel);
+	u64 *event, *evt;
+
+	event = bsearch(&key, filter->events, filter->nevents, sizeof(u64),
+			cmp_eventsel_event);
+
+	if (event) {
+		/* Walk the masked events backward looking for a match. */
+		for (evt = event; evt >= filter->events &&
+		     get_event(*evt) == get_event(eventsel); evt--)
+			if (is_match(*evt, eventsel))
+				return is_event_allowed(*evt, filter->action);
+
+		/* Walk the masked events forward looking for a match. */
+		for (evt = event + 1;
+		     evt < (filter->events + filter->nevents) &&
+		     get_event(*evt) == get_event(eventsel); evt++)
+			if (is_match(*evt, eventsel))
+				return is_event_allowed(*evt, filter->action);
+	}
+
+	return filter->action == KVM_PMU_EVENT_DENY;
+}
+
+static bool filter_default_event(struct kvm_pmu_event_filter *filter,
+				 u64 eventsel)
+{
+	u64 key = eventsel & AMD64_RAW_EVENT_MASK_NB;
+
+	if (bsearch(&key, filter->events, filter->nevents,
+		    sizeof(u64), cmp_u64))
+		return filter->action == KVM_PMU_EVENT_ALLOW;
+
+	return filter->action == KVM_PMU_EVENT_DENY;
+}
+
 void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 {
 	u64 config;
@@ -226,14 +311,11 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)

 	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
 	if (filter) {
-		__u64 key = eventsel & AMD64_RAW_EVENT_MASK_NB;
-
-		if (bsearch(&key, filter->events, filter->nevents,
-			    sizeof(__u64), cmp_u64))
-			allow_event = filter->action == KVM_PMU_EVENT_ALLOW;
-		else
-			allow_event = filter->action == KVM_PMU_EVENT_DENY;
+		allow_event = (filter->flags & KVM_PMU_EVENT_FLAG_MASKED_EVENTS) ?
+			filter_masked_event(filter, eventsel) :
+			filter_default_event(filter, eventsel);
 	}
+
 	if (!allow_event)
 		return;

@@ -572,8 +654,22 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
 }
 EXPORT_SYMBOL_GPL(kvm_pmu_trigger_event);

+static int has_invalid_event(struct kvm_pmu_event_filter *filter)
+{
+	u64 event_mask;
+	int i;
+
+	event_mask = static_call(kvm_x86_pmu_get_event_mask)(filter->flags);
+	for (i = 0; i < filter->nevents; i++)
+		if (filter->events[i] & ~event_mask)
+			return true;
+
+	return false;
+}
+
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 {
+	int (*cmp)(const void *a, const void *b) = cmp_u64;
 	struct kvm_pmu_event_filter tmp, *filter;
 	size_t size;
 	int r;
@@ -585,7 +681,7 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 	    tmp.action != KVM_PMU_EVENT_DENY)
 		return -EINVAL;

-	if (tmp.flags != 0)
+	if (tmp.flags & ~KVM_PMU_EVENT_FLAG_MASKED_EVENTS)
 		return -EINVAL;

 	if (tmp.nevents > KVM_PMU_EVENT_FILTER_MAX_EVENTS)
@@ -603,10 +699,18 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 	/* Ensure nevents can't be changed between the user copies. */
 	*filter = tmp;

+	r = -EINVAL;
+	/* To maintain backwards compatibility don't validate flags == 0. */
+	if (filter->flags != 0 && has_invalid_event(filter))
+		goto cleanup;
+
+	if (filter->flags & KVM_PMU_EVENT_FLAG_MASKED_EVENTS)
+		cmp = cmp_eventsel_event;
+
 	/*
 	 * Sort the in-kernel list so that we can search it with bsearch.
 	 */
-	sort(&filter->events, filter->nevents, sizeof(__u64), cmp_u64, NULL);
+	sort(&filter->events, filter->nevents, sizeof(u64), cmp, NULL);

 	mutex_lock(&kvm->lock);
 	filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter,
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index e745f443b6a8..f13fcc692d04 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -37,6 +37,7 @@ struct kvm_pmu_ops {
 	void (*reset)(struct kvm_vcpu *vcpu);
 	void (*deliver_pmi)(struct kvm_vcpu *vcpu);
 	void (*cleanup)(struct kvm_vcpu *vcpu);
+	u64 (*get_event_mask)(u32 flag);
 };

 void kvm_pmu_ops_update(const struct kvm_pmu_ops *pmu_ops);

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 136039fc6d01..41b7bd51fd11 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -342,6 +342,17 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 	}
 }

+static u64 amd_pmu_get_event_mask(u32 flag)
+{
+	if (flag == KVM_PMU_EVENT_FLAG_MASKED_EVENTS)
+		return AMD64_EVENTSEL_EVENT |
+		       ARCH_PERFMON_EVENTSEL_UMASK |
+		       ARCH_PERFMON_EVENTSEL_INV |
+		       ARCH_PERFMON_EVENTSEL_CMASK;
+	return AMD64_EVENTSEL_EVENT |
+	       ARCH_PERFMON_EVENTSEL_UMASK;
+}
+
 struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.pmc_perf_hw_id = amd_pmc_perf_hw_id,
 	.pmc_is_enabled = amd_pmc_is_enabled,
@@ -355,4 +366,5 @@ struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.refresh = amd_pmu_refresh,
 	.init = amd_pmu_init,
 	.reset = amd_pmu_reset,
+	.get_event_mask = amd_pmu_get_event_mask,
 };

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 37e9eb32e3d9..27c44105760d 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -719,6 +719,17 @@ static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
 	intel_pmu_release_guest_lbr_event(vcpu);
 }

+static u64 intel_pmu_get_event_mask(u32 flag)
+{
+	if (flag == KVM_PMU_EVENT_FLAG_MASKED_EVENTS)
+		return ARCH_PERFMON_EVENTSEL_EVENT |
+		       ARCH_PERFMON_EVENTSEL_UMASK |
+		       ARCH_PERFMON_EVENTSEL_INV |
+		       ARCH_PERFMON_EVENTSEL_CMASK;
+	return ARCH_PERFMON_EVENTSEL_EVENT |
+	       ARCH_PERFMON_EVENTSEL_UMASK;
+}
+
 struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.pmc_perf_hw_id = intel_pmc_perf_hw_id,
 	.pmc_is_enabled = intel_pmc_is_enabled,
@@ -734,4 +745,5 @@ struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.reset = intel_pmu_reset,
 	.deliver_pmi = intel_pmu_deliver_pmi,
 	.cleanup = intel_pmu_cleanup,
+	.get_event_mask = intel_pmu_get_event_mask,
 };
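As a worked illustration of the matching logic added in this patch, the
following standalone sketch mirrors is_match() in pmu.c; the helper names
and the example values here are illustrative, not part of the patch:

  #include <stdbool.h>
  #include <stdint.h>

  /* Field extraction mirroring get_unit_mask()/get_counter_mask() above. */
  static uint8_t unit_mask(uint64_t ev)    { return (ev >> 8) & 0xff; }
  static uint8_t counter_mask(uint64_t ev) { return (ev >> 24) & 0xff; }

  /*
   * A filter entry encoded as (select=0xC0, mask=0xff, match=0x01, invert=0)
   * against a guest eventsel for event 0xC0 with unit mask 0x01:
   * (0x01 & 0xff) == 0x01, so the entry matches; an allow list then permits
   * the event and a deny list blocks it (reversed when invert is set).
   */
  static bool entry_matches(uint64_t masked_event, uint64_t guest_eventsel)
  {
          return (unit_mask(guest_eventsel) & counter_mask(masked_event)) ==
                 unit_mask(masked_event);
  }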
From patchwork Mon Jun 6 17:52:47 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 12870731
Date: Mon, 6 Jun 2022 17:52:47 +0000
In-Reply-To: <20220606175248.1884041-1-aaronlewis@google.com>
Message-Id: <20220606175248.1884041-3-aaronlewis@google.com>
Subject: [PATCH v2 2/4] selftests: kvm/x86: Add flags when creating a pmu event filter
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis

Now that the flags field can be non-zero, pass it in when creating a pmu
event filter.  This is needed in preparation for testing masked events.

No functional change intended.
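For example, existing callers keep the default behavior by passing 0, while
the masked-event tests added later in this series pass the new flag; a
sketch based on call sites in the hunks below and in patch 3:

  /* Default filtering: behavior is unchanged when flags == 0. */
  f = create_pmu_event_filter(event_list, ARRAY_SIZE(event_list),
                              KVM_PMU_EVENT_ALLOW, 0);

  /* Masked events, used by a later patch in this series: */
  f = create_pmu_event_filter(masked_events, nmasked_events,
                              KVM_PMU_EVENT_ALLOW,
                              KVM_PMU_EVENT_FLAG_MASKED_EVENTS);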
Signed-off-by: Aaron Lewis
---
 .../testing/selftests/kvm/x86_64/pmu_event_filter_test.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 93d77574b255..4bff4c71ac45 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -222,14 +222,15 @@ static struct kvm_pmu_event_filter *alloc_pmu_event_filter(uint32_t nevents)

 static struct kvm_pmu_event_filter *
-create_pmu_event_filter(const uint64_t event_list[],
-			int nevents, uint32_t action)
+create_pmu_event_filter(const uint64_t event_list[], int nevents,
+			uint32_t action, uint32_t flags)
 {
 	struct kvm_pmu_event_filter *f;
 	int i;

 	f = alloc_pmu_event_filter(nevents);
 	f->action = action;
+	f->flags = flags;
 	for (i = 0; i < nevents; i++)
 		f->events[i] = event_list[i];

@@ -240,7 +241,7 @@ static struct kvm_pmu_event_filter *event_filter(uint32_t action)
 {
 	return create_pmu_event_filter(event_list,
 				       ARRAY_SIZE(event_list),
-				       action);
+				       action, 0);
 }

 /*
@@ -287,7 +288,7 @@ static void test_amd_deny_list(struct kvm_vm *vm)
 	struct kvm_pmu_event_filter *f;
 	uint64_t count;

-	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY);
+	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0);
 	count = test_with_filter(vm, f);

 	free(f);
From patchwork Mon Jun 6 17:52:48 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 12870732
Date: Mon, 6 Jun 2022 17:52:48 +0000
In-Reply-To: <20220606175248.1884041-1-aaronlewis@google.com>
Message-Id: <20220606175248.1884041-4-aaronlewis@google.com>
Subject: [PATCH v2 3/4] selftests: kvm/x86: Add testing for masked events
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis

Add testing for the pmu event filter's masked events.  These tests run
through different ways of finding an event the guest is attempting to
program in an event list.  For any given eventsel, there may be multiple
instances of it in an event list.  These tests try different ways of
looking up a match to force the matching algorithm to walk the relevant
eventsels and ensure it is able to a) find a match, and b) stay within
its bounds.
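For instance, the tests build a list in which every entry shares one
eventsel and differs only in its match value; bsearch() can then land on
any of the duplicates, forcing the filter code to walk in both directions
(excerpted from the test added below):

  /* Eleven entries for one eventsel, match values 1..11. */
  for (i = 0; i < nmasked_events; i++)
          masked_events[i] = ENCODE_MASKED_EVENT(event, ~0x00, i + 1, 0);

A second pass then replaces the first two entries with neighboring
eventsels so that a match must also be found when the walked run of equal
eventsels does not start at the beginning of the list.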
Signed-off-by: Aaron Lewis
---
 .../kvm/x86_64/pmu_event_filter_test.c | 107 ++++++++++++++++++
 1 file changed, 107 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 4bff4c71ac45..5b0163f9ba84 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -18,8 +18,12 @@
 /*
  * In lieu of copying perf_event.h into tools...
  */
+#define ARCH_PERFMON_EVENTSEL_EVENT 0x000000FFULL
 #define ARCH_PERFMON_EVENTSEL_OS (1ULL << 17)
 #define ARCH_PERFMON_EVENTSEL_ENABLE (1ULL << 22)
+#define AMD64_EVENTSEL_EVENT \
+	(ARCH_PERFMON_EVENTSEL_EVENT | (0x0FULL << 32))
+

 union cpuid10_eax {
 	struct {
@@ -445,6 +449,107 @@ static bool use_amd_pmu(void)
 		is_zen3(entry->eax));
 }

+#define ENCODE_MASKED_EVENT(select, mask, match, invert) \
+	KVM_PMU_EVENT_ENCODE_MASKED_EVENT(select, mask, match, invert)
+
+static void expect_success(uint64_t count)
+{
+	if (count != NUM_BRANCHES)
+		pr_info("masked filter: Branch instructions retired = %lu (expected %u)\n",
+			count, NUM_BRANCHES);
+	TEST_ASSERT(count, "Allowed PMU event is not counting");
+}
+
+static void expect_failure(uint64_t count)
+{
+	if (count)
+		pr_info("masked filter: Branch instructions retired = %lu (expected 0)\n",
+			count);
+	TEST_ASSERT(!count, "Disallowed PMU Event is counting");
+}
+
+static void run_masked_filter_test(struct kvm_vm *vm, uint64_t masked_events[],
+				   const int nmasked_events, uint64_t event,
+				   uint32_t action, bool invert,
+				   void (*expected_func)(uint64_t))
+{
+	struct kvm_pmu_event_filter *f;
+	uint64_t old_event;
+	uint64_t count;
+	int i;
+
+	for (i = 0; i < nmasked_events; i++) {
+		if ((masked_events[i] & AMD64_EVENTSEL_EVENT) != EVENT(event, 0))
+			continue;
+
+		old_event = masked_events[i];
+
+		masked_events[i] =
+			ENCODE_MASKED_EVENT(event, ~0x00, 0x00, invert);
+
+		f = create_pmu_event_filter(masked_events, nmasked_events, action,
+					    KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+
+		count = test_with_filter(vm, f);
+		free(f);
+
+		expected_func(count);
+
+		masked_events[i] = old_event;
+	}
+}
+
+static void run_masked_filter_tests(struct kvm_vm *vm, uint64_t masked_events[],
+				    const int nmasked_events, uint64_t event)
+{
+	run_masked_filter_test(vm, masked_events, nmasked_events, event,
+			       KVM_PMU_EVENT_ALLOW, /*invert=*/false,
+			       expect_success);
+	run_masked_filter_test(vm, masked_events, nmasked_events, event,
+			       KVM_PMU_EVENT_ALLOW, /*invert=*/true,
+			       expect_failure);
+	run_masked_filter_test(vm, masked_events, nmasked_events, event,
+			       KVM_PMU_EVENT_DENY, /*invert=*/false,
+			       expect_failure);
+	run_masked_filter_test(vm, masked_events, nmasked_events, event,
+			       KVM_PMU_EVENT_DENY, /*invert=*/true,
+			       expect_success);
+}
+
+static void test_masked_filters(struct kvm_vm *vm)
+{
+	uint64_t masked_events[11];
+	const int nmasked_events = ARRAY_SIZE(masked_events);
+	uint64_t prev_event, event, next_event;
+	int i;
+
+	if (use_intel_pmu()) {
+		/* Instructions retired */
+		prev_event = 0xc0;
+		event = INTEL_BR_RETIRED;
+		/* Branch misses retired */
+		next_event = 0xc5;
+	} else {
+		TEST_ASSERT(use_amd_pmu(), "Unknown platform");
+		/* Retired instructions */
+		prev_event = 0xc0;
+		event = AMD_ZEN_BR_RETIRED;
+		/* Retired branch instructions mispredicted */
+		next_event = 0xc3;
+	}
+
+	for (i = 0; i < nmasked_events; i++)
+		masked_events[i] =
+			ENCODE_MASKED_EVENT(event, ~0x00, i+1, 0);
+
+	run_masked_filter_tests(vm, masked_events, nmasked_events, event);
+
+	masked_events[0] = ENCODE_MASKED_EVENT(prev_event, ~0x00, 0, 0);
+	masked_events[1] = ENCODE_MASKED_EVENT(next_event, ~0x00, 0, 0);
+
+	run_masked_filter_tests(vm, masked_events, nmasked_events, event);
+}
+
 int main(int argc, char *argv[])
 {
 	void (*guest_code)(void) = NULL;
@@ -489,6 +594,8 @@ int main(int argc, char *argv[])
 	test_not_member_deny_list(vm);
 	test_not_member_allow_list(vm);

+	test_masked_filters(vm);
+
 	kvm_vm_free(vm);

 	test_pmu_config_disable(guest_code);
From patchwork Mon Jun 6 17:52:49 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 12870734
Date: Mon, 6 Jun 2022 17:52:49 +0000
In-Reply-To: <20220606175248.1884041-1-aaronlewis@google.com>
Message-Id: <20220606175248.1884041-5-aaronlewis@google.com>
Subject: [PATCH v2 4/4] selftests: kvm/x86: Add testing for KVM_SET_PMU_EVENT_FILTER
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis

Test that masked events are not using invalid bits, and if they are,
ensure the pmu event filter is not accepted by KVM_SET_PMU_EVENT_FILTER.
The only valid bits that can be used for masked events are set when using
KVM_PMU_EVENT_ENCODE_MASKED_EVENT(), with one caveat: if any bits in the
high nybble[1] of the eventsel for AMD are used on Intel, setting the pmu
event filter with KVM_SET_PMU_EVENT_FILTER will fail.

Also, because no validation was done on the event list prior to the
introduction of masked events, verify that this behavior continues for
the original event type (flags == 0): even if invalid bits are set (bits
other than eventsel+umask), the pmu event filter is still accepted by
KVM_SET_PMU_EVENT_FILTER.

[1] bits 35:32 in the event and bits 11:8 in the eventsel.
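A condensed sketch of the two checks this adds (the calls are taken from
the test below; the inline result comments summarize the assertions):

  uint64_t e = ~0ul;
  int r;

  /* Invalid bits with flags == 0: accepted, for backwards compatibility. */
  f = create_pmu_event_filter(&e, 1, KVM_PMU_EVENT_ALLOW, 0);
  r = _vm_ioctl(vm, KVM_SET_PMU_EVENT_FILTER, (void *)f);      /* r == 0 */

  /* The same invalid bits with masked events: rejected with -EINVAL. */
  f = create_pmu_event_filter(&e, 1, KVM_PMU_EVENT_ALLOW,
                              KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
  r = _vm_ioctl(vm, KVM_SET_PMU_EVENT_FILTER, (void *)f);      /* r != 0 */

The full test also verifies that a fully encoded masked event is accepted.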
Signed-off-by: Aaron Lewis
---
 .../kvm/x86_64/pmu_event_filter_test.c | 31 +++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 5b0163f9ba84..1fe1cbd36146 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -550,6 +550,36 @@ static void test_masked_filters(struct kvm_vm *vm)
 	run_masked_filter_tests(vm, masked_events, nmasked_events, event);
 }

+static void test_filter_ioctl(struct kvm_vm *vm)
+{
+	struct kvm_pmu_event_filter *f;
+	uint64_t e = ~0ul;
+	int r;
+
+	/*
+	 * Unfortunately having invalid bits set in event data is expected to
+	 * pass when flags == 0 (bits other than eventsel+umask).
+	 */
+	f = create_pmu_event_filter(&e, 1, KVM_PMU_EVENT_ALLOW, 0);
+	r = _vm_ioctl(vm, KVM_SET_PMU_EVENT_FILTER, (void *)f);
+	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
+	free(f);
+
+	f = create_pmu_event_filter(&e, 1, KVM_PMU_EVENT_ALLOW,
+				    KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+	r = _vm_ioctl(vm, KVM_SET_PMU_EVENT_FILTER, (void *)f);
+	TEST_ASSERT(r != 0, "Invalid PMU Event Filter is expected to fail");
+	free(f);
+
+	e = ENCODE_MASKED_EVENT(0xff, 0xff, 0xff, 0xf);
+
+	f = create_pmu_event_filter(&e, 1, KVM_PMU_EVENT_ALLOW,
+				    KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+	r = _vm_ioctl(vm, KVM_SET_PMU_EVENT_FILTER, (void *)f);
+	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
+	free(f);
+}
+
 int main(int argc, char *argv[])
 {
 	void (*guest_code)(void) = NULL;
@@ -595,6 +625,7 @@ int main(int argc, char *argv[])
 	test_not_member_allow_list(vm);

 	test_masked_filters(vm);
+	test_filter_ioctl(vm);

 	kvm_vm_free(vm);