From patchwork Tue Sep 20 17:45:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Aaron Lewis X-Patchwork-Id: 12982467 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B94F9C6FA82 for ; Tue, 20 Sep 2022 17:46:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231284AbiITRqc (ORCPT ); Tue, 20 Sep 2022 13:46:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43040 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230522AbiITRqO (ORCPT ); Tue, 20 Sep 2022 13:46:14 -0400 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 35FC361D6B for ; Tue, 20 Sep 2022 10:46:13 -0700 (PDT) Received: by mail-pl1-x64a.google.com with SMTP id l2-20020a170902f68200b00177ee7e673eso2150583plg.2 for ; Tue, 20 Sep 2022 10:46:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=0WKAUbzNduZWcQJDezq4mULZL7u8oBPWqooVtCo8CZo=; b=A4m9DEiDwnvT35knwdn96mrLfnhdr8IqH1Wz7qdVwQ6c/QFvXHnZgonGhYMP6O8X+u 5NPzpgdG3pQhe/5V0GspJUNuM35yTvzj03jM0sH9XjEyNfA0hVThtC0lI0yy4TxIzfmL hGn9sYNeiDy8ZY0qZ2BebmkQqqkd8lZwbRufWv9do1Jcfzi9TKA3UiHmy/r0vLdm8NPP /0pt4U/0ZHbKpt5Dfnn+bPHBZmWdVbwT5qj05R+Zh5jW1XsKdOc6UJz/cGFJXoNiOT6T ix3M80gUwR2yOshtpISNWZkoHXtvm7RKN//klh1THIrDiPb7k1MRQKr8/yCNHvmWSWPN Yo8g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=0WKAUbzNduZWcQJDezq4mULZL7u8oBPWqooVtCo8CZo=; b=OpAFBA+mAMY9E/aJYDsKNS1qmBugeP7rPro0DhP+AuKrySYhRBQ6AB2WStayH+tmHK zZMHUmpYekAGdsDBk2y84tyS3Y7pszuMF3tWr5ZG+ILs+MKbW/bvV2/U4o8UeEMcB1lG fmw4mECiZ3llxVwUULwkEgs7GLeFCG7iNrERnJNKmFemPfzCUzuZ9FR1xxwKo39s2yoT dtkMdHTqnllM3Tvk0xzQKrHuIiuC1YBK/Ksc7la8ul/X4wNGuQDFMIgVf7rb16k3ePzg X0S2PbLglxmRGlWTW96xxPLUkzs9EgeSGz6QXrpUtGARKjpbmpEv9/iK/3S0AejH7k8C A3Tg== X-Gm-Message-State: ACrzQf34KKeBzEVySVq6w5EEwNhSIFPLujWwoCOSX08QmK1+BS2KiIqp Qq3F1HXzartu77PFcUZe2A/KgASRxiXE0jUCrMlqkQKk0KboKiMxRMkKdBk2/8wSJogpsQ0Y1MD 1IQEDb21HbmpyUOnPWYk+fza7IeZ83Vw3ySZOTTk1qVSFcU/9FzZUtZALu+VKiHCzC5qU X-Google-Smtp-Source: AMsMyM69ZGC3J4oPv+AapS4/k4CR/6UMRz23k3sl6qAhn4mplsg/x4HptQai9OIunmyslzAW5g/Bv1RxTqTqoL5P X-Received: from aaronlewis.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:2675]) (user=aaronlewis job=sendgmr) by 2002:a63:8a4a:0:b0:439:49b3:6586 with SMTP id y71-20020a638a4a000000b0043949b36586mr21140378pgd.44.1663695972606; Tue, 20 Sep 2022 10:46:12 -0700 (PDT) Date: Tue, 20 Sep 2022 17:45:57 +0000 In-Reply-To: <20220920174603.302510-1-aaronlewis@google.com> Mime-Version: 1.0 References: <20220920174603.302510-1-aaronlewis@google.com> X-Mailer: git-send-email 2.37.3.968.ga6b4b080e4-goog Message-ID: <20220920174603.302510-2-aaronlewis@google.com> Subject: [PATCH v5 1/7] kvm: x86/pmu: Correct the mask used in a pmu event filter lookup From: Aaron Lewis To: kvm@vger.kernel.org Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org When checking if a pmu event 
the guest is attempting to program should be filtered, only consider the event select + unit mask in that decision. Use an architecture specific mask to mask out all other bits, including bits 35:32 on Intel. Those bits are not part of the event select and should not be considered in that decision. Fixes: 66bb8a065f5a ("KVM: x86: PMU Event Filter") Signed-off-by: Aaron Lewis Reviewed-by: Jim Mattson --- arch/x86/include/asm/kvm-x86-pmu-ops.h | 1 + arch/x86/kvm/pmu.c | 15 ++++++++++++++- arch/x86/kvm/pmu.h | 1 + arch/x86/kvm/svm/pmu.c | 6 ++++++ arch/x86/kvm/vmx/pmu_intel.c | 6 ++++++ 5 files changed, 28 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h index c17e3e96fc1d..e0280cc3e6e4 100644 --- a/arch/x86/include/asm/kvm-x86-pmu-ops.h +++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h @@ -24,6 +24,7 @@ KVM_X86_PMU_OP(set_msr) KVM_X86_PMU_OP(refresh) KVM_X86_PMU_OP(init) KVM_X86_PMU_OP(reset) +KVM_X86_PMU_OP(get_eventsel_event_mask) KVM_X86_PMU_OP_OPTIONAL(deliver_pmi) KVM_X86_PMU_OP_OPTIONAL(cleanup) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 02f9e4f245bd..98f383789579 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -247,6 +247,19 @@ static int cmp_u64(const void *pa, const void *pb) return (a > b) - (a < b); } +static inline u64 get_event_select(u64 eventsel) +{ + return eventsel & static_call(kvm_x86_pmu_get_eventsel_event_mask)(); +} + +static inline u64 get_raw_event(u64 eventsel) +{ + u64 event_select = get_event_select(eventsel); + u64 unit_mask = eventsel & ARCH_PERFMON_EVENTSEL_UMASK; + + return event_select | unit_mask; +} + static bool check_pmu_event_filter(struct kvm_pmc *pmc) { struct kvm_pmu_event_filter *filter; @@ -263,7 +276,7 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc) goto out; if (pmc_is_gp(pmc)) { - key = pmc->eventsel & AMD64_RAW_EVENT_MASK_NB; + key = get_raw_event(pmc->eventsel); if (bsearch(&key, filter->events, filter->nevents, sizeof(__u64), cmp_u64)) allow_event = filter->action == KVM_PMU_EVENT_ALLOW; diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h index 5cc5721f260b..4e22f9f55400 100644 --- a/arch/x86/kvm/pmu.h +++ b/arch/x86/kvm/pmu.h @@ -40,6 +40,7 @@ struct kvm_pmu_ops { void (*reset)(struct kvm_vcpu *vcpu); void (*deliver_pmi)(struct kvm_vcpu *vcpu); void (*cleanup)(struct kvm_vcpu *vcpu); + u64 (*get_eventsel_event_mask)(void); }; void kvm_pmu_ops_update(const struct kvm_pmu_ops *pmu_ops); diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c index f24613a108c5..0b35eb04aa60 100644 --- a/arch/x86/kvm/svm/pmu.c +++ b/arch/x86/kvm/svm/pmu.c @@ -294,6 +294,11 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu) } } +static u64 amd_pmu_get_eventsel_event_mask(void) +{ + return AMD64_EVENTSEL_EVENT; +} + struct kvm_pmu_ops amd_pmu_ops __initdata = { .hw_event_available = amd_hw_event_available, .pmc_is_enabled = amd_pmc_is_enabled, @@ -307,4 +312,5 @@ struct kvm_pmu_ops amd_pmu_ops __initdata = { .refresh = amd_pmu_refresh, .init = amd_pmu_init, .reset = amd_pmu_reset, + .get_eventsel_event_mask = amd_pmu_get_eventsel_event_mask, }; diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index c399637a3a79..0aec7576af0c 100644 --- a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -793,6 +793,11 @@ void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu) } } +static u64 intel_pmu_get_eventsel_event_mask(void) +{ + return ARCH_PERFMON_EVENTSEL_EVENT; +} + struct kvm_pmu_ops intel_pmu_ops __initdata = { 
.hw_event_available = intel_hw_event_available, .pmc_is_enabled = intel_pmc_is_enabled, @@ -808,4 +813,5 @@ struct kvm_pmu_ops intel_pmu_ops __initdata = { .reset = intel_pmu_reset, .deliver_pmi = intel_pmu_deliver_pmi, .cleanup = intel_pmu_cleanup, + .get_eventsel_event_mask = intel_pmu_get_eventsel_event_mask, }; From patchwork Tue Sep 20 17:45:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Aaron Lewis X-Patchwork-Id: 12982468 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2F082C6FA82 for ; Tue, 20 Sep 2022 17:46:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231314AbiITRqe (ORCPT ); Tue, 20 Sep 2022 13:46:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43056 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229713AbiITRqP (ORCPT ); Tue, 20 Sep 2022 13:46:15 -0400 Received: from mail-pj1-x1049.google.com (mail-pj1-x1049.google.com [IPv6:2607:f8b0:4864:20::1049]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A83BF642D3 for ; Tue, 20 Sep 2022 10:46:14 -0700 (PDT) Received: by mail-pj1-x1049.google.com with SMTP id f16-20020a17090a4a9000b001f234757bbbso1947655pjh.6 for ; Tue, 20 Sep 2022 10:46:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=h15zewX4aPbMc4C2GYlo8ZIeVJfYD6uzoHaarPDW2js=; b=LFcjKoMZ7qt8MFE8KkJndripEch7U9Hv4+rLs7BPAPQigGoG+c273uRqtWabO0F8rp 3z2de34RZk4AZDaRKHNWwBAhJYKl+vaKQeifsduPBTqgTkVqqTle4x29A3B54TuOCXkC A8y/5BLu03/HCwsm68FT95ZmlDiPXhbEblrCuhqCPzkHlRaqALnJbl19c9oWudyYheZi ww9gDlJv7i0xeLD62qCNhSlVnSrnJF1vdCkk18cKZcC0z0mFRst2J+d6Bqg3aaEnQCbo EA3lbzNcRFn1Nk8qN+0Anv13wqyHCHHqTMz0m3LF5tpdcMDRJqLLDRH60FE5zIeludxg 1TOQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=h15zewX4aPbMc4C2GYlo8ZIeVJfYD6uzoHaarPDW2js=; b=LVMK/EtvCH4taIB0i7CyqZ0Fg6BFa3ps5qH/9H9vmdOQAG0Cd5cZCY+S6A0LGNLvbC 5y40PyAkOeDuzqJ89A1xzc+z97t3W8uCX4llUskpuPWnUx8iVF+nL7ha+oMIyqacSuoz QOe7MaQe/1L7r2HwNpMkaRhqeXEQCfuU7f3FLA839cjSV7+FPEDJQLrdV5CHbhsHlz4K 0f0vIL/MINsfV7DXOMvG24PD0Bnv2BeM4A1E9KaGn471S4xKIzASDSrqbGYshFyJpy5w W+odojKMHqs32UI2JKChEuedOu2BPLKQlkuSkppG8iPKljASM6viiMeAlT/vSWjo76bG u3SA== X-Gm-Message-State: ACrzQf2TK4w+368sYlTeU+MHyvG3Klw+Ur0F8bgbNjSwni4Pc7XbJTK5 8jB3LSTdBjZ30ODxyFydRhFRX0oklGnKx3KY46LPKbQ+96ttvjNXNf7FgFBpv+1ij+IngHBX5g0 EQivAIts/pzyhDIMowy856aRAFNzXG5Q6ejC+FkoOOyVjPi4dGP+kkfMhVBQaA2M8f8ZY X-Google-Smtp-Source: AMsMyM7evh2lG0Zo/9sECnfbRk5U8mv3yZhAhC9KzMoV0oHB/Ebzcxs+65Vn688SY+f7gFfTlsMlM/lDbybionq7 X-Received: from aaronlewis.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:2675]) (user=aaronlewis job=sendgmr) by 2002:a17:90b:4c92:b0:202:fcca:60ae with SMTP id my18-20020a17090b4c9200b00202fcca60aemr5185995pjb.52.1663695974100; Tue, 20 Sep 2022 10:46:14 -0700 (PDT) Date: Tue, 20 Sep 2022 17:45:58 +0000 In-Reply-To: <20220920174603.302510-1-aaronlewis@google.com> Mime-Version: 1.0 References: <20220920174603.302510-1-aaronlewis@google.com> X-Mailer: git-send-email 2.37.3.968.ga6b4b080e4-goog Message-ID: 
<20220920174603.302510-3-aaronlewis@google.com> Subject: [PATCH v5 2/7] kvm: x86/pmu: Remove invalid raw events from the pmu event filter From: Aaron Lewis To: kvm@vger.kernel.org Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org If a raw event is invalid, i.e. bits set outside the event select + unit mask, the event will never match the search, so it's pointless to have it in the list. Opt for a shorter list by removing invalid raw events. Signed-off-by: Aaron Lewis Reviewed-by: Jim Mattson --- arch/x86/kvm/pmu.c | 34 ++++++++++++++++++++++++++++++++++ 1 file changed, 34 insertions(+) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 98f383789579..e7d94e6b7f28 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -577,6 +577,38 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id) } EXPORT_SYMBOL_GPL(kvm_pmu_trigger_event); +static inline u64 get_event_filter_mask(void) +{ + u64 event_select_mask = + static_call(kvm_x86_pmu_get_eventsel_event_mask)(); + + return event_select_mask | ARCH_PERFMON_EVENTSEL_UMASK; +} + +static inline bool is_event_valid(u64 event, u64 mask) +{ + return !(event & ~mask); +} + +static void remove_invalid_raw_events(struct kvm_pmu_event_filter *filter) +{ + u64 raw_mask; + int i, j; + + if (filter->flags) + return; + + raw_mask = get_event_filter_mask(); + for (i = 0, j = 0; i < filter->nevents; i++) { + u64 raw_event = filter->events[i]; + + if (is_event_valid(raw_event, raw_mask)) + filter->events[j++] = raw_event; + } + + filter->nevents = j; +} + int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp) { struct kvm_pmu_event_filter tmp, *filter; @@ -608,6 +640,8 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp) /* Ensure nevents can't be changed between the user copies. */ *filter = tmp; + remove_invalid_raw_events(filter); + /* * Sort the in-kernel list so that we can search it with bsearch. 
*/ From patchwork Tue Sep 20 17:45:59 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Aaron Lewis X-Patchwork-Id: 12982469 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C3517C54EE9 for ; Tue, 20 Sep 2022 17:46:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231279AbiITRqg (ORCPT ); Tue, 20 Sep 2022 13:46:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43036 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230193AbiITRq3 (ORCPT ); Tue, 20 Sep 2022 13:46:29 -0400 Received: from mail-pg1-x54a.google.com (mail-pg1-x54a.google.com [IPv6:2607:f8b0:4864:20::54a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9CCA161DB7 for ; Tue, 20 Sep 2022 10:46:16 -0700 (PDT) Received: by mail-pg1-x54a.google.com with SMTP id w1-20020a63d741000000b0042c254a4ccdso2027181pgi.15 for ; Tue, 20 Sep 2022 10:46:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=EaA8Fc7N5dB/C4oEm/Nmn+GWZpgy4SP7/lSYzAJxUBo=; b=Hlq60qOb3jqa/GM7T5dzpxsdzDT8Ik+gNna9mkX2J+OoIkF79pJ113e++2kLu0NOIP Amgkkg7UMMyxYuSdbjo34wvxSRAb+yHhoqrXxrDP4EzofwHUulx3oOEy+IosGtcEU5R4 zamWh5qw3RsVI8GHr0/VKqg1CbcbFbaxdxqOddCWiJGC/lF5F6/xYqD2nFP+pXTt9IXi XDupNQvwNes666btgSKbTT7GYdbX4t1XJ4xwvWjQ3x4tmzQMm0K7uNsfaz5SjlqckUrY xzd9LlsQmm0AgixLARCFlJHdZzu9Y0jf3WG6Sg7psgS2oUi1a1pIq/4qyBdBE6nTflX8 9NCQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=EaA8Fc7N5dB/C4oEm/Nmn+GWZpgy4SP7/lSYzAJxUBo=; b=LBXc5aSoxzHnW0WR723kXpdNNzYutFzzkvtmpIhINEjsJEBQrm9/zs+89vSVtHYHyJ Ci6Q3O8513PLlZjrWNX6pmRvHaINEpEXiURhU/otcN0ryppKeqmwQWMjEBM5KtMD69/z ziH4SHLOn/FX3ow3184O8yTaLVXxdhK+TZzE1MVtfW5aLNrICiJ2j2+xk6ZovwyHm3IT iUaJncWZpuAlxXsrzKeqUchY3e2NAlSZLGviozxJYIXjnIh5BBg9/yRfDZvqw8fJwtbc dJ3UGr7dNdEtvbs890fRnDUVzKQvLNdOvNmT9NOnAGqDSd9RkytqGSMLttCdF/Evokks 3fUw== X-Gm-Message-State: ACrzQf3QkTUdDHRB+hH5b7uXdLTnS7O8sNuJfClbF8XSMEyO2i3s4v2g 7r6cujLsFq/5yioFUz+TU2AB7LPX2BimWzsgtlyaN8M5xMoti6d4fl5VaOpRvLzacsLbzEFpvPa mcpwPHpOIXjyJ1HCkiAZ4S1+VrzeQVJprSh3VaSUjEh1mUCNGQ4GijcudFKHSrMyCjG0w X-Google-Smtp-Source: AMsMyM52E493mBM6RLfSaQmPLv5gdPFR35Dgg9WWu57cO0Cnp/7xoKDjjYeeq0lWRKurpmF3zV9RXI+wDV1Xdm3s X-Received: from aaronlewis.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:2675]) (user=aaronlewis job=sendgmr) by 2002:a17:903:1249:b0:178:639a:1ab1 with SMTP id u9-20020a170903124900b00178639a1ab1mr786014plh.64.1663695975772; Tue, 20 Sep 2022 10:46:15 -0700 (PDT) Date: Tue, 20 Sep 2022 17:45:59 +0000 In-Reply-To: <20220920174603.302510-1-aaronlewis@google.com> Mime-Version: 1.0 References: <20220920174603.302510-1-aaronlewis@google.com> X-Mailer: git-send-email 2.37.3.968.ga6b4b080e4-goog Message-ID: <20220920174603.302510-4-aaronlewis@google.com> Subject: [PATCH v5 3/7] kvm: x86/pmu: prepare the pmu event filter for masked events From: Aaron Lewis To: kvm@vger.kernel.org Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Create an internal 
representation for filter events to abstract the events userspace uses from the events the kernel uses. That will allow the kernel to use a common event and a common code path between the different types of filter events used in userspace once masked events are introduced. No functional changes intended Signed-off-by: Aaron Lewis Reviewed-by: Jim Mattson --- arch/x86/kvm/pmu.c | 116 ++++++++++++++++++++++++++++++++------------- arch/x86/kvm/pmu.h | 16 +++++++ 2 files changed, 98 insertions(+), 34 deletions(-) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index e7d94e6b7f28..7ce8bfafea91 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -239,6 +239,19 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc) return true; } +static inline u16 get_event_select(u64 eventsel) +{ + u64 e = eventsel & + static_call(kvm_x86_pmu_get_eventsel_event_mask)(); + + return (e & ARCH_PERFMON_EVENTSEL_EVENT) | ((e >> 24) & 0xF00ULL); +} + +static inline u8 get_unit_mask(u64 eventsel) +{ + return (eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8; +} + static int cmp_u64(const void *pa, const void *pb) { u64 a = *(u64 *)pa; @@ -247,53 +260,61 @@ static int cmp_u64(const void *pa, const void *pb) return (a > b) - (a < b); } -static inline u64 get_event_select(u64 eventsel) +static u64 *find_filter_entry(struct kvm_pmu_event_filter *filter, u64 key) +{ + return bsearch(&key, filter->events, filter->nevents, + sizeof(filter->events[0]), cmp_u64); +} + +static bool filter_contains_match(struct kvm_pmu_event_filter *filter, + u64 eventsel) { - return eventsel & static_call(kvm_x86_pmu_get_eventsel_event_mask)(); + u16 event_select = get_event_select(eventsel); + u8 unit_mask = get_unit_mask(eventsel); + u64 key; + + key = KVM_PMU_ENCODE_FILTER_ENTRY(event_select, unit_mask); + return find_filter_entry(filter, key); } -static inline u64 get_raw_event(u64 eventsel) +static bool is_gp_event_allowed(struct kvm_pmu_event_filter *filter, u64 eventsel) { - u64 event_select = get_event_select(eventsel); - u64 unit_mask = eventsel & ARCH_PERFMON_EVENTSEL_UMASK; + if (filter_contains_match(filter, eventsel)) + return filter->action == KVM_PMU_EVENT_ALLOW; - return event_select | unit_mask; + return filter->action == KVM_PMU_EVENT_DENY; +} + +static bool is_fixed_event_allowed(struct kvm_pmu_event_filter *filter, int idx) +{ + int fixed_idx = idx - INTEL_PMC_IDX_FIXED; + + if (filter->action == KVM_PMU_EVENT_DENY && + test_bit(fixed_idx, (ulong *)&filter->fixed_counter_bitmap)) + return false; + if (filter->action == KVM_PMU_EVENT_ALLOW && + !test_bit(fixed_idx, (ulong *)&filter->fixed_counter_bitmap)) + return false; + + return true; } static bool check_pmu_event_filter(struct kvm_pmc *pmc) { struct kvm_pmu_event_filter *filter; struct kvm *kvm = pmc->vcpu->kvm; - bool allow_event = true; - __u64 key; - int idx; if (!static_call(kvm_x86_pmu_hw_event_available)(pmc)) return false; filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu); if (!filter) - goto out; + return true; - if (pmc_is_gp(pmc)) { - key = get_raw_event(pmc->eventsel); - if (bsearch(&key, filter->events, filter->nevents, - sizeof(__u64), cmp_u64)) - allow_event = filter->action == KVM_PMU_EVENT_ALLOW; - else - allow_event = filter->action == KVM_PMU_EVENT_DENY; - } else { - idx = pmc->idx - INTEL_PMC_IDX_FIXED; - if (filter->action == KVM_PMU_EVENT_DENY && - test_bit(idx, (ulong *)&filter->fixed_counter_bitmap)) - allow_event = false; - if (filter->action == KVM_PMU_EVENT_ALLOW && - !test_bit(idx, (ulong 
*)&filter->fixed_counter_bitmap)) - allow_event = false; - } + if (pmc_is_gp(pmc)) + return is_gp_event_allowed(filter, pmc->eventsel); -out: - return allow_event; + return is_fixed_event_allowed(filter, pmc->idx); } void reprogram_counter(struct kvm_pmc *pmc) @@ -609,6 +630,38 @@ static void remove_invalid_raw_events(struct kvm_pmu_event_filter *filter) filter->nevents = j; } +static inline u64 encode_filter_entry(u64 event) +{ + u16 event_select = get_event_select(event); + u8 unit_mask = get_unit_mask(event); + + return KVM_PMU_ENCODE_FILTER_ENTRY(event_select, unit_mask); +} + +static void convert_to_filter_events(struct kvm_pmu_event_filter *filter) +{ + int i; + + for (i = 0; i < filter->nevents; i++) { + u64 e = filter->events[i]; + + filter->events[i] = encode_filter_entry(e); + } +} + +static void prepare_filter_events(struct kvm_pmu_event_filter *filter) +{ + remove_invalid_raw_events(filter); + + convert_to_filter_events(filter); + + /* + * Sort the in-kernel list so that we can search it with bsearch. + */ + sort(&filter->events, filter->nevents, sizeof(filter->events[0]), + cmp_u64, NULL); +} + int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp) { struct kvm_pmu_event_filter tmp, *filter; @@ -640,12 +693,7 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp) /* Ensure nevents can't be changed between the user copies. */ *filter = tmp; - remove_invalid_raw_events(filter); - - /* - * Sort the in-kernel list so that we can search it with bsearch. - */ - sort(&filter->events, filter->nevents, sizeof(__u64), cmp_u64, NULL); + prepare_filter_events(filter); mutex_lock(&kvm->lock); filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter, diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h index 4e22f9f55400..df4f81e5c685 100644 --- a/arch/x86/kvm/pmu.h +++ b/arch/x86/kvm/pmu.h @@ -205,4 +205,20 @@ bool is_vmware_backdoor_pmc(u32 pmc_idx); extern struct kvm_pmu_ops intel_pmu_ops; extern struct kvm_pmu_ops amd_pmu_ops; + +struct kvm_pmu_filter_entry { + union { + u64 raw; + struct { + u64 event_select:12; + u64 unit_mask:8; + u64 rsvd:44; + }; + }; +}; + +#define KVM_PMU_ENCODE_FILTER_ENTRY(event_select, unit_mask) \ + (((event_select) & 0xFFFULL) | \ + (((unit_mask) & 0xFFULL) << 12)) + #endif /* __KVM_X86_PMU_H */ From patchwork Tue Sep 20 17:46:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Aaron Lewis X-Patchwork-Id: 12982470 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C64C5C6FA82 for ; Tue, 20 Sep 2022 17:46:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231377AbiITRqi (ORCPT ); Tue, 20 Sep 2022 13:46:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43054 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231168AbiITRq3 (ORCPT ); Tue, 20 Sep 2022 13:46:29 -0400 Received: from mail-pg1-x54a.google.com (mail-pg1-x54a.google.com [IPv6:2607:f8b0:4864:20::54a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4D15C65543 for ; Tue, 20 Sep 2022 10:46:18 -0700 (PDT) Received: by mail-pg1-x54a.google.com with SMTP id h13-20020a63530d000000b0043087bbf72dso2023435pgb.17 for ; Tue, 20 Sep 2022 10:46:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
Date: Tue, 20 Sep 2022 17:46:00 +0000
In-Reply-To: <20220920174603.302510-1-aaronlewis@google.com>
Mime-Version: 1.0
References: <20220920174603.302510-1-aaronlewis@google.com>
X-Mailer: git-send-email 2.37.3.968.ga6b4b080e4-goog
Message-ID: <20220920174603.302510-5-aaronlewis@google.com>
Subject: [PATCH v5 4/7] kvm: x86/pmu: Introduce masked events to the pmu event filter
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

When building a list of filter events, it can sometimes be a challenge to fit
all the events needed to adequately restrict the guest into the limited space
available in the pmu event filter. This stems from the fact that the pmu event
filter requires each raw event (i.e. event select + unit mask) be listed, when
the intention might be to restrict the event select altogether, regardless of
its unit mask.

Instead of increasing the number of filter events in the pmu event filter, add
a new encoding that is able to do a more generalized match on the unit mask.

Introduce masked events as a new encoding that the pmu event filter
understands in addition to raw events. A masked event has the fields: mask,
match, and exclude. When filtering based on these events, the mask is applied
to the guest's unit mask to see if it matches the match value (i.e.
unit_mask & mask == match). The exclude bit can then be used to exclude events
from that match. E.g. for a given event select, if it's easier to say which
unit mask values shouldn't be filtered, a masked event can be set up to match
all possible unit mask values, then another masked event can be set up to
match the unit mask values that shouldn't be filtered.
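For example, here is an illustrative userspace sketch (not part of this patch)
of an allow-list filter that permits every unit mask of event select 0xD0
except unit mask 0x83. The event select and unit masks are arbitrary example
values, vm_fd is assumed to be an open KVM VM file descriptor, and includes
and error handling are omitted; the encoding macro is the one added to the
uapi header by this patch.

    /*
     * The include entry uses mask = 0, match = 0, so (unit_mask & 0) == 0
     * matches every unit mask of event select 0xD0; the exclude entry then
     * carves out unit mask 0x83 from that match.
     */
    struct kvm_pmu_event_filter *f;

    f = calloc(1, sizeof(*f) + 2 * sizeof(__u64));
    f->action = KVM_PMU_EVENT_ALLOW;
    f->flags = KVM_PMU_EVENT_FLAG_MASKED_EVENTS;
    f->nevents = 2;
    f->events[0] = KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(0xD0, 0x00, 0x00, false);
    f->events[1] = KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(0xD0, 0xFF, 0x83, true);
    ioctl(vm_fd, KVM_SET_PMU_EVENT_FILTER, f);
    free(f);

With a filter like this in place, the guest can program event 0xD0 with any
unit mask other than 0x83.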
Userspace can query to see if this feature exists by looking for the capability, KVM_CAP_PMU_EVENT_MASKED_EVENTS. This feature is enabled by setting the flags field in the pmu event filter to KVM_PMU_EVENT_FLAG_MASKED_EVENTS. Events can be encoded by using KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(). It is an error to have a bit set outside the valid bits for a masked event, and calls to KVM_SET_PMU_EVENT_FILTER will return -EINVAL in such cases, including the high bits (11:8) of the event select if called on Intel. Signed-off-by: Aaron Lewis Reviewed-by: Jim Mattson --- Documentation/virt/kvm/api.rst | 82 ++++++++++++++++-- arch/x86/include/uapi/asm/kvm.h | 28 ++++++ arch/x86/kvm/pmu.c | 145 +++++++++++++++++++++++++++----- arch/x86/kvm/pmu.h | 32 +++++-- arch/x86/kvm/x86.c | 1 + include/uapi/linux/kvm.h | 1 + 6 files changed, 258 insertions(+), 31 deletions(-) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index abd7c32126ce..16c3e6fd4ed7 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -5029,6 +5029,15 @@ using this ioctl. :Parameters: struct kvm_pmu_event_filter (in) :Returns: 0 on success, -1 on error +Errors: + + ====== ============================================================ + EFAULT args[0] cannot be accessed + EINVAL args[0] contains invalid data in the filter or filter events + E2BIG nevents is too large + EBUSY not enough memory to allocate the filter + ====== ============================================================ + :: struct kvm_pmu_event_filter { @@ -5040,14 +5049,73 @@ using this ioctl. __u64 events[0]; }; -This ioctl restricts the set of PMU events that the guest can program. -The argument holds a list of events which will be allowed or denied. -The eventsel+umask of each event the guest attempts to program is compared -against the events field to determine whether the guest should have access. -The events field only controls general purpose counters; fixed purpose -counters are controlled by the fixed_counter_bitmap. +This ioctl restricts the set of PMU events the guest can program by limiting +which event select and unit mask combinations are permitted. + +The argument holds a list of filter events which will be allowed or denied. + +Filter events only control general purpose counters; fixed purpose counters +are controlled by the fixed_counter_bitmap. + +Valid values for 'flags':: + +``0`` + +To use this mode, clear the 'flags' field. + +In this mode each filter event will contain an event select + unit mask. + +When the guest attempts to program the PMU the event select + unit mask from +the event select register being programmed is compared against the filter +events to determine whether the guest should have access. + +``KVM_PMU_EVENT_FLAG_MASKED_EVENTS`` +:Capability: KVM_CAP_PMU_EVENT_MASKED_EVENTS + +In this mode each filter event will contain an event select, mask, match, and +exclude value. + +When the guest attempts to program the PMU, these steps are followed in +determining if the guest should have access: + 1. Match the event select from the guest against the filter events. + 2. If a match is found, match the guest's unit mask to the mask and match + values of the included filter events. + I.e. (unit mask & mask) == match && !exclude. + 3. If a match is found, match the guest's unit mask to the mask and match + values of the excluded filter events. + I.e. (unit mask & mask) == match && exclude. + 4. + a. If an included match is found and an excluded match is not found, filter + the event. + b. 
For everything else, do not filter the event. + 5. + a. If the event is filtered and it's an allow list, allow the guest to + program the event. + b. If the event is filtered and it's a deny list, do not allow the guest to + program the event. + +To encode a filter event use: + KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(). + +To access individual components in a masked entry use: +:: + struct kvm_pmu_event_masked_entry { + union { + __u64 raw; + struct { + /* event_select bits 11:8 are only valid on AMD. */ + __u64 event_select:12; + __u64 mask:8; + __u64 match:8; + __u64 exclude:1; + __u64 rsvd:35; + }; + }; + }; -No flags are defined yet, the field must be zero. +-EINVAL will be returned if the 'rsvd' field is not zero or if any of +the high bits (11:8) in the 'event_select' field are set when called +on Intel. Valid values for 'action':: diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h index 46de10a809ec..c82400a06c8b 100644 --- a/arch/x86/include/uapi/asm/kvm.h +++ b/arch/x86/include/uapi/asm/kvm.h @@ -528,6 +528,34 @@ struct kvm_pmu_event_filter { #define KVM_PMU_EVENT_ALLOW 0 #define KVM_PMU_EVENT_DENY 1 +#define KVM_PMU_EVENT_FLAG_MASKED_EVENTS BIT(0) +#define KVM_PMU_EVENT_FLAGS_VALID_MASK (KVM_PMU_EVENT_FLAG_MASKED_EVENTS) + +struct kvm_pmu_event_masked_entry { + union { + __u64 raw; + struct { + /* event_select bits 11:8 are only valid on AMD. */ + __u64 event_select:12; + __u64 mask:8; + __u64 match:8; + __u64 exclude:1; + __u64 rsvd:35; + }; + }; +}; + +#define KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(event_select, mask, match, exclude) \ + (((event_select) & 0xFFFULL) | \ + (((mask) & 0xFFULL) << 12) | \ + (((match) & 0xFFULL) << 20) | \ + ((__u64)(!!(exclude)) << 28)) + +#define KVM_PMU_EVENT_MASKED_EVENTSEL_EVENT (GENMASK_ULL(11, 0)) +#define KVM_PMU_EVENT_MASKED_EVENTSEL_MASK (GENMASK_ULL(19, 12)) +#define KVM_PMU_EVENT_MASKED_EVENTSEL_MATCH (GENMASK_ULL(27, 20)) +#define KVM_PMU_EVENT_MASKED_EVENTSEL_EXCLUDE (BIT_ULL(28)) + /* for KVM_{GET,SET,HAS}_DEVICE_ATTR */ #define KVM_VCPU_TSC_CTRL 0 /* control group for the timestamp counter (TSC) */ #define KVM_VCPU_TSC_OFFSET 0 /* attribute for the TSC offset */ diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 7ce8bfafea91..b188ddb23f75 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -252,34 +252,61 @@ static inline u8 get_unit_mask(u64 eventsel) return (eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8; } -static int cmp_u64(const void *pa, const void *pb) +static int filter_event_cmp(const void *pa, const void *pb) { - u64 a = *(u64 *)pa; - u64 b = *(u64 *)pb; + u64 a = *(u64 *)pa & (KVM_PMU_FILTER_EVENTSEL_INDEX | + KVM_PMU_FILTER_EVENTSEL_EVENT | + KVM_PMU_FILTER_EVENTSEL_EXCLUDE); + u64 b = *(u64 *)pb & (KVM_PMU_FILTER_EVENTSEL_INDEX | + KVM_PMU_FILTER_EVENTSEL_EVENT | + KVM_PMU_FILTER_EVENTSEL_EXCLUDE); return (a > b) - (a < b); } -static u64 *find_filter_entry(struct kvm_pmu_event_filter *filter, u64 key) +static int find_filter_index(struct kvm_pmu_event_filter *filter, u64 key) { - return bsearch(&key, filter->events, filter->nevents, - sizeof(filter->events[0]), cmp_u64); + u64 *fe = bsearch(&key, filter->events, filter->nevents, + sizeof(filter->events[0]), filter_event_cmp); + + if (!fe) + return -1; + + return fe - filter->events; } static bool filter_contains_match(struct kvm_pmu_event_filter *filter, - u64 eventsel) + u64 eventsel, bool exclude) { u16 event_select = get_event_select(eventsel); u8 unit_mask = get_unit_mask(eventsel); + struct kvm_pmu_filter_entry fe; + int i, index; 
u64 key; - key = KVM_PMU_ENCODE_FILTER_ENTRY(event_select, unit_mask); - return find_filter_entry(filter, key); + key = KVM_PMU_ENCODE_FILTER_ENTRY(event_select, 0, 0, exclude); + index = find_filter_index(filter, key); + if (index < 0) + return false; + + for (i = index; i < filter->nevents; i++) { + fe.raw = filter->events[i]; + + if (fe.event_select != event_select || + fe.exclude != exclude) + break; + + if ((unit_mask & fe.mask) == fe.match) + return true; + } + + return false; } static bool is_gp_event_allowed(struct kvm_pmu_event_filter *filter, u64 eventsel) { - if (filter_contains_match(filter, eventsel)) + if (filter_contains_match(filter, eventsel, false) && + !filter_contains_match(filter, eventsel, true)) return filter->action == KVM_PMU_EVENT_ALLOW; return filter->action == KVM_PMU_EVENT_DENY; @@ -598,11 +625,20 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id) } EXPORT_SYMBOL_GPL(kvm_pmu_trigger_event); -static inline u64 get_event_filter_mask(void) +static inline u64 get_event_filter_mask(u32 flags) { u64 event_select_mask = static_call(kvm_x86_pmu_get_eventsel_event_mask)(); + if (flags & KVM_PMU_EVENT_FLAG_MASKED_EVENTS) { + u64 masked_eventsel_event = get_event_select(event_select_mask); + + return masked_eventsel_event | + KVM_PMU_EVENT_MASKED_EVENTSEL_MASK | + KVM_PMU_EVENT_MASKED_EVENTSEL_MATCH | + KVM_PMU_EVENT_MASKED_EVENTSEL_EXCLUDE; + } + return event_select_mask | ARCH_PERFMON_EVENTSEL_UMASK; } @@ -611,6 +647,29 @@ static inline bool is_event_valid(u64 event, u64 mask) return !(event & ~mask); } +static bool is_filter_valid(struct kvm_pmu_event_filter *filter) +{ + u64 event_mask; + int i; + + /* To maintain backwards compatibility don't validate raw events. */ + if (!filter->flags) + return true; + + event_mask = get_event_filter_mask(filter->flags); + for (i = 0; i < filter->nevents; i++) { + if (!is_event_valid(filter->events[i], event_mask)) + return false; + } + + for (i = 0; i < ARRAY_SIZE(filter->pad); i++) { + if (filter->pad[i]) + return false; + } + + return true; +} + static void remove_invalid_raw_events(struct kvm_pmu_event_filter *filter) { u64 raw_mask; @@ -619,7 +678,7 @@ static void remove_invalid_raw_events(struct kvm_pmu_event_filter *filter) if (filter->flags) return; - raw_mask = get_event_filter_mask(); + raw_mask = get_event_filter_mask(filter->flags); for (i = 0, j = 0; i < filter->nevents; i++) { u64 raw_event = filter->events[i]; @@ -630,12 +689,27 @@ static void remove_invalid_raw_events(struct kvm_pmu_event_filter *filter) filter->nevents = j; } -static inline u64 encode_filter_entry(u64 event) +static inline u64 encode_filter_entry(u64 event, u32 flags) { - u16 event_select = get_event_select(event); - u8 unit_mask = get_unit_mask(event); + u16 event_select; + u8 mask, match; + bool exclude; - return KVM_PMU_ENCODE_FILTER_ENTRY(event_select, unit_mask); + if (flags & KVM_PMU_EVENT_FLAG_MASKED_EVENTS) { + struct kvm_pmu_event_masked_entry me = { .raw = event }; + + mask = me.mask; + match = me.match; + event_select = me.event_select; + exclude = me.exclude; + } else { + mask = 0xFF; + match = get_unit_mask(event); + event_select = get_event_select(event); + exclude = false; + } + + return KVM_PMU_ENCODE_FILTER_ENTRY(event_select, mask, match, exclude); } static void convert_to_filter_events(struct kvm_pmu_event_filter *filter) @@ -645,7 +719,34 @@ static void convert_to_filter_events(struct kvm_pmu_event_filter *filter) for (i = 0; i < filter->nevents; i++) { u64 e = filter->events[i]; - filter->events[i] = 
encode_filter_entry(e); + filter->events[i] = encode_filter_entry(e, filter->flags); + } +} + +/* + * Sort will order the list by exclude, then event select. This function will + * then index the sublists of event selects such that when a search is done on + * the list, the head of the event select sublist is returned. This simplifies + * the logic in filter_contains_match() when walking the list. + */ +static void index_filter_events(struct kvm_pmu_event_filter *filter) +{ + struct kvm_pmu_filter_entry *prev, *curr; + int i, index = 0; + + if (filter->nevents) + prev = (struct kvm_pmu_filter_entry *)(filter->events); + + for (i = 0; i < filter->nevents; i++) { + curr = (struct kvm_pmu_filter_entry *)(&filter->events[i]); + + if (curr->event_select != prev->event_select || + curr->exclude != prev->exclude) { + index = 0; + prev = curr; + } + + curr->event_index = index++; } } @@ -659,7 +760,9 @@ static void prepare_filter_events(struct kvm_pmu_event_filter *filter) * Sort the in-kernel list so that we can search it with bsearch. */ sort(&filter->events, filter->nevents, sizeof(filter->events[0]), - cmp_u64, NULL); + filter_event_cmp, NULL); + + index_filter_events(filter); } int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp) @@ -675,7 +778,7 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp) tmp.action != KVM_PMU_EVENT_DENY) return -EINVAL; - if (tmp.flags != 0) + if (tmp.flags & ~KVM_PMU_EVENT_FLAGS_VALID_MASK) return -EINVAL; if (tmp.nevents > KVM_PMU_EVENT_FILTER_MAX_EVENTS) @@ -693,6 +796,10 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp) /* Ensure nevents can't be changed between the user copies. */ *filter = tmp; + r = -EINVAL; + if (!is_filter_valid(filter)) + goto cleanup; + prepare_filter_events(filter); mutex_lock(&kvm->lock); diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h index df4f81e5c685..ffc07e4d8d71 100644 --- a/arch/x86/kvm/pmu.h +++ b/arch/x86/kvm/pmu.h @@ -206,19 +206,41 @@ bool is_vmware_backdoor_pmc(u32 pmc_idx); extern struct kvm_pmu_ops intel_pmu_ops; extern struct kvm_pmu_ops amd_pmu_ops; +#define KVM_PMU_FILTER_EVENTSEL_INDEX (GENMASK_ULL(27, 16)) +#define KVM_PMU_FILTER_EVENTSEL_EVENT (GENMASK_ULL(39, 28)) +#define KVM_PMU_FILTER_EVENTSEL_EXCLUDE (BIT_ULL(40)) + +/* + * Internal representation by which all filter events converge (e.g. masked + * events, raw events). That way there is only one way filter events + * behave once the filter is set. + * + * When filter events are converted into this format then sorted, the + * resulting list naturally ends up in two sublists. One for the 'include + * list' and one for the 'exclude list'. These sublists are further broken + * down into sublists ordered by their event select. After that, the + * event select sublists are indexed such that a search for: exclude = n, + * event_select = n, and event_index = 0 will return the head of an event + * select sublist that can be walked to see if a match exists. 
+ */ struct kvm_pmu_filter_entry { union { u64 raw; struct { + u64 mask:8; + u64 match:8; + u64 event_index:12; u64 event_select:12; - u64 unit_mask:8; - u64 rsvd:44; + u64 exclude:1; + u64 rsvd:23; }; }; }; -#define KVM_PMU_ENCODE_FILTER_ENTRY(event_select, unit_mask) \ - (((event_select) & 0xFFFULL) | \ - (((unit_mask) & 0xFFULL) << 12)) +#define KVM_PMU_ENCODE_FILTER_ENTRY(event_select, mask, match, exclude) \ + (((mask) & 0xFFULL) | \ + (((match) & 0xFFULL) << 8) | \ + (((event_select) & 0xFFFULL) << 28) | \ + ((u64)(!!(exclude)) << 40)) #endif /* __KVM_X86_PMU_H */ diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 43a6a7efc6ec..0a6c5e1aca0f 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -4374,6 +4374,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) case KVM_CAP_SPLIT_IRQCHIP: case KVM_CAP_IMMEDIATE_EXIT: case KVM_CAP_PMU_EVENT_FILTER: + case KVM_CAP_PMU_EVENT_MASKED_EVENTS: case KVM_CAP_GET_MSR_FEATURES: case KVM_CAP_MSR_PLATFORM_INFO: case KVM_CAP_EXCEPTION_PAYLOAD: diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index eed0315a77a6..685034baea9d 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1177,6 +1177,7 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_VM_DISABLE_NX_HUGE_PAGES 220 #define KVM_CAP_S390_ZPCI_OP 221 #define KVM_CAP_S390_CPU_TOPOLOGY 222 +#define KVM_CAP_PMU_EVENT_MASKED_EVENTS 223 #ifdef KVM_CAP_IRQ_ROUTING From patchwork Tue Sep 20 17:46:01 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Aaron Lewis X-Patchwork-Id: 12982471 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 01802C6FA90 for ; Tue, 20 Sep 2022 17:46:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230474AbiITRqk (ORCPT ); Tue, 20 Sep 2022 13:46:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43056 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231230AbiITRqa (ORCPT ); Tue, 20 Sep 2022 13:46:30 -0400 Received: from mail-pl1-x649.google.com (mail-pl1-x649.google.com [IPv6:2607:f8b0:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E83816DF94 for ; Tue, 20 Sep 2022 10:46:19 -0700 (PDT) Received: by mail-pl1-x649.google.com with SMTP id s16-20020a170902ea1000b00176cf52a348so2149388plg.3 for ; Tue, 20 Sep 2022 10:46:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=rzYXp5WQnEAZeKlyKnAxtsq7oHUJ3RKxDsON93GIG1k=; b=SjBTtMmJDng3GlRvJDzNLF/o+wjrva1GqQXUWPbfhwkEoo7pSwZuNteLmNZQHb8D2d FmVaWQeFV/ZWllM1a4bP4Gn7/vX4oKPUz45w+m0rDgvBfNjU1WXtSxGG7c/qG7TwPhYc qIbnRStw58vDiVkqdg4dyPhKjRuwVdNa/v7rbgxUdBMLrvQowUKGsschcSjHaDayo8e1 oRMoWnBW0vdnIvwU5sDlbOWE+xPIAU1qaXhOmbDkYgiNdiNSLeBNTChGPaZoEJY3wtjl kJO40DUpRwKMGwrD6z2IkVvIOIm4hvL3XS7zcxr8Xb9MSGUjGjtonRrYGM4vGGCxI78X 1ycg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=rzYXp5WQnEAZeKlyKnAxtsq7oHUJ3RKxDsON93GIG1k=; b=LysJdymU4sLiVA1PtnD/YzJ7DSgLMx4lH1bzCgeh4SjEQQol0xBeoGY1HutZUcCz2t 
doltY1V2HlPE4nu6xjf634QxBDmHrFqnhVyjEa5ki+qA9jbNwhbvGxwryw11eotYGHj4 SF4gHeVQFub+A7w3nA5U9fmswvC3UK4AWi4rreBMGsS70Vr6Uwa4C7b4Fjbbc9RGdBo1 GofIpmCiR1FQysKAp0n4xXoUD3O2QDMrRUE+q9dH5WBnVizq3SHbTp+Xsb9TAUsTmfEX IS0tDUVkNmovEhDACdAX5A/oul1nDaNb67LDwPcJPhaAirNLYAQjZP7uVHVETxAmykD3 nOQw== X-Gm-Message-State: ACrzQf0/tTQiA+Dyyc44onl4XAr9Mibih18aH9zHu4muVOTSgxuxQZRZ owfWKPpOE6Kbl+nwbKYgtIGmId8oz1kRjGhns7yqmVd7JAx/vmCAjGTmRekm7MVTeF0GvUQKGk7 6/eLaGDdCHGdo+cm++0ZJVe599nf8O0OF8p+/hi521yTUjifER1UEJqMclpaEKYz9pUIp X-Google-Smtp-Source: AMsMyM73mIQYkxuHapgIoHSfAH1p/o3QuyxpyEz9KsafJ2ihR+z2GeJx4ZxV/NHQIlm7fFGk5s76NSA1O0Wb9Clw X-Received: from aaronlewis.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:2675]) (user=aaronlewis job=sendgmr) by 2002:aa7:88c7:0:b0:542:3229:8d11 with SMTP id k7-20020aa788c7000000b0054232298d11mr24884967pff.41.1663695979276; Tue, 20 Sep 2022 10:46:19 -0700 (PDT) Date: Tue, 20 Sep 2022 17:46:01 +0000 In-Reply-To: <20220920174603.302510-1-aaronlewis@google.com> Mime-Version: 1.0 References: <20220920174603.302510-1-aaronlewis@google.com> X-Mailer: git-send-email 2.37.3.968.ga6b4b080e4-goog Message-ID: <20220920174603.302510-6-aaronlewis@google.com> Subject: [PATCH v5 5/7] selftests: kvm/x86: Add flags when creating a pmu event filter From: Aaron Lewis To: kvm@vger.kernel.org Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Now that the flags field can be non-zero, pass it in when creating a pmu event filter. This is needed in preparation for testing masked events. No functional change intended. Signed-off-by: Aaron Lewis Reviewed-by: Jim Mattson --- .../testing/selftests/kvm/x86_64/pmu_event_filter_test.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c index ea4e259a1e2e..bd7054a53981 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c @@ -221,14 +221,15 @@ static struct kvm_pmu_event_filter *alloc_pmu_event_filter(uint32_t nevents) static struct kvm_pmu_event_filter * -create_pmu_event_filter(const uint64_t event_list[], - int nevents, uint32_t action) +create_pmu_event_filter(const uint64_t event_list[], int nevents, + uint32_t action, uint32_t flags) { struct kvm_pmu_event_filter *f; int i; f = alloc_pmu_event_filter(nevents); f->action = action; + f->flags = flags; for (i = 0; i < nevents; i++) f->events[i] = event_list[i]; @@ -239,7 +240,7 @@ static struct kvm_pmu_event_filter *event_filter(uint32_t action) { return create_pmu_event_filter(event_list, ARRAY_SIZE(event_list), - action); + action, 0); } /* @@ -286,7 +287,7 @@ static void test_amd_deny_list(struct kvm_vcpu *vcpu) struct kvm_pmu_event_filter *f; uint64_t count; - f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY); + f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0); count = test_with_filter(vcpu, f); free(f); From patchwork Tue Sep 20 17:46:02 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Aaron Lewis X-Patchwork-Id: 12982472 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 93ADFC54EE9 for ; Tue, 20 Sep 2022 17:46:43 +0000 (UTC) 
Date: Tue, 20 Sep 2022 17:46:02 +0000
In-Reply-To: <20220920174603.302510-1-aaronlewis@google.com>
Mime-Version: 1.0
References: <20220920174603.302510-1-aaronlewis@google.com>
X-Mailer: git-send-email 2.37.3.968.ga6b4b080e4-goog
Message-ID: <20220920174603.302510-7-aaronlewis@google.com>
Subject: [PATCH v5 6/7] selftests: kvm/x86: Add testing for KVM_SET_PMU_EVENT_FILTER
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

Test that masked events are not using invalid bits, and if they are, ensure
the pmu event filter is not accepted by KVM_SET_PMU_EVENT_FILTER. The only
valid bits that can be used for masked events are the ones set by
KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(), with one exception: if any of the high
bits (11:8) of the event select are set when using Intel, the PMU event
filter will fail.
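As an illustrative sketch (not part of the patch; the event select, mask, and
match values are arbitrary examples, and the layout restates the encoding
introduced earlier in this series):

    /*
     * A masked event packs its fields as:
     *   bits 11:0   event_select (bits 11:8 are only valid on AMD)
     *   bits 19:12  mask
     *   bits 27:20  match
     *   bit  28     exclude
     *   bits 63:29  reserved, must be zero
     */
    __u64 ok = KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(0xD0, 0xFF, 0x83, false);

    /* A reserved bit is set: KVM_SET_PMU_EVENT_FILTER returns -EINVAL. */
    __u64 bad_rsvd = ok | (1ULL << 63);

    /* Event select bits 11:8 set: rejected on Intel, accepted on AMD. */
    __u64 bad_on_intel = KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(0x1D0, 0xFF, 0x83, false);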
Also, because validation was not being done prior to the introduction of masked events, only expect validation to fail when masked events are used. E.g. in the first test a filter event with all its bits set is accepted by KVM_SET_PMU_EVENT_FILTER when flags = 0. Signed-off-by: Aaron Lewis Reviewed-by: Jim Mattson --- .../kvm/x86_64/pmu_event_filter_test.c | 36 +++++++++++++++++++ 1 file changed, 36 insertions(+) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c index bd7054a53981..0750e2fa7a38 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c @@ -442,6 +442,39 @@ static bool use_amd_pmu(void) is_zen3(entry->eax)); } +static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events, + int nevents, uint32_t flags) +{ + struct kvm_pmu_event_filter *f; + int r; + + f = create_pmu_event_filter(events, nevents, KVM_PMU_EVENT_ALLOW, flags); + r = __vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, f); + free(f); + + return r; +} + +static void test_filter_ioctl(struct kvm_vcpu *vcpu) +{ + uint64_t e = ~0ul; + int r; + + /* + * Unfortunately having invalid bits set in event data is expected to + * pass when flags == 0 (bits other than eventsel+umask). + */ + r = run_filter_test(vcpu, &e, 1, 0); + TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing"); + + r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS); + TEST_ASSERT(r != 0, "Invalid PMU Event Filter is expected to fail"); + + e = KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(0xff, 0xff, 0xff, 0xf); + r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS); + TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing"); +} + int main(int argc, char *argv[]) { void (*guest_code)(void); @@ -452,6 +485,7 @@ int main(int argc, char *argv[]) setbuf(stdout, NULL); TEST_REQUIRE(kvm_has_cap(KVM_CAP_PMU_EVENT_FILTER)); + TEST_REQUIRE(kvm_has_cap(KVM_CAP_PMU_EVENT_MASKED_EVENTS)); TEST_REQUIRE(use_intel_pmu() || use_amd_pmu()); guest_code = use_intel_pmu() ? 
intel_guest_code : amd_guest_code; @@ -472,6 +506,8 @@ int main(int argc, char *argv[]) test_not_member_deny_list(vcpu); test_not_member_allow_list(vcpu); + test_filter_ioctl(vcpu); + kvm_vm_free(vm); test_pmu_config_disable(guest_code); From patchwork Tue Sep 20 17:46:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Aaron Lewis X-Patchwork-Id: 12982473 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0BE9CC6FA82 for ; Tue, 20 Sep 2022 17:46:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231229AbiITRqn (ORCPT ); Tue, 20 Sep 2022 13:46:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43396 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230178AbiITRqb (ORCPT ); Tue, 20 Sep 2022 13:46:31 -0400 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 38C316E2EA for ; Tue, 20 Sep 2022 10:46:23 -0700 (PDT) Received: by mail-pl1-x64a.google.com with SMTP id o1-20020a170902d4c100b00177f59a9889so2150136plg.13 for ; Tue, 20 Sep 2022 10:46:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=RWwplaiG8IEQDmWoHf67SSH3m+cS7wW+Jv8en9cvH28=; b=BpI4lzYn/KCWtOAQGF3qxdQstx4SyvIZIM168y+nvel7eHE4GCK4+a/OreICGqhbN7 BWagduGfhj6vvS4fZdcC1uizQo/oZL7KhRjM7BFIpjPjPqEzT2jOuMuH22lFVBDMkt1D VMttDup/phhy+4xly4pBrR2dv/zhrLgdJBGZlj40DQ8RQUNVWai/uNJAM2E1INvXDgZy PjjgYFnOb0WHse4f7Gdnjj/619H6s3kxIVrwWogvngTwwNvoGWkXDwQtg8DL9ZFgttY4 xZborYlm0GCgmyv03pakuZDrDvkuno438opOzUBsIhlCwpeqdxAlw2eEC9NHWT4S6vUr xPTQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=RWwplaiG8IEQDmWoHf67SSH3m+cS7wW+Jv8en9cvH28=; b=rUeLJ6fASHRlmtu2g3j2pHWb9Ijscr9nhoZXFjjuUSim0fazY19th8BD6kUlA9Sa3Z 3Ts7Ej4R4XmWCE+N16Fc14yX1S4dMd/0Vv78kLjTxKL8cVWFlHe4XWn6feaeDhfFluDK 7ARuR+OjTAIlUtXHdEr2jPVf78IMs4bfp7Y61F22lsX6cQqVN16npIvcuZ/0MFryZ7fl E+PFMya/SHtj61BnjZ2MS9FDZpIeYKzQ7WA+UyfRsqPtO8M6wqenyl+9BKHXP3GmrmdJ 6VxLmkXd3Um1vb10IuAajdWVR4RYQwjfiZmxjQwcsR6Na5db6coqs7bJygSjZHpkZF6r YvuQ== X-Gm-Message-State: ACrzQf2WlFrLiRwsyHnMNNvAjeXOoR1sQyf5d6SKKBajoQphFdTokt8P 4ZmppU7orBVodtXSPSCvanATBFRvELZiddTdL58877TdEOwRmAWBbFu1Vww97xSjd7y0/QyPGWH v8wHXjZJt9oGTOGEmwx8z05Q4MFWbHaW3teLUIiWY+tdY32Lkzgb2paVlA+w1F17qC9MI X-Google-Smtp-Source: AMsMyM5ccV2WiVT4qyuT8dzsPVAbjKCyxgbiJRgQkgC1gesGL9XnnlDmWqAf0JsooWvg5LN8TiMQ/XXaDYdZgKMI X-Received: from aaronlewis.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:2675]) (user=aaronlewis job=sendgmr) by 2002:aa7:8c4f:0:b0:54d:14d5:d0c9 with SMTP id e15-20020aa78c4f000000b0054d14d5d0c9mr15851171pfd.13.1663695982580; Tue, 20 Sep 2022 10:46:22 -0700 (PDT) Date: Tue, 20 Sep 2022 17:46:03 +0000 In-Reply-To: <20220920174603.302510-1-aaronlewis@google.com> Mime-Version: 1.0 References: <20220920174603.302510-1-aaronlewis@google.com> X-Mailer: git-send-email 2.37.3.968.ga6b4b080e4-goog Message-ID: <20220920174603.302510-8-aaronlewis@google.com> Subject: [PATCH v5 7/7] selftests: kvm/x86: Test masked events 
From: Aaron Lewis To: kvm@vger.kernel.org Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add testing to show that a pmu event can be filtered with a generalized match on it's unit mask. These tests set up test cases to demonstrate various ways of filtering a pmu event that has multiple unit mask values. It does this by setting up the filter in KVM with the masked events provided, then enabling three pmu counters in the guest. The test then verifies that the pmu counters agree with which counters should be counting and which counters should be filtered for both a sparse filter list and a dense filter list. Signed-off-by: Aaron Lewis Reviewed-by: Jim Mattson --- .../kvm/x86_64/pmu_event_filter_test.c | 342 +++++++++++++++++- 1 file changed, 341 insertions(+), 1 deletion(-) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c index 0750e2fa7a38..6cf11d82ad5b 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c @@ -442,6 +442,337 @@ static bool use_amd_pmu(void) is_zen3(entry->eax)); } +/* + * "MEM_INST_RETIRED.ALL_LOADS", "MEM_INST_RETIRED.ALL_STORES", and + * "MEM_INST_RETIRED.ANY" from https://perfmon-events.intel.com/ + * supported on Intel Xeon processors: + * - Sapphire Rapids, Ice Lake, Cascade Lake, Skylake. + */ +#define MEM_INST_RETIRED 0xD0 +#define MEM_INST_RETIRED_LOAD EVENT(MEM_INST_RETIRED, 0x81) +#define MEM_INST_RETIRED_STORE EVENT(MEM_INST_RETIRED, 0x82) +#define MEM_INST_RETIRED_LOAD_STORE EVENT(MEM_INST_RETIRED, 0x83) + +static bool supports_event_mem_inst_retired(void) +{ + uint32_t eax, ebx, ecx, edx; + + cpuid(1, &eax, &ebx, &ecx, &edx); + if (x86_family(eax) == 0x6) { + switch (x86_model(eax)) { + /* Sapphire Rapids */ + case 0x8F: + /* Ice Lake */ + case 0x6A: + /* Skylake */ + /* Cascade Lake */ + case 0x55: + return true; + } + } + + return false; +} + +static int num_gp_counters(void) +{ + const struct kvm_cpuid_entry2 *entry; + + entry = kvm_get_supported_cpuid_entry(0xa); + union cpuid10_eax eax = { .full = entry->eax }; + + return eax.split.num_counters; +} + +/* + * "LS Dispatch", from Processor Programming Reference + * (PPR) for AMD Family 17h Model 01h, Revision B1 Processors, + * Preliminary Processor Programming Reference (PPR) for AMD Family + * 17h Model 31h, Revision B0 Processors, and Preliminary Processor + * Programming Reference (PPR) for AMD Family 19h Model 01h, Revision + * B1 Processors Volume 1 of 2. 
+ */ +#define LS_DISPATCH 0x29 +#define LS_DISPATCH_LOAD EVENT(LS_DISPATCH, BIT(0)) +#define LS_DISPATCH_STORE EVENT(LS_DISPATCH, BIT(1)) +#define LS_DISPATCH_LOAD_STORE EVENT(LS_DISPATCH, BIT(2)) + +#define INCLUDE_MASKED_ENTRY(event_select, mask, match) \ + KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(event_select, mask, match, false) +#define EXCLUDE_MASKED_ENTRY(event_select, mask, match) \ + KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(event_select, mask, match, true) + +struct perf_counter { + union { + uint64_t raw; + struct { + uint64_t loads:22; + uint64_t stores:22; + uint64_t loads_stores:20; + }; + }; +}; + +static uint64_t masked_events_guest_test(uint32_t msr_base) +{ + uint64_t ld0, ld1, st0, st1, ls0, ls1; + struct perf_counter c; + int val; + + ld0 = rdmsr(msr_base + 0); + st0 = rdmsr(msr_base + 1); + ls0 = rdmsr(msr_base + 2); + + __asm__ __volatile__("movl $0, %[v];" + "movl %[v], %%eax;" + "incl %[v];" + : [v]"+m"(val) :: "eax"); + + ld1 = rdmsr(msr_base + 0); + st1 = rdmsr(msr_base + 1); + ls1 = rdmsr(msr_base + 2); + + c.loads = ld1 - ld0; + c.stores = st1 - st0; + c.loads_stores = ls1 - ls0; + + return c.raw; +} + +static void intel_masked_events_guest_code(void) +{ + uint64_t r; + + for (;;) { + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); + + wrmsr(MSR_P6_EVNTSEL0 + 0, ARCH_PERFMON_EVENTSEL_ENABLE | + ARCH_PERFMON_EVENTSEL_OS | MEM_INST_RETIRED_LOAD); + wrmsr(MSR_P6_EVNTSEL0 + 1, ARCH_PERFMON_EVENTSEL_ENABLE | + ARCH_PERFMON_EVENTSEL_OS | MEM_INST_RETIRED_STORE); + wrmsr(MSR_P6_EVNTSEL0 + 2, ARCH_PERFMON_EVENTSEL_ENABLE | + ARCH_PERFMON_EVENTSEL_OS | MEM_INST_RETIRED_LOAD_STORE); + + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x7); + + r = masked_events_guest_test(MSR_IA32_PMC0); + + GUEST_SYNC(r); + } +} + +static void amd_masked_events_guest_code(void) +{ + uint64_t r; + + for (;;) { + wrmsr(MSR_K7_EVNTSEL0, 0); + wrmsr(MSR_K7_EVNTSEL1, 0); + wrmsr(MSR_K7_EVNTSEL2, 0); + + wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE | + ARCH_PERFMON_EVENTSEL_OS | LS_DISPATCH_LOAD); + wrmsr(MSR_K7_EVNTSEL1, ARCH_PERFMON_EVENTSEL_ENABLE | + ARCH_PERFMON_EVENTSEL_OS | LS_DISPATCH_STORE); + wrmsr(MSR_K7_EVNTSEL2, ARCH_PERFMON_EVENTSEL_ENABLE | + ARCH_PERFMON_EVENTSEL_OS | LS_DISPATCH_LOAD_STORE); + + r = masked_events_guest_test(MSR_K7_PERFCTR0); + + GUEST_SYNC(r); + } +} + +static struct perf_counter run_masked_events_test(struct kvm_vcpu *vcpu, + const uint64_t masked_events[], + const int nmasked_events) +{ + struct kvm_pmu_event_filter *f; + struct perf_counter r; + + f = create_pmu_event_filter(masked_events, nmasked_events, + KVM_PMU_EVENT_ALLOW, + KVM_PMU_EVENT_FLAG_MASKED_EVENTS); + r.raw = test_with_filter(vcpu, f); + free(f); + + return r; +} + +/* Matches KVM_PMU_EVENT_FILTER_MAX_EVENTS in pmu.c */ +#define MAX_FILTER_EVENTS 300 +#define MAX_TEST_EVENTS 10 + +#define ALLOW_LOADS BIT(0) +#define ALLOW_STORES BIT(1) +#define ALLOW_LOADS_STORES BIT(2) + +struct masked_events_test { + uint64_t intel_events[MAX_TEST_EVENTS]; + uint64_t intel_event_end; + uint64_t amd_events[MAX_TEST_EVENTS]; + uint64_t amd_event_end; + const char *msg; + uint32_t flags; +}; + +/* + * These are the test cases for the masked events tests. + * + * For each test, the guest enables 3 PMU counters (loads, stores, + * loads + stores). The filter is then set in KVM with the masked events + * provided. The test then verifies that the counters agree with which + * ones should be counting and which ones should be filtered. 
+ */ +const struct masked_events_test test_cases[] = { + { + .intel_events = { + INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x81), + }, + .amd_events = { + INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(0)), + }, + .msg = "Only allow loads.", + .flags = ALLOW_LOADS, + }, { + .intel_events = { + INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x82), + }, + .amd_events = { + INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(1)), + }, + .msg = "Only allow stores.", + .flags = ALLOW_STORES, + }, { + .intel_events = { + INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x83), + }, + .amd_events = { + INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(2)), + }, + .msg = "Only allow loads + stores.", + .flags = ALLOW_LOADS_STORES, + }, { + .intel_events = { + INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0x7C, 0), + EXCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x83), + }, + .amd_events = { + INCLUDE_MASKED_ENTRY(LS_DISPATCH, ~(BIT(0) | BIT(1)), 0), + }, + .msg = "Only allow loads and stores.", + .flags = ALLOW_LOADS | ALLOW_STORES, + }, { + .intel_events = { + INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0x7C, 0), + EXCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x82), + }, + .amd_events = { + INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xF8, 0), + EXCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(1)), + }, + .msg = "Only allow loads and loads + stores.", + .flags = ALLOW_LOADS | ALLOW_LOADS_STORES + }, { + .intel_events = { + INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFE, 0x82), + }, + .amd_events = { + INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xF8, 0), + EXCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(0)), + }, + .msg = "Only allow stores and loads + stores.", + .flags = ALLOW_STORES | ALLOW_LOADS_STORES + }, { + .intel_events = { + INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0x7C, 0), + }, + .amd_events = { + INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xF8, 0), + }, + .msg = "Only allow loads, stores, and loads + stores.", + .flags = ALLOW_LOADS | ALLOW_STORES | ALLOW_LOADS_STORES + }, +}; + +static int append_test_events(const struct masked_events_test *test, + uint64_t *events, int nevents) +{ + const uint64_t *evts; + int i; + + evts = use_intel_pmu() ? test->intel_events : test->amd_events; + for (i = 0; i < MAX_TEST_EVENTS; i++) { + if (evts[i] == 0) + break; + + events[nevents + i] = evts[i]; + } + + return nevents + i; +} + +static bool bool_eq(bool a, bool b) +{ + return a == b; +} + +static void run_masked_events_tests(struct kvm_vcpu *vcpu, uint64_t *events, + int nevents) +{ + int ntests = ARRAY_SIZE(test_cases); + struct perf_counter c; + int i, n; + + for (i = 0; i < ntests; i++) { + const struct masked_events_test *test = &test_cases[i]; + + /* Do any test case events overflow MAX_TEST_EVENTS? 
*/ + assert(test->intel_event_end == 0); + assert(test->amd_event_end == 0); + + n = append_test_events(test, events, nevents); + + c = run_masked_events_test(vcpu, events, n); + TEST_ASSERT(bool_eq(c.loads, test->flags & ALLOW_LOADS) && + bool_eq(c.stores, test->flags & ALLOW_STORES) && + bool_eq(c.loads_stores, + test->flags & ALLOW_LOADS_STORES), + "%s loads: %u, stores: %u, loads + stores: %u", + test->msg, c.loads, c.stores, c.loads_stores); + } +} + +static void add_dummy_events(uint64_t *events, int nevents) +{ + int i; + + for (i = 0; i < nevents; i++) { + int event_select = i % 0xFF; + bool exclude = ((i % 4) == 0); + + if (event_select == MEM_INST_RETIRED || + event_select == LS_DISPATCH) + event_select++; + + events[i] = KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(event_select, 0, + 0, exclude); + } +} + +static void test_masked_events(struct kvm_vcpu *vcpu) +{ + int nevents = MAX_FILTER_EVENTS - MAX_TEST_EVENTS; + uint64_t events[MAX_FILTER_EVENTS]; + + /* Run the test cases against a sparse PMU event filter. */ + run_masked_events_tests(vcpu, events, 0); + + /* Run the test cases against a dense PMU event filter. */ + add_dummy_events(events, MAX_FILTER_EVENTS); + run_masked_events_tests(vcpu, events, nevents); +} + static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events, int nevents, uint32_t flags) { @@ -478,7 +809,7 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu) int main(int argc, char *argv[]) { void (*guest_code)(void); - struct kvm_vcpu *vcpu; + struct kvm_vcpu *vcpu, *vcpu2 = NULL; struct kvm_vm *vm; /* Tell stdout not to buffer its content */ @@ -506,6 +837,15 @@ int main(int argc, char *argv[]) test_not_member_deny_list(vcpu); test_not_member_allow_list(vcpu); + if (use_intel_pmu() && + supports_event_mem_inst_retired() && + num_gp_counters() >= 3) + vcpu2 = vm_vcpu_add(vm, 2, intel_masked_events_guest_code); + else if (use_amd_pmu()) + vcpu2 = vm_vcpu_add(vm, 2, amd_masked_events_guest_code); + + if (vcpu2) + test_masked_events(vcpu2); test_filter_ioctl(vcpu); kvm_vm_free(vm);
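
A minimal standalone sketch (not part of the patch) of the include/exclude
matching rule the test cases above rely on, assuming the masked-event
semantics this series describes: an entry matches a guest unit mask when
(unit_mask & mask) == match, and an allow-list filter permits the event only
if some include entry matches and no exclude entry matches.  The type and
helper names below (masked_entry, entry_matches, unit_mask_allowed) are
illustrative, not taken from the patch.

/* Sketch of the masked-event matching rule exercised by the test cases. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct masked_entry {
	uint8_t mask;
	uint8_t match;
	bool exclude;
};

static bool entry_matches(const struct masked_entry *e, uint8_t unit_mask)
{
	/* A masked entry matches when (unit_mask & mask) == match. */
	return (unit_mask & e->mask) == e->match;
}

static bool unit_mask_allowed(const struct masked_entry *entries, int n,
			      uint8_t unit_mask)
{
	bool included = false;
	int i;

	for (i = 0; i < n; i++) {
		if (!entry_matches(&entries[i], unit_mask))
			continue;
		if (entries[i].exclude)
			return false;	/* excludes take precedence */
		included = true;
	}

	return included;
}

int main(void)
{
	/* "Only allow loads and stores." -- the Intel case from the table above. */
	const struct masked_entry filter[] = {
		{ .mask = 0x7C, .match = 0x00, .exclude = false },
		{ .mask = 0xFF, .match = 0x83, .exclude = true },
	};

	/* Loads (0x81) and stores (0x82) pass; loads + stores (0x83) is excluded. */
	printf("0x81 -> %d\n", unit_mask_allowed(filter, 2, 0x81));	/* 1 */
	printf("0x82 -> %d\n", unit_mask_allowed(filter, 2, 0x82));	/* 1 */
	printf("0x83 -> %d\n", unit_mask_allowed(filter, 2, 0x83));	/* 0 */

	return 0;
}

Walking through that case: 0x81 & 0x7C == 0 and 0x82 & 0x7C == 0, so both
unit masks hit the include entry and miss the 0xFF/0x83 exclude, while 0x83
also hits the include but is then removed by the exclude, which is why the
expected flags for that test are ALLOW_LOADS | ALLOW_STORES.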