From patchwork Fri Oct 21 20:50:59 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13015447
Date: Fri, 21 Oct 2022 20:50:59 +0000
In-Reply-To: <20221021205105.1621014-1-aaronlewis@google.com>
References: <20221021205105.1621014-1-aaronlewis@google.com>
Message-ID: <20221021205105.1621014-2-aaronlewis@google.com>
Subject: [PATCH v6 1/7] kvm: x86/pmu: Correct the mask used in a pmu event filter lookup
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis
X-Mailing-List: kvm@vger.kernel.org

When checking if a pmu event the guest is attempting to program should
be filtered, only consider the event select + unit mask in that
decision.  Use an architecture-specific mask to mask out all other bits,
including bits 35:32 on Intel.  Those bits are not part of the event
select and should not be considered in that decision.

Fixes: 66bb8a065f5a ("KVM: x86: PMU Event Filter")
Signed-off-by: Aaron Lewis
---
 arch/x86/kvm/pmu.c           | 3 ++-
 arch/x86/kvm/pmu.h           | 2 ++
 arch/x86/kvm/svm/pmu.c       | 1 +
 arch/x86/kvm/vmx/pmu_intel.c | 1 +
 4 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index d9b9a0f0db17..f1615aed2edb 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -273,7 +273,8 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 		goto out;
 
 	if (pmc_is_gp(pmc)) {
-		key = pmc->eventsel & AMD64_RAW_EVENT_MASK_NB;
+		key = pmc->eventsel & (kvm_pmu_ops.EVENTSEL_EVENT |
+				       ARCH_PERFMON_EVENTSEL_UMASK);
 		if (bsearch(&key, filter->events, filter->nevents,
 			    sizeof(__u64), cmp_u64))
 			allow_event = filter->action == KVM_PMU_EVENT_ALLOW;
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 5cc5721f260b..aa1799b1562a 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -40,6 +40,8 @@ struct kvm_pmu_ops {
 	void (*reset)(struct kvm_vcpu *vcpu);
 	void (*deliver_pmi)(struct kvm_vcpu *vcpu);
 	void (*cleanup)(struct kvm_vcpu *vcpu);
+
+	const u64 EVENTSEL_EVENT;
 };
 
 void kvm_pmu_ops_update(const struct kvm_pmu_ops *pmu_ops);
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index b68956299fa8..8af8f4d0336c 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -228,4 +228,5 @@ struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.refresh = amd_pmu_refresh,
 	.init = amd_pmu_init,
 	.reset = amd_pmu_reset,
+	.EVENTSEL_EVENT = AMD64_EVENTSEL_EVENT,
 };
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 25b70a85bef5..57d006410ae4 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -811,4 +811,5 @@ struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.reset = intel_pmu_reset,
 	.deliver_pmi = intel_pmu_deliver_pmi,
 	.cleanup = intel_pmu_cleanup,
+	.EVENTSEL_EVENT = ARCH_PERFMON_EVENTSEL_EVENT,
 };
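
[Editor's illustration, not part of the patch: a minimal sketch of why the
per-arch mask matters.  The mask values mirror the kernel's
arch/x86/include/asm/perf_event.h; the eventsel value is hypothetical.]

	#include <stdio.h>
	#include <stdint.h>

	#define ARCH_PERFMON_EVENTSEL_EVENT  0x000000FFULL  /* bits 7:0  */
	#define ARCH_PERFMON_EVENTSEL_UMASK  0x0000FF00ULL  /* bits 15:8 */
	#define AMD64_EVENTSEL_EVENT    ((0x0FULL << 32) | ARCH_PERFMON_EVENTSEL_EVENT)
	#define AMD64_RAW_EVENT_MASK_NB (AMD64_EVENTSEL_EVENT | ARCH_PERFMON_EVENTSEL_UMASK)

	int main(void)
	{
		uint64_t eventsel = (1ULL << 32) | 0xC0; /* hypothetical guest value */
		uint64_t old_key = eventsel & AMD64_RAW_EVENT_MASK_NB;
		uint64_t new_key = eventsel & (ARCH_PERFMON_EVENTSEL_EVENT |
					       ARCH_PERFMON_EVENTSEL_UMASK);

		/* old_key keeps bit 32 even on Intel, so bsearch can never match
		 * a filter entry for event 0xC0; new_key drops bits 35:32. */
		printf("old=%#llx new=%#llx\n", (unsigned long long)old_key,
		       (unsigned long long)new_key);
		return 0;
	}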
From patchwork Fri Oct 21 20:51:00 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13015448
Date: Fri, 21 Oct 2022 20:51:00 +0000
In-Reply-To: <20221021205105.1621014-1-aaronlewis@google.com>
References: <20221021205105.1621014-1-aaronlewis@google.com>
Message-ID: <20221021205105.1621014-3-aaronlewis@google.com>
Subject: [PATCH v6 2/7] kvm: x86/pmu: Remove impossible events from the pmu event filter
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis
X-Mailing-List: kvm@vger.kernel.org

If it's not possible for an event in the pmu event filter to match a
pmu event being programmed by the guest, it's pointless to have it in
the list.  Opt for a shorter list by removing those events.

Because this is established uAPI, the pmu event filter can't outright
reject these events as garbage and return an error.  Instead, play nice
and remove them from the list.

Also, opportunistically rewrite the comment when the filter is set to
clarify that it guards against *all* TOCTOU attacks on the verified
data.
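
[Editor's illustration, not part of the patch: an example of an
"impossible" event, assuming Intel's masks from perf_event.h.
ARCH_PERFMON_EVENTSEL_INV is bit 23, outside the event select + unit
mask, so an entry carrying it can never match a lookup key and is
silently dropped.]

	#include <stdbool.h>
	#include <stdint.h>

	/* true if remove_impossible_events() would drop this entry (Intel) */
	static bool is_impossible(uint64_t event)
	{
		const uint64_t allowed = 0xFFULL	/* event select, bits 7:0 */
				       | 0xFF00ULL;	/* unit mask, bits 15:8   */

		return (event & ~allowed) != 0;
	}

	/* is_impossible((1ULL << 23) | 0x08C0) -> true: the INV bit is set */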
Signed-off-by: Aaron Lewis
---
 arch/x86/kvm/pmu.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index f1615aed2edb..a79f0d5ecaf0 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -575,6 +575,21 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
 }
 EXPORT_SYMBOL_GPL(kvm_pmu_trigger_event);
 
+static void remove_impossible_events(struct kvm_pmu_event_filter *filter)
+{
+	int i, j;
+
+	for (i = 0, j = 0; i < filter->nevents; i++) {
+		if (filter->events[i] & ~(kvm_pmu_ops.EVENTSEL_EVENT |
+					  ARCH_PERFMON_EVENTSEL_UMASK))
+			continue;
+
+		filter->events[j++] = filter->events[i];
+	}
+
+	filter->nevents = j;
+}
+
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_pmu_event_filter tmp, *filter;
@@ -603,9 +618,11 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 	if (copy_from_user(filter, argp, size))
 		goto cleanup;
 
-	/* Ensure nevents can't be changed between the user copies. */
+	/* Restore the verified state to guard against TOCTOU attacks. */
 	*filter = tmp;
 
+	remove_impossible_events(filter);
+
 	/*
 	 * Sort the in-kernel list so that we can search it with bsearch.
 	 */
From patchwork Fri Oct 21 20:51:01 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13015449
Date: Fri, 21 Oct 2022 20:51:01 +0000
In-Reply-To: <20221021205105.1621014-1-aaronlewis@google.com>
References: <20221021205105.1621014-1-aaronlewis@google.com>
Message-ID: <20221021205105.1621014-4-aaronlewis@google.com>
Subject: [PATCH v6 3/7] kvm: x86/pmu: prepare the pmu event filter for masked events
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis
X-Mailing-List: kvm@vger.kernel.org

Refactor check_pmu_event_filter() in preparation for masked events.

No functional changes intended.

Signed-off-by: Aaron Lewis
---
 arch/x86/kvm/pmu.c | 56 +++++++++++++++++++++++++++-------------------
 1 file changed, 33 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index a79f0d5ecaf0..64172e082404 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -257,41 +257,51 @@ static int cmp_u64(const void *pa, const void *pb)
 	return (a > b) - (a < b);
 }
 
+static u64 *find_filter_entry(struct kvm_pmu_event_filter *filter, u64 key)
+{
+	return bsearch(&key, filter->events, filter->nevents,
+		       sizeof(filter->events[0]), cmp_u64);
+}
+
+static bool is_gp_event_allowed(struct kvm_pmu_event_filter *filter, u64 eventsel)
+{
+	if (find_filter_entry(filter, eventsel & (kvm_pmu_ops.EVENTSEL_EVENT |
+						  ARCH_PERFMON_EVENTSEL_UMASK)))
+		return filter->action == KVM_PMU_EVENT_ALLOW;
+
+	return filter->action == KVM_PMU_EVENT_DENY;
+}
+
+static bool is_fixed_event_allowed(struct kvm_pmu_event_filter *filter, int idx)
+{
+	int fixed_idx = idx - INTEL_PMC_IDX_FIXED;
+
+	if (filter->action == KVM_PMU_EVENT_DENY &&
+	    test_bit(fixed_idx, (ulong *)&filter->fixed_counter_bitmap))
+		return false;
+	if (filter->action == KVM_PMU_EVENT_ALLOW &&
+	    !test_bit(fixed_idx, (ulong *)&filter->fixed_counter_bitmap))
+		return false;
+
+	return true;
+}
+
 static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu_event_filter *filter;
 	struct kvm *kvm = pmc->vcpu->kvm;
-	bool allow_event = true;
-	__u64 key;
-	int idx;
 
 	if (!static_call(kvm_x86_pmu_hw_event_available)(pmc))
 		return false;
 
 	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
 	if (!filter)
-		goto out;
+		return true;
 
-	if (pmc_is_gp(pmc)) {
-		key = pmc->eventsel & (kvm_pmu_ops.EVENTSEL_EVENT |
-				       ARCH_PERFMON_EVENTSEL_UMASK);
-		if (bsearch(&key, filter->events, filter->nevents,
-			    sizeof(__u64), cmp_u64))
-			allow_event = filter->action == KVM_PMU_EVENT_ALLOW;
-		else
-			allow_event = filter->action == KVM_PMU_EVENT_DENY;
-	} else {
-		idx = pmc->idx - INTEL_PMC_IDX_FIXED;
-		if (filter->action == KVM_PMU_EVENT_DENY &&
-		    test_bit(idx, (ulong *)&filter->fixed_counter_bitmap))
-			allow_event = false;
-		if (filter->action == KVM_PMU_EVENT_ALLOW &&
-		    !test_bit(idx, (ulong *)&filter->fixed_counter_bitmap))
-			allow_event = false;
-	}
+	if (pmc_is_gp(pmc))
+		return is_gp_event_allowed(filter, pmc->eventsel);
 
-out:
-	return allow_event;
+	return is_fixed_event_allowed(filter, pmc->idx);
 }
 
 void reprogram_counter(struct kvm_pmc *pmc)
From patchwork Fri Oct 21 20:51:02 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13015450
Date: Fri, 21 Oct 2022 20:51:02 +0000
In-Reply-To: <20221021205105.1621014-1-aaronlewis@google.com>
References: <20221021205105.1621014-1-aaronlewis@google.com>
Message-ID: <20221021205105.1621014-5-aaronlewis@google.com>
Subject: [PATCH v6 4/7] kvm: x86/pmu: Introduce masked events to the pmu event filter
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis
X-Mailing-List: kvm@vger.kernel.org

When building a list of filter events, it can sometimes be a challenge
to fit all the events needed to adequately restrict the guest into the
limited space available in the pmu event filter.  This stems from the
fact that the pmu event filter requires each event (i.e. event select +
unit mask) be listed, when the intention might be to restrict the
event select altogether, regardless of its unit mask.  Instead of
increasing the number of filter events in the pmu event filter, add a
new encoding that is able to do a more generalized match on the unit
mask.

Introduce masked events as another encoding the pmu event filter
understands.  Masked events have three fields: mask, match, and
exclude.  When filtering based on these events, the mask is applied to
the guest's unit mask to see if it matches the match value (i.e.
umask & mask == match).  The exclude bit can then be used to exclude
events from that match.  E.g. for a given event select, if it's easier
to say which unit mask values shouldn't be filtered, a masked event can
be set up to match all possible unit mask values, then another masked
event can be set up to match the unit mask values that shouldn't be
filtered.

Userspace can query whether this feature exists by checking for the
capability KVM_CAP_PMU_EVENT_MASKED_EVENTS.  This feature is enabled by
setting the flags field in the pmu event filter to
KVM_PMU_EVENT_FLAG_MASKED_EVENTS.  Events can be encoded by using
KVM_PMU_ENCODE_MASKED_ENTRY().  It is an error to have a bit set
outside the valid bits for a masked event, and calls to
KVM_SET_PMU_EVENT_FILTER will return -EINVAL in such cases, including
the high bits of the event select (35:32) if called on Intel.

With these updates, the filter matching code matches on a common event
type.  Masked events were flexible enough to handle both event types,
so they were used as that common type.  This changes how guest events
get filtered: regardless of the type of event used in the uAPI, it will
be converted to a masked event.  Because of this there could be a
slight performance hit, because instead of matching the filter event
with a lookup on event select + unit mask, it does a lookup on event
select and then walks the unit masks to find the match.  This shouldn't
be a big problem because I would expect the set of common event selects
to be small, and if they aren't, the set can likely be reduced by using
masked events to generalize the unit mask.  Using one type of event
when filtering guest events allows for a common code path to be used.

Signed-off-by: Aaron Lewis
---
 Documentation/virt/kvm/api.rst  |  77 +++++++++++--
 arch/x86/include/asm/kvm_host.h |  14 ++-
 arch/x86/include/uapi/asm/kvm.h |  29 +++++
 arch/x86/kvm/pmu.c              | 197 +++++++++++++++++++++++++++-----
 arch/x86/kvm/x86.c              |   1 +
 include/uapi/linux/kvm.h        |   1 +
 6 files changed, 281 insertions(+), 38 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index eee9f857a986..0cf07fbe3d78 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -5031,6 +5031,15 @@ using this ioctl.
 :Parameters: struct kvm_pmu_event_filter (in)
 :Returns: 0 on success, -1 on error
 
+Errors:
+
+  ======     ============================================================
+  EFAULT     args[0] cannot be accessed
+  EINVAL     args[0] contains invalid data in the filter or filter events
+  E2BIG      nevents is too large
+  EBUSY      not enough memory to allocate the filter
+  ======     ============================================================
+
 ::
 
   struct kvm_pmu_event_filter {
@@ -5042,14 +5051,68 @@ using this ioctl.
 	__u64 events[0];
   };
 
-This ioctl restricts the set of PMU events that the guest can program.
-The argument holds a list of events which will be allowed or denied.
-The eventsel+umask of each event the guest attempts to program is compared
-against the events field to determine whether the guest should have access.
-The events field only controls general purpose counters; fixed purpose
-counters are controlled by the fixed_counter_bitmap.
+This ioctl restricts the set of PMU events the guest can program by limiting
+which event select and unit mask combinations are permitted.
+
+The argument holds a list of filter events which will be allowed or denied.
+
+Filter events only control general purpose counters; fixed purpose counters
+are controlled by the fixed_counter_bitmap.
+
+Valid values for 'flags'::
+
+``0``
+
+To use this mode, clear the 'flags' field.
+
+In this mode each event will contain an event select + unit mask.
+
+When the guest attempts to program the PMU the guest's event select +
+unit mask is compared against the filter events to determine whether the
+guest should have access.
+
+``KVM_PMU_EVENT_FLAG_MASKED_EVENTS``
+:Capability: KVM_CAP_PMU_EVENT_MASKED_EVENTS
+
+In this mode each filter event will contain an event select, mask, match, and
+exclude value.  To encode a masked event use::
+
+  KVM_PMU_ENCODE_MASKED_ENTRY()
+
+An encoded event will follow this layout::
+
+  Bits   Description
+  ----   -----------
+  7:0    event select (low bits)
+  15:8   umask match
+  31:16  unused
+  35:32  event select (high bits)
+  54:36  unused
+  55     exclude bit
+  63:56  umask mask
+
+When the guest attempts to program the PMU, these steps are followed in
+determining if the guest should have access:
+ 1. Match the event select from the guest against the filter events.
+ 2. If a match is found, match the guest's unit mask to the mask and match
+    values of the included filter events.
+    I.e. (unit mask & mask) == match && !exclude.
+ 3. If a match is found, match the guest's unit mask to the mask and match
+    values of the excluded filter events.
+    I.e. (unit mask & mask) == match && exclude.
+ 4.
+   a. If an included match is found and an excluded match is not found, filter
+      the event.
+   b. For everything else, do not filter the event.
+ 5.
+   a. If the event is filtered and it's an allow list, allow the guest to
+      program the event.
+   b. If the event is filtered and it's a deny list, do not allow the guest to
+      program the event.
 
-No flags are defined yet, the field must be zero.
+When setting a new pmu event filter, -EINVAL will be returned if any of the
+unused fields are set or if any of the high bits (35:32) in the event
+select are set when called on Intel.
 
 Valid values for 'action'::
 
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7551b6f9c31c..c0ec5a863dfb 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1072,6 +1072,18 @@ struct kvm_x86_msr_filter {
 	struct msr_bitmap_range ranges[16];
 };
 
+struct kvm_x86_pmu_event_filter {
+	__u32 action;
+	__u32 nevents;
+	__u32 fixed_counter_bitmap;
+	__u32 flags;
+	__u32 nr_includes;
+	__u32 nr_excludes;
+	__u64 *includes;
+	__u64 *excludes;
+	__u64 events[];
+};
+
 enum kvm_apicv_inhibit {
 
 	/********************************************************************/
@@ -1266,7 +1278,7 @@ struct kvm_arch {
 	/* Guest can access the SGX PROVISIONKEY. */
 	bool sgx_provisioning_allowed;
 
-	struct kvm_pmu_event_filter __rcu *pmu_event_filter;
+	struct kvm_x86_pmu_event_filter __rcu *pmu_event_filter;
 	struct task_struct *nx_lpage_recovery_thread;
 
 #ifdef CONFIG_X86_64
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 46de10a809ec..4ea53188136c 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -528,6 +528,35 @@ struct kvm_pmu_event_filter {
 #define KVM_PMU_EVENT_ALLOW 0
 #define KVM_PMU_EVENT_DENY 1
 
+#define KVM_PMU_EVENT_FLAG_MASKED_EVENTS	BIT(0)
+#define KVM_PMU_EVENT_FLAGS_VALID_MASK		(KVM_PMU_EVENT_FLAG_MASKED_EVENTS)
+
+/*
+ * Masked event layout.
+ * Bits   Description
+ * ----   -----------
+ * 7:0    event select (low bits)
+ * 15:8   umask match
+ * 31:16  unused
+ * 35:32  event select (high bits)
+ * 54:36  unused
+ * 55     exclude bit
+ * 63:56  umask mask
+ */
+
+#define KVM_PMU_ENCODE_MASKED_ENTRY(event_select, mask, match, exclude) \
+	(((event_select) & 0xFFULL) | (((event_select) & 0XF00ULL) << 24) | \
+	(((mask) & 0xFFULL) << 56) | \
+	(((match) & 0xFFULL) << 8) | \
+	((__u64)(!!(exclude)) << 55))
+
+#define KVM_PMU_MASKED_ENTRY_EVENT_SELECT \
+	(GENMASK_ULL(7, 0) | GENMASK_ULL(35, 32))
+#define KVM_PMU_MASKED_ENTRY_UMASK_MASK		(GENMASK_ULL(63, 56))
+#define KVM_PMU_MASKED_ENTRY_UMASK_MATCH	(GENMASK_ULL(15, 8))
+#define KVM_PMU_MASKED_ENTRY_EXCLUDE		(BIT_ULL(55))
+#define KVM_PMU_MASKED_ENTRY_UMASK_MASK_SHIFT	(56)
+
 /* for KVM_{GET,SET,HAS}_DEVICE_ATTR */
 #define KVM_VCPU_TSC_CTRL 0 /* control group for the timestamp counter (TSC) */
 #define KVM_VCPU_TSC_OFFSET 0 /* attribute for the TSC offset */
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 64172e082404..ea478599354d 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -249,30 +249,99 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
 	return true;
 }
 
-static int cmp_u64(const void *pa, const void *pb)
+static int filter_cmp(const void *pa, const void *pb, u64 mask)
 {
-	u64 a = *(u64 *)pa;
-	u64 b = *(u64 *)pb;
+	u64 a = *(u64 *)pa & mask;
+	u64 b = *(u64 *)pb & mask;
 
 	return (a > b) - (a < b);
 }
 
-static u64 *find_filter_entry(struct kvm_pmu_event_filter *filter, u64 key)
+
+static int filter_sort_cmp(const void *pa, const void *pb)
+{
+	return filter_cmp(pa, pb, (KVM_PMU_MASKED_ENTRY_EVENT_SELECT |
+				   KVM_PMU_MASKED_ENTRY_EXCLUDE));
+}
+
+/*
+ * For the event filter, searching is done on the 'includes' list and
+ * 'excludes' list separately rather than on the 'events' list (which
+ * has both).  As a result the exclude bit can be ignored.
+ */
+static int filter_event_cmp(const void *pa, const void *pb)
+{
+	return filter_cmp(pa, pb, (KVM_PMU_MASKED_ENTRY_EVENT_SELECT));
+}
+
+static int find_filter_index(u64 *events, u64 nevents, u64 key)
+{
+	u64 *fe = bsearch(&key, events, nevents, sizeof(events[0]),
+			  filter_event_cmp);
+
+	if (!fe)
+		return -1;
+
+	return fe - events;
+}
+
+static bool is_filter_entry_match(u64 filter_event, u64 umask)
+{
+	u64 mask = filter_event >> (KVM_PMU_MASKED_ENTRY_UMASK_MASK_SHIFT - 8);
+	u64 match = filter_event & KVM_PMU_MASKED_ENTRY_UMASK_MATCH;
+
+	BUILD_BUG_ON((KVM_PMU_ENCODE_MASKED_ENTRY(0, 0xff, 0, false) >>
+		      (KVM_PMU_MASKED_ENTRY_UMASK_MASK_SHIFT - 8)) !=
+		     ARCH_PERFMON_EVENTSEL_UMASK);
+
+	return (umask & mask) == match;
+}
+
+static bool filter_contains_match(u64 *events, u64 nevents, u64 eventsel)
 {
-	return bsearch(&key, filter->events, filter->nevents,
-		       sizeof(filter->events[0]), cmp_u64);
+	u64 event_select = eventsel & kvm_pmu_ops.EVENTSEL_EVENT;
+	u64 umask = eventsel & ARCH_PERFMON_EVENTSEL_UMASK;
+	int i, index;
+
+	index = find_filter_index(events, nevents, event_select);
+	if (index < 0)
+		return false;
+
+	/*
+	 * Entries are sorted by the event select.  Walk the list in both
+	 * directions to process all entries with the targeted event select.
+	 */
+	for (i = index; i < nevents; i++) {
+		if (filter_event_cmp(&events[i], &event_select))
+			break;
+
+		if (is_filter_entry_match(events[i], umask))
+			return true;
+	}
+
+	for (i = index - 1; i >= 0; i--) {
+		if (filter_event_cmp(&events[i], &event_select))
+			break;
+
+		if (is_filter_entry_match(events[i], umask))
+			return true;
+	}
+
+	return false;
 }
 
-static bool is_gp_event_allowed(struct kvm_pmu_event_filter *filter, u64 eventsel)
+static bool is_gp_event_allowed(struct kvm_x86_pmu_event_filter *f,
+				u64 eventsel)
 {
-	if (find_filter_entry(filter, eventsel & (kvm_pmu_ops.EVENTSEL_EVENT |
-						  ARCH_PERFMON_EVENTSEL_UMASK)))
-		return filter->action == KVM_PMU_EVENT_ALLOW;
+	if (filter_contains_match(f->includes, f->nr_includes, eventsel) &&
+	    !filter_contains_match(f->excludes, f->nr_excludes, eventsel))
+		return f->action == KVM_PMU_EVENT_ALLOW;
 
-	return filter->action == KVM_PMU_EVENT_DENY;
+	return f->action == KVM_PMU_EVENT_DENY;
 }
 
-static bool is_fixed_event_allowed(struct kvm_pmu_event_filter *filter, int idx)
+static bool is_fixed_event_allowed(struct kvm_x86_pmu_event_filter *filter,
+				   int idx)
 {
 	int fixed_idx = idx - INTEL_PMC_IDX_FIXED;
 
@@ -288,7 +357,7 @@ static bool is_fixed_event_allowed(struct kvm_pmu_event_filter *filter, int idx)
 
 static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 {
-	struct kvm_pmu_event_filter *filter;
+	struct kvm_x86_pmu_event_filter *filter;
 	struct kvm *kvm = pmc->vcpu->kvm;
 
 	if (!static_call(kvm_x86_pmu_hw_event_available)(pmc))
@@ -585,58 +654,126 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
 }
 EXPORT_SYMBOL_GPL(kvm_pmu_trigger_event);
 
-static void remove_impossible_events(struct kvm_pmu_event_filter *filter)
+static bool is_masked_filter_valid(const struct kvm_x86_pmu_event_filter *filter)
+{
+	u64 mask = kvm_pmu_ops.EVENTSEL_EVENT |
+		   KVM_PMU_MASKED_ENTRY_UMASK_MASK |
+		   KVM_PMU_MASKED_ENTRY_UMASK_MATCH |
+		   KVM_PMU_MASKED_ENTRY_EXCLUDE;
+	int i;
+
+	for (i = 0; i < filter->nevents; i++) {
+		if (filter->events[i] & ~mask)
+			return false;
+	}
+
+	return true;
+}
+
+static void convert_to_masked_filter(struct kvm_x86_pmu_event_filter *filter)
 {
 	int i, j;
 
 	for (i = 0, j = 0; i < filter->nevents; i++) {
+		/*
+		 * Skip events that are impossible to match against a guest
+		 * event.  When filtering, only the event select + unit mask
+		 * of the guest event is used.  To maintain backwards
+		 * compatibility, impossible filters can't be rejected :-(
+		 */
 		if (filter->events[i] & ~(kvm_pmu_ops.EVENTSEL_EVENT |
 					  ARCH_PERFMON_EVENTSEL_UMASK))
 			continue;
-
-		filter->events[j++] = filter->events[i];
+		/*
+		 * Convert userspace events to a common in-kernel event so
+		 * only one code path is needed to support both events.  For
+		 * the in-kernel events use masked events because they are
+		 * flexible enough to handle both cases.  To convert to masked
+		 * events all that's needed is to add an "all ones" umask_mask,
+		 * (unmasked filter events don't support EXCLUDE).
+		 */
+		filter->events[j++] = filter->events[i] |
+				      (0xFFULL << KVM_PMU_MASKED_ENTRY_UMASK_MASK_SHIFT);
 	}
 
 	filter->nevents = j;
 }
 
+static int prepare_filter_lists(struct kvm_x86_pmu_event_filter *filter)
+{
+	int i;
+
+	if (!(filter->flags & KVM_PMU_EVENT_FLAG_MASKED_EVENTS))
+		convert_to_masked_filter(filter);
+	else if (!is_masked_filter_valid(filter))
+		return -EINVAL;
+
+	/*
+	 * Sort entries by event select and includes vs. excludes so that all
+	 * entries for a given event select can be processed efficiently during
+	 * filtering.  The EXCLUDE flag uses a more significant bit than the
+	 * event select, and so the sorted list is also effectively split into
+	 * includes and excludes sub-lists.
+	 */
+	sort(&filter->events, filter->nevents, sizeof(filter->events[0]),
+	     filter_sort_cmp, NULL);
+
+	i = filter->nevents;
+	/* Find the first EXCLUDE event (only supported for masked events). */
+	if (filter->flags & KVM_PMU_EVENT_FLAG_MASKED_EVENTS) {
+		for (i = 0; i < filter->nevents; i++) {
+			if (filter->events[i] & KVM_PMU_MASKED_ENTRY_EXCLUDE)
+				break;
+		}
+	}
+
+	filter->nr_includes = i;
+	filter->nr_excludes = filter->nevents - filter->nr_includes;
+	filter->includes = filter->events;
+	filter->excludes = filter->events + filter->nr_includes;
+
+	return 0;
+}
+
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 {
-	struct kvm_pmu_event_filter tmp, *filter;
+	struct kvm_pmu_event_filter __user *user_filter = argp;
+	struct kvm_x86_pmu_event_filter *filter;
+	struct kvm_pmu_event_filter tmp;
 	size_t size;
 	int r;
 
-	if (copy_from_user(&tmp, argp, sizeof(tmp)))
+	if (copy_from_user(&tmp, user_filter, sizeof(tmp)))
 		return -EFAULT;
 
 	if (tmp.action != KVM_PMU_EVENT_ALLOW &&
 	    tmp.action != KVM_PMU_EVENT_DENY)
 		return -EINVAL;
 
-	if (tmp.flags != 0)
+	if (tmp.flags & ~KVM_PMU_EVENT_FLAGS_VALID_MASK)
 		return -EINVAL;
 
 	if (tmp.nevents > KVM_PMU_EVENT_FILTER_MAX_EVENTS)
 		return -E2BIG;
 
 	size = struct_size(filter, events, tmp.nevents);
-	filter = kmalloc(size, GFP_KERNEL_ACCOUNT);
+	filter = kzalloc(size, GFP_KERNEL_ACCOUNT);
 	if (!filter)
 		return -ENOMEM;
 
+	filter->action = tmp.action;
+	filter->nevents = tmp.nevents;
+	filter->fixed_counter_bitmap = tmp.fixed_counter_bitmap;
+	filter->flags = tmp.flags;
+
 	r = -EFAULT;
-	if (copy_from_user(filter, argp, size))
+	if (copy_from_user(filter->events, user_filter->events,
+			   sizeof(filter->events[0]) * filter->nevents))
 		goto cleanup;
 
-	/* Restore the verified state to guard against TOCTOU attacks. */
-	*filter = tmp;
-
-	remove_impossible_events(filter);
-
-	/*
-	 * Sort the in-kernel list so that we can search it with bsearch.
-	 */
-	sort(&filter->events, filter->nevents, sizeof(__u64), cmp_u64, NULL);
+	r = prepare_filter_lists(filter);
+	if (r)
+		goto cleanup;
 
 	mutex_lock(&kvm->lock);
 	filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4bd5f8a751de..4aa667103928 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4388,6 +4388,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_SPLIT_IRQCHIP:
 	case KVM_CAP_IMMEDIATE_EXIT:
 	case KVM_CAP_PMU_EVENT_FILTER:
+	case KVM_CAP_PMU_EVENT_MASKED_EVENTS:
 	case KVM_CAP_GET_MSR_FEATURES:
 	case KVM_CAP_MSR_PLATFORM_INFO:
 	case KVM_CAP_EXCEPTION_PAYLOAD:
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 0d5d4419139a..b4650dc223a1 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1178,6 +1178,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_S390_ZPCI_OP 221
 #define KVM_CAP_S390_CPU_TOPOLOGY 222
 #define KVM_CAP_DIRTY_LOG_RING_ACQ_REL 223
+#define KVM_CAP_PMU_EVENT_MASKED_EVENTS 224
 
 #ifdef KVM_CAP_IRQ_ROUTING
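
[Editor's illustration, not part of the patch: rough userspace usage of
the new flag.  vm_fd is assumed to be an existing KVM VM descriptor,
the macros exist only once this series is applied, and error handling
is elided.]

	#include <linux/kvm.h>
	#include <stdlib.h>
	#include <sys/ioctl.h>

	/* Allow only unit mask 0x81 of event select 0xD0 on the guest PMU. */
	static int set_masked_filter(int vm_fd)
	{
		size_t sz = sizeof(struct kvm_pmu_event_filter) + sizeof(__u64);
		struct kvm_pmu_event_filter *f = calloc(1, sz);
		int r;

		f->action = KVM_PMU_EVENT_ALLOW;
		f->flags = KVM_PMU_EVENT_FLAG_MASKED_EVENTS;
		f->nevents = 1;
		f->events[0] = KVM_PMU_ENCODE_MASKED_ENTRY(0xD0, 0xFF, 0x81, false);

		r = ioctl(vm_fd, KVM_SET_PMU_EVENT_FILTER, f);
		free(f);
		return r;
	}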
From patchwork Fri Oct 21 20:51:03 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13015451
Date: Fri, 21 Oct 2022 20:51:03 +0000
In-Reply-To: <20221021205105.1621014-1-aaronlewis@google.com>
References: <20221021205105.1621014-1-aaronlewis@google.com>
Message-ID: <20221021205105.1621014-6-aaronlewis@google.com>
Subject: [PATCH v6 5/7] selftests: kvm/x86: Add flags when creating a pmu event filter
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis
X-Mailing-List: kvm@vger.kernel.org

Now that the flags field can be non-zero, pass it in when creating a
pmu event filter.  This is needed in preparation for testing masked
events.

No functional change intended.

Signed-off-by: Aaron Lewis
---
 .../testing/selftests/kvm/x86_64/pmu_event_filter_test.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index ea4e259a1e2e..bd7054a53981 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -221,14 +221,15 @@ static struct kvm_pmu_event_filter *alloc_pmu_event_filter(uint32_t nevents)
 
 
 static struct kvm_pmu_event_filter *
-create_pmu_event_filter(const uint64_t event_list[],
-			int nevents, uint32_t action)
+create_pmu_event_filter(const uint64_t event_list[], int nevents,
+			uint32_t action, uint32_t flags)
 {
 	struct kvm_pmu_event_filter *f;
 	int i;
 
 	f = alloc_pmu_event_filter(nevents);
 	f->action = action;
+	f->flags = flags;
 	for (i = 0; i < nevents; i++)
 		f->events[i] = event_list[i];
 
@@ -239,7 +240,7 @@ static struct kvm_pmu_event_filter *event_filter(uint32_t action)
 {
 	return create_pmu_event_filter(event_list,
 				       ARRAY_SIZE(event_list),
-				       action);
+				       action, 0);
 }
 
 /*
@@ -286,7 +287,7 @@ static void test_amd_deny_list(struct kvm_vcpu *vcpu)
 	struct kvm_pmu_event_filter *f;
 	uint64_t count;
 
-	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY);
+	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0);
 	count = test_with_filter(vcpu, f);
 
 	free(f);
From patchwork Fri Oct 21 20:51:04 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13015452
Date: Fri, 21 Oct 2022 20:51:04 +0000
In-Reply-To: <20221021205105.1621014-1-aaronlewis@google.com>
References: <20221021205105.1621014-1-aaronlewis@google.com>
Message-ID: <20221021205105.1621014-7-aaronlewis@google.com>
Subject: [PATCH v6 6/7] selftests: kvm/x86: Add testing for KVM_SET_PMU_EVENT_FILTER
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis
X-Mailing-List: kvm@vger.kernel.org

Test that masked events are not using invalid bits, and if they are,
ensure the pmu event filter is not accepted by
KVM_SET_PMU_EVENT_FILTER.  The only valid bits that can be used for
masked events are set when using KVM_PMU_ENCODE_MASKED_ENTRY(), with
one exception: if any of the high bits (35:32) of the event select are
set when using Intel, the pmu event filter will fail.

Also, because validation was not being done prior to the introduction
of masked events, only expect validation to fail when masked events
are used.  E.g. in the first test a filter event with all its bits set
is accepted by KVM_SET_PMU_EVENT_FILTER when flags = 0.
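
[Editor's illustration, not part of the patch: the bit positions a
masked event may legally use, per the layout added in patch 4.  An
all-ones entry necessarily sets bits outside this mask, so it draws
-EINVAL with KVM_PMU_EVENT_FLAG_MASKED_EVENTS but is accepted when
flags == 0.]

	/* valid masked-event bits; 35:32 are additionally rejected on Intel */
	uint64_t valid = 0x00000000000000FFULL	/* 7:0   event select (low)  */
		       | 0x000000000000FF00ULL	/* 15:8  umask match         */
		       | 0x0000000F00000000ULL	/* 35:32 event select (high) */
		       | 0x0080000000000000ULL	/* 55    exclude             */
		       | 0xFF00000000000000ULL;	/* 63:56 umask mask          */

	/* (~0ULL & ~valid) != 0  ->  KVM_SET_PMU_EVENT_FILTER gives -EINVAL */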
Signed-off-by: Aaron Lewis
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 36 +++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index bd7054a53981..0750e2fa7a38 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -442,6 +442,39 @@ static bool use_amd_pmu(void)
 		 is_zen3(entry->eax));
 }
 
+static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events,
+			   int nevents, uint32_t flags)
+{
+	struct kvm_pmu_event_filter *f;
+	int r;
+
+	f = create_pmu_event_filter(events, nevents, KVM_PMU_EVENT_ALLOW, flags);
+	r = __vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, f);
+	free(f);
+
+	return r;
+}
+
+static void test_filter_ioctl(struct kvm_vcpu *vcpu)
+{
+	uint64_t e = ~0ul;
+	int r;
+
+	/*
+	 * Unfortunately having invalid bits set in event data is expected to
+	 * pass when flags == 0 (bits other than eventsel+umask).
+	 */
+	r = run_filter_test(vcpu, &e, 1, 0);
+	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
+
+	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+	TEST_ASSERT(r != 0, "Invalid PMU Event Filter is expected to fail");
+
+	e = KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(0xff, 0xff, 0xff, 0xf);
+	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
+}
+
 int main(int argc, char *argv[])
 {
 	void (*guest_code)(void);
@@ -452,6 +485,7 @@ int main(int argc, char *argv[])
 	setbuf(stdout, NULL);
 
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_PMU_EVENT_FILTER));
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_PMU_EVENT_MASKED_EVENTS));
 
 	TEST_REQUIRE(use_intel_pmu() || use_amd_pmu());
 	guest_code = use_intel_pmu() ? intel_guest_code : amd_guest_code;
@@ -472,6 +506,8 @@ int main(int argc, char *argv[])
 	test_not_member_deny_list(vcpu);
 	test_not_member_allow_list(vcpu);
 
+	test_filter_ioctl(vcpu);
+
 	kvm_vm_free(vm);
 
 	test_pmu_config_disable(guest_code);
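
[Editor's illustration, not part of the series: how the include/exclude
entries used by the next patch compose.  With the uapi macros from
patch 4, "allow every unit mask of event 0xD0 except 0x83" is two
masked entries:]

	uint64_t events[] = {
		/* include: (umask & 0x00) == 0x00 holds for every unit mask */
		KVM_PMU_ENCODE_MASKED_ENTRY(0xD0, 0x00, 0x00, false),
		/* exclude: (umask & 0xFF) == 0x83 carves out exactly 0x83 */
		KVM_PMU_ENCODE_MASKED_ENTRY(0xD0, 0xFF, 0x83, true),
	};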
From patchwork Fri Oct 21 20:51:05 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13015453
Date: Fri, 21 Oct 2022 20:51:05 +0000
In-Reply-To: <20221021205105.1621014-1-aaronlewis@google.com>
References: <20221021205105.1621014-1-aaronlewis@google.com>
Message-ID: <20221021205105.1621014-8-aaronlewis@google.com>
Subject: [PATCH v6 7/7] selftests: kvm/x86: Test masked events
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis
X-Mailing-List: kvm@vger.kernel.org

Add testing to show that a pmu event can be filtered with a generalized
match on its unit mask.

These tests set up test cases to demonstrate various ways of filtering
a pmu event that has multiple unit mask values.  Each test does this by
setting up the filter in KVM with the masked events provided, then
enabling three pmu counters in the guest.  The test then verifies that
the pmu counters agree with which counters should be counting and which
counters should be filtered, for both a sparse filter list and a dense
filter list.

Signed-off-by: Aaron Lewis
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 344 +++++++++++++++++-
 1 file changed, 342 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 0750e2fa7a38..926c449aac78 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -442,6 +442,337 @@ static bool use_amd_pmu(void)
 		 is_zen3(entry->eax));
 }
 
+/*
+ * "MEM_INST_RETIRED.ALL_LOADS", "MEM_INST_RETIRED.ALL_STORES", and
+ * "MEM_INST_RETIRED.ANY" from https://perfmon-events.intel.com/
+ * supported on Intel Xeon processors:
+ *  - Sapphire Rapids, Ice Lake, Cascade Lake, Skylake.
+ */
+#define MEM_INST_RETIRED		0xD0
+#define MEM_INST_RETIRED_LOAD		EVENT(MEM_INST_RETIRED, 0x81)
+#define MEM_INST_RETIRED_STORE		EVENT(MEM_INST_RETIRED, 0x82)
+#define MEM_INST_RETIRED_LOAD_STORE	EVENT(MEM_INST_RETIRED, 0x83)
+
+static bool supports_event_mem_inst_retired(void)
+{
+	uint32_t eax, ebx, ecx, edx;
+
+	cpuid(1, &eax, &ebx, &ecx, &edx);
+	if (x86_family(eax) == 0x6) {
+		switch (x86_model(eax)) {
+		/* Sapphire Rapids */
+		case 0x8F:
+		/* Ice Lake */
+		case 0x6A:
+		/* Skylake */
+		/* Cascade Lake */
+		case 0x55:
+			return true;
+		}
+	}
+
+	return false;
+}
+
+static int num_gp_counters(void)
+{
+	const struct kvm_cpuid_entry2 *entry;
+
+	entry = kvm_get_supported_cpuid_entry(0xa);
+	union cpuid10_eax eax = { .full = entry->eax };
+
+	return eax.split.num_counters;
+}
+
+/*
+ * "LS Dispatch", from Processor Programming Reference
+ * (PPR) for AMD Family 17h Model 01h, Revision B1 Processors,
+ * Preliminary Processor Programming Reference (PPR) for AMD Family
+ * 17h Model 31h, Revision B0 Processors, and Preliminary Processor
+ * Programming Reference (PPR) for AMD Family 19h Model 01h, Revision
+ * B1 Processors Volume 1 of 2.
+ */
+#define LS_DISPATCH		0x29
+#define LS_DISPATCH_LOAD	EVENT(LS_DISPATCH, BIT(0))
+#define LS_DISPATCH_STORE	EVENT(LS_DISPATCH, BIT(1))
+#define LS_DISPATCH_LOAD_STORE	EVENT(LS_DISPATCH, BIT(2))
+
+#define INCLUDE_MASKED_ENTRY(event_select, mask, match) \
+	KVM_PMU_ENCODE_MASKED_ENTRY(event_select, mask, match, false)
+#define EXCLUDE_MASKED_ENTRY(event_select, mask, match) \
+	KVM_PMU_ENCODE_MASKED_ENTRY(event_select, mask, match, true)
+
+struct perf_counter {
+	union {
+		uint64_t raw;
+		struct {
+			uint64_t loads:22;
+			uint64_t stores:22;
+			uint64_t loads_stores:20;
+		};
+	};
+};
+
+static uint64_t masked_events_guest_test(uint32_t msr_base)
+{
+	uint64_t ld0, ld1, st0, st1, ls0, ls1;
+	struct perf_counter c;
+	int val;
+
+	ld0 = rdmsr(msr_base + 0);
+	st0 = rdmsr(msr_base + 1);
+	ls0 = rdmsr(msr_base + 2);
+
+	__asm__ __volatile__("movl $0, %[v];"
+			     "movl %[v], %%eax;"
+			     "incl %[v];"
+			     : [v]"+m"(val) :: "eax");
+
+	ld1 = rdmsr(msr_base + 0);
+	st1 = rdmsr(msr_base + 1);
+	ls1 = rdmsr(msr_base + 2);
+
+	c.loads = ld1 - ld0;
+	c.stores = st1 - st0;
+	c.loads_stores = ls1 - ls0;
+
+	return c.raw;
+}
+
+static void intel_masked_events_guest_code(void)
+{
+	uint64_t r;
+
+	for (;;) {
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+		wrmsr(MSR_P6_EVNTSEL0 + 0, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | MEM_INST_RETIRED_LOAD);
+		wrmsr(MSR_P6_EVNTSEL0 + 1, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | MEM_INST_RETIRED_STORE);
+		wrmsr(MSR_P6_EVNTSEL0 + 2, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | MEM_INST_RETIRED_LOAD_STORE);
+
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x7);
+
+		r = masked_events_guest_test(MSR_IA32_PMC0);
+
+		GUEST_SYNC(r);
+	}
+}
+
+static void amd_masked_events_guest_code(void)
+{
+	uint64_t r;
+
+	for (;;) {
+		wrmsr(MSR_K7_EVNTSEL0, 0);
+		wrmsr(MSR_K7_EVNTSEL1, 0);
+		wrmsr(MSR_K7_EVNTSEL2, 0);
+
+		wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | LS_DISPATCH_LOAD);
+		wrmsr(MSR_K7_EVNTSEL1, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | LS_DISPATCH_STORE);
+		wrmsr(MSR_K7_EVNTSEL2, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | LS_DISPATCH_LOAD_STORE);
+
+		r = masked_events_guest_test(MSR_K7_PERFCTR0);
+
+		GUEST_SYNC(r);
+	}
+}
+
+static struct perf_counter run_masked_events_test(struct kvm_vcpu *vcpu,
+						  const uint64_t masked_events[],
+						  const int nmasked_events)
+{
+	struct kvm_pmu_event_filter *f;
+	struct perf_counter r;
+
+	f = create_pmu_event_filter(masked_events, nmasked_events,
+				    KVM_PMU_EVENT_ALLOW,
+				    KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+	r.raw = test_with_filter(vcpu, f);
+	free(f);
+
+	return r;
+}
+
+/* Matches KVM_PMU_EVENT_FILTER_MAX_EVENTS in pmu.c */
+#define MAX_FILTER_EVENTS	300
+#define MAX_TEST_EVENTS		10
+
+#define ALLOW_LOADS		BIT(0)
+#define ALLOW_STORES		BIT(1)
+#define ALLOW_LOADS_STORES	BIT(2)
+
+struct masked_events_test {
+	uint64_t intel_events[MAX_TEST_EVENTS];
+	uint64_t intel_event_end;
+	uint64_t amd_events[MAX_TEST_EVENTS];
+	uint64_t amd_event_end;
+	const char *msg;
+	uint32_t flags;
+};
+
+/*
+ * These are the test cases for the masked events tests.
+ *
+ * For each test, the guest enables 3 PMU counters (loads, stores,
+ * loads + stores).  The filter is then set in KVM with the masked events
+ * provided.  The test then verifies that the counters agree with which
+ * ones should be counting and which ones should be filtered.
+ */
+const struct masked_events_test test_cases[] = {
+	{
+		.intel_events = {
+			INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x81),
+		},
+		.amd_events = {
+			INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(0)),
+		},
+		.msg = "Only allow loads.",
+		.flags = ALLOW_LOADS,
+	}, {
+		.intel_events = {
+			INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x82),
+		},
+		.amd_events = {
+			INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(1)),
+		},
+		.msg = "Only allow stores.",
+		.flags = ALLOW_STORES,
+	}, {
+		.intel_events = {
+			INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x83),
+		},
+		.amd_events = {
+			INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(2)),
+		},
+		.msg = "Only allow loads + stores.",
+		.flags = ALLOW_LOADS_STORES,
+	}, {
+		.intel_events = {
+			INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0x7C, 0),
+			EXCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x83),
+		},
+		.amd_events = {
+			INCLUDE_MASKED_ENTRY(LS_DISPATCH, ~(BIT(0) | BIT(1)), 0),
+		},
+		.msg = "Only allow loads and stores.",
+		.flags = ALLOW_LOADS | ALLOW_STORES,
+	}, {
+		.intel_events = {
+			INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0x7C, 0),
+			EXCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x82),
+		},
+		.amd_events = {
+			INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xF8, 0),
+			EXCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(1)),
+		},
+		.msg = "Only allow loads and loads + stores.",
+		.flags = ALLOW_LOADS | ALLOW_LOADS_STORES
+	}, {
+		.intel_events = {
+			INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFE, 0x82),
+		},
+		.amd_events = {
+			INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xF8, 0),
+			EXCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(0)),
+		},
+		.msg = "Only allow stores and loads + stores.",
+		.flags = ALLOW_STORES | ALLOW_LOADS_STORES
+	}, {
+		.intel_events = {
+			INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0x7C, 0),
+		},
+		.amd_events = {
+			INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xF8, 0),
+		},
+		.msg = "Only allow loads, stores, and loads + stores.",
+		.flags = ALLOW_LOADS | ALLOW_STORES | ALLOW_LOADS_STORES
+	},
+};
+
+static int append_test_events(const struct masked_events_test *test,
+			      uint64_t *events, int nevents)
+{
+	const uint64_t *evts;
+	int i;
+
+	evts = use_intel_pmu() ? test->intel_events : test->amd_events;
+	for (i = 0; i < MAX_TEST_EVENTS; i++) {
+		if (evts[i] == 0)
+			break;
+
+		events[nevents + i] = evts[i];
+	}
+
+	return nevents + i;
+}
+
+static bool bool_eq(bool a, bool b)
+{
+	return a == b;
+}
+
+static void run_masked_events_tests(struct kvm_vcpu *vcpu, uint64_t *events,
+				    int nevents)
+{
+	int ntests = ARRAY_SIZE(test_cases);
+	struct perf_counter c;
+	int i, n;
+
+	for (i = 0; i < ntests; i++) {
+		const struct masked_events_test *test = &test_cases[i];
+
+		/* Do any test case events overflow MAX_TEST_EVENTS? */
+		assert(test->intel_event_end == 0);
+		assert(test->amd_event_end == 0);
+
+		n = append_test_events(test, events, nevents);
+
+		c = run_masked_events_test(vcpu, events, n);
+		TEST_ASSERT(bool_eq(c.loads, test->flags & ALLOW_LOADS) &&
+			    bool_eq(c.stores, test->flags & ALLOW_STORES) &&
+			    bool_eq(c.loads_stores,
+				    test->flags & ALLOW_LOADS_STORES),
+			    "%s loads: %u, stores: %u, loads + stores: %u",
+			    test->msg, c.loads, c.stores, c.loads_stores);
+	}
+}
+
+static void add_dummy_events(uint64_t *events, int nevents)
+{
+	int i;
+
+	for (i = 0; i < nevents; i++) {
+		int event_select = i % 0xFF;
+		bool exclude = ((i % 4) == 0);
+
+		if (event_select == MEM_INST_RETIRED ||
+		    event_select == LS_DISPATCH)
+			event_select++;
+
+		events[i] = KVM_PMU_ENCODE_MASKED_ENTRY(event_select, 0,
+							0, exclude);
+	}
+}
+
+static void test_masked_events(struct kvm_vcpu *vcpu)
+{
+	int nevents = MAX_FILTER_EVENTS - MAX_TEST_EVENTS;
+	uint64_t events[MAX_FILTER_EVENTS];
+
+	/* Run the test cases against a sparse PMU event filter. */
+	run_masked_events_tests(vcpu, events, 0);
+
+	/* Run the test cases against a dense PMU event filter. */
+	add_dummy_events(events, MAX_FILTER_EVENTS);
+	run_masked_events_tests(vcpu, events, nevents);
+}
+
 static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events,
 			   int nevents, uint32_t flags)
 {
@@ -470,7 +801,7 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
 	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
 	TEST_ASSERT(r != 0, "Invalid PMU Event Filter is expected to fail");
 
-	e = KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(0xff, 0xff, 0xff, 0xf);
+	e = KVM_PMU_ENCODE_MASKED_ENTRY(0xff, 0xff, 0xff, 0xf);
 	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
 	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
 }
@@ -478,7 +809,7 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
 int main(int argc, char *argv[])
 {
 	void (*guest_code)(void);
-	struct kvm_vcpu *vcpu;
+	struct kvm_vcpu *vcpu, *vcpu2 = NULL;
 	struct kvm_vm *vm;
 
 	/* Tell stdout not to buffer its content */
@@ -506,6 +837,15 @@ int main(int argc, char *argv[])
 	test_not_member_deny_list(vcpu);
 	test_not_member_allow_list(vcpu);
 
+	if (use_intel_pmu() &&
+	    supports_event_mem_inst_retired() &&
+	    num_gp_counters() >= 3)
+		vcpu2 = vm_vcpu_add(vm, 2, intel_masked_events_guest_code);
+	else if (use_amd_pmu())
+		vcpu2 = vm_vcpu_add(vm, 2, amd_masked_events_guest_code);
+
+	if (vcpu2)
+		test_masked_events(vcpu2);
+
 	test_filter_ioctl(vcpu);
 
 	kvm_vm_free(vm);
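
For reference, the include/exclude semantics that the masked-event test
cases above rely on can be modeled in a few lines of user-space C. This
is a minimal sketch, not KVM's implementation: the struct layout and the
helper names (masked_event, entry_matches, allow_event) are invented for
illustration, and only the field order of KVM_PMU_ENCODE_MASKED_ENTRY
(event_select, mask, match, exclude) and the matching behavior the tests
exercise are taken from the patch above.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical model of one masked filter entry (names are illustrative). */
struct masked_event {
	uint16_t event_select;	/* event select this entry applies to */
	uint8_t mask;		/* which unit-mask bits to compare */
	uint8_t match;		/* required value of the masked bits */
	bool exclude;		/* true: veto an otherwise-included event */
};

/*
 * An entry matches when the event selects are equal and the unit-mask
 * bits selected by 'mask' equal 'match'.
 */
static bool entry_matches(const struct masked_event *e,
			  uint16_t event_select, uint8_t unit_mask)
{
	return e->event_select == event_select &&
	       (unit_mask & e->mask) == e->match;
}

/*
 * With a KVM_PMU_EVENT_ALLOW filter, an event is permitted if it matches
 * an include entry and matches no exclude entry.
 */
static bool allow_event(const struct masked_event *f, int nevents,
			uint16_t event_select, uint8_t unit_mask)
{
	bool included = false;

	for (int i = 0; i < nevents; i++) {
		if (!entry_matches(&f[i], event_select, unit_mask))
			continue;
		if (f[i].exclude)
			return false;
		included = true;
	}

	return included;
}

int main(void)
{
	/*
	 * "Only allow loads and stores" from the Intel test cases above:
	 * the include entry (mask 0x7C, match 0x00) covers unit masks
	 * 0x81, 0x82, and 0x83, since bits 2-6 are clear in all three;
	 * the exclude entry then carves loads + stores (0x83) back out.
	 */
	const struct masked_event f[] = {
		{ 0xD0, 0x7C, 0x00, false },	/* like INCLUDE_MASKED_ENTRY */
		{ 0xD0, 0xFF, 0x83, true },	/* like EXCLUDE_MASKED_ENTRY */
	};

	printf("loads (0x81): %d\n", allow_event(f, 2, 0xD0, 0x81));	/* 1 */
	printf("stores (0x82): %d\n", allow_event(f, 2, 0xD0, 0x82));	/* 1 */
	printf("l+s (0x83): %d\n", allow_event(f, 2, 0xD0, 0x83));	/* 0 */

	return 0;
}

Run against that test case, the model reports loads and stores as
allowed and loads + stores as filtered, matching the expected
ALLOW_LOADS | ALLOW_STORES flags, and it shows why one masked include
entry can stand in for many exact event select + unit mask pairs.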